Chapter 59. Managing replication topology
Chapter 59. Managing replication topology This chapter describes how to manage replication between servers in an Identity Management (IdM) domain. Additional resources Planning the replica topology Uninstalling an IdM server Failover, load-balancing, and high-availability in IdM 59.1. Explaining replication agreements, topology suffixes and topology segments When you create a replica, Identity Management (IdM) creates a replication agreement between the initial server and the replica. The data that is replicated is then stored in topology suffixes, and when two replicas have a replication agreement between their suffixes, the suffixes form a topology segment. These concepts are explained in more detail in the following sections: Replication agreements Topology suffixes Topology segments 59.1.1. Replication agreements between IdM replicas When an administrator creates a replica based on an existing server, Identity Management (IdM) creates a replication agreement between the initial server and the replica. The replication agreement ensures that the data and configuration are continuously replicated between the two servers. IdM uses multiple read/write replica replication . In this configuration, all replicas joined in a replication agreement receive and provide updates, and are therefore considered suppliers and consumers. Replication agreements are always bilateral. Figure 59.1. Server and replica agreements IdM uses two types of replication agreements: Domain replication agreements replicate the identity information. Certificate replication agreements replicate the certificate information. Both replication channels are independent. Two servers can have one or both types of replication agreements configured between them. For example, when server A and server B have only a domain replication agreement configured, only identity information is replicated between them, not the certificate information. 59.1.2. Topology suffixes Topology suffixes store the data that is replicated. IdM supports two types of topology suffixes: domain and ca . Each suffix represents a separate server, a separate replication topology. When a replication agreement is configured, it joins two topology suffixes of the same type on two different servers. The domain suffix: dc=example,dc=com The domain suffix contains all domain-related data. When two replicas have a replication agreement between their domain suffixes, they share directory data, such as users, groups, and policies. The ca suffix: o=ipaca The ca suffix contains data for the Certificate System component. It is only present on servers with a certificate authority (CA) installed. When two replicas have a replication agreement between their ca suffixes, they share certificate data. Figure 59.2. Topology suffixes An initial topology replication agreement is set up between two servers by the ipa-replica-install script when installing a new replica. Example 59.1. Viewing topology suffixes The ipa topologysuffix-find command displays a list of topology suffixes: 59.1.3. Topology segments When two replicas have a replication agreement between their suffixes, the suffixes form a topology segment . Each topology segment consists of a left node and a right node . The nodes represent the servers joined in the replication agreement. Topology segments in IdM are always bidirectional. Each segment represents two replication agreements: from server A to server B, and from server B to server A. The data is therefore replicated in both directions. Figure 59.3. 
Topology segments Example 59.2. Viewing topology segments The ipa topologysegment-find command shows the current topology segments configured for the domain or CA suffixes. For example, for the domain suffix: In this example, domain-related data is only replicated between two servers: server1.example.com and server2.example.com . To display details for a particular segment only, use the ipa topologysegment-show command: 59.2. Using the topology graph to manage replication topology The topology graph in the web UI shows the relationships between the servers in the domain. Using the Web UI, you can manipulate and transform the representation of the topology. Accessing the topology graph To access the topology graph: Select IPA Server Topology Topology Graph . If you make any changes to the topology that are not immediately reflected in the graph, click Refresh . Interpreting the topology graph Servers joined in a domain replication agreement are connected by an orange arrow. Servers joined in a CA replication agreement are connected by a blue arrow. Topology graph example: recommended topology The recommended topology example below shows one of the possible recommended topologies for four servers: each server is connected to at least two other servers, and more than one server is a CA server. Figure 59.4. Recommended topology example Topology graph example: discouraged topology In the discouraged topology example below, server1 is a single point of failure. All the other servers have replication agreements with this server, but not with any of the other servers. Therefore, if server1 fails, all the other servers will become isolated. Avoid creating topologies like this. Figure 59.5. Discouraged topology example: Single Point of Failure Customizing the topology view You can move individual topology nodes by holding and dragging the mouse: Figure 59.6. Moving topology graph nodes You can zoom in and zoom out the topology graph using the mouse wheel: Figure 59.7. Zooming the topology graph You can move the canvas of the topology graph by holding the left mouse button: Figure 59.8. Moving the topology graph canvas 59.3. Setting up replication between two servers using the Web UI Using the Identity Management (IdM) Web UI, you can choose two servers and create a new replication agreement between them. Prerequisites You are logged in as an IdM administrator. Procedure In the topology graph, hover your mouse over one of the server nodes. Figure 59.9. Domain or CA options Click on the domain or the ca part of the circle depending on what type of topology segment you want to create. A new arrow representing the new replication agreement appears under your mouse pointer. Move your mouse to the other server node, and click on it. Figure 59.10. Creating a new segment In the Add topology segment window, click Add to confirm the properties of the new segment. The new topology segment between the two servers joins them in a replication agreement. The topology graph now shows the updated replication topology: Figure 59.11. New segment created 59.4. Stopping replication between two servers using the Web UI Using the Identity Management (IdM) Web UI, you can remove a replication agreement from servers. Prerequisites You are logged in as an IdM administrator. Procedure Click on an arrow representing the replication agreement you want to remove. This highlights the arrow. Figure 59.12. Topology segment highlighted Click Delete . In the Confirmation window, click OK . 
IdM removes the topology segment between the two servers, which deletes their replication agreement. The topology graph now shows the updated replication topology: Figure 59.13. Topology segment deleted 59.5. Setting up replication between two servers using the CLI You can configure replication agreements between two servers using the ipa topologysegment-add command. Prerequisites You have the IdM administrator credentials. Procedure Create a topology segment for the two servers. When prompted, provide: The required topology suffix: domain or ca The left node and the right node, representing the two servers [Optional] A custom name for the segment For example: Adding the new segment joins the servers in a replication agreement. Verification Verify that the new segment is configured: 59.6. Stopping replication between two servers using the CLI You can terminate replication agreements from the command line using the ipa topologysegment-del command. Prerequisites You have the IdM administrator credentials. Procedure [Optional] If you do not know the name of the specific replication segment that you want to remove, display all available segments using the ipa topologysegment-find command. When prompted, provide the required topology suffix: domain or ca . For example: Locate the required segment in the output. Remove the topology segment joining the two servers: Deleting the segment removes the replication agreement. Verification Verify that the segment is no longer listed: 59.7. Removing server from topology using the Web UI You can use the Identity Management (IdM) web interface to remove a server from the topology. This action does not uninstall the server components from the host. Prerequisites You are logged in as an IdM administrator. The server you want to remove is not the only server connecting other servers with the rest of the topology; this would cause the other servers to become isolated, which is not allowed. The server you want to remove is not your last CA or DNS server. Warning Removing a server is an irreversible action. If you remove a server, the only way to introduce it back into the topology is to install a new replica on the machine. Procedure Select IPA Server Topology IPA Servers . Click on the name of the server you want to delete. Figure 59.14. Selecting a server Click Delete Server . Additional resources Uninstalling an IdM server 59.8. Removing obsolete RUV records If you remove a server from the IdM topology without properly removing its replication agreements, obsolete replica update vector (RUV) records will remain on one or more remaining servers in the topology. This can happen, for example, due to automation. These servers will then expect to receive updates from the now removed server. In this case, you need to clean the obsolete RUV records from the remaining servers. Prerequisites You have the IdM administrator credentials. You know which replicas are corrupted or have been improperly removed. Procedure List the details about RUVs using the ipa-replica-manage list-ruv command. The command displays the replica IDs: Important The ipa-replica-manage list-ruv command lists ALL replicas in the topology, not only the malfunctioning or improperly removed ones. Remove obsolete RUVs associated with a specified replica using the ipa-replica-manage clean-ruv command. Repeat the command for every replica ID with obsolete RUVs. 
For example, if you know server1.example.com and server2.example.com are the malfunctioning or improperly removed replicas: Warning Proceed with extreme caution when using ipa-replica-manage clean-ruv . Running the command against a valid replica ID will corrupt all the data associated with that replica in the replication database. If this happens, re-initialize the replica from another replica using the ipa-replica-manage re-initialize --from server1.example.com command. Verification Run ipa-replica-manage list-ruv again. If the command no longer displays any corrupt RUVs, the records have been successfully cleaned. If the command still displays corrupt RUVs, clear them manually using the cleanallruv task entry shown in the command listing below: 59.9. Viewing available server roles in the IdM topology using the IdM Web UI Based on the services installed on an IdM server, it can perform various server roles . For example: CA server DNS server Key recovery authority (KRA) server Procedure For a complete list of the supported server roles, see IPA Server Topology Server Roles . Note Role status absent means that no server in the topology is performing the role. Role status enabled means that one or more servers in the topology are performing the role. Figure 59.15. Server roles in the web UI 59.10. Viewing available server roles in the IdM topology using the IdM CLI Based on the services installed on an IdM server, it can perform various server roles . For example: CA server DNS server Key recovery authority (KRA) server Procedure To display all CA servers in the topology and the current CA renewal server: Alternatively, to display a list of roles enabled on a particular server, for example server.example.com : Alternatively, use the ipa server-find --servrole command to search for all servers with a particular server role enabled. For example, to search for all CA servers: 59.11. Promoting a replica to a CA renewal server and CRL publisher server If your IdM deployment uses an embedded certificate authority (CA), one of the IdM CA servers acts as the CA renewal server, a server that manages the renewal of CA subsystem certificates. One of the IdM CA servers also acts as the IdM CRL publisher server, a server that generates certificate revocation lists. By default, the CA renewal server and CRL publisher server roles are installed on the first server on which the system administrator installed the CA role using the ipa-server-install or ipa-ca-install command. You can, however, transfer either of the two roles to any other IdM server on which the CA role is enabled. Prerequisites You have the IdM administrator credentials. Procedure Change the current CA renewal server. Configure a replica to generate CRLs. A command-line sketch of both steps is provided below. 59.12. Demoting or promoting hidden replicas After a replica has been installed, you can configure whether the replica is hidden or visible. For details about hidden replicas, see The hidden replica mode . Prerequisites Ensure that the replica is not the DNSSEC key master. If it is, move the service to another replica before making this replica hidden. Ensure that the replica is not a CA renewal server. If it is, move the service to another replica before making this replica hidden. For details, see Changing and resetting IdM CA renewal server . Procedure To hide a replica: To make a replica visible again: To view a list of all the hidden replicas in your topology: If all of your replicas are enabled, the command output does not mention hidden replicas.
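The following is a minimal command-line sketch of the two steps in section 59.11. It assumes the ipa config-mod --ca-renewal-master-server option and the ipa-crlgen-manage utility shipped with current IdM releases, and the host names are placeholders; verify the exact options against your installed version before running them.

# Transfer the CA renewal server role to another CA server (run on any IdM server)
$ kinit admin
$ ipa config-mod --ca-renewal-master-server server2.example.com

# Confirm the change
$ ipa config-show | grep 'CA renewal'
  IPA CA renewal master: server2.example.com

# Move CRL generation: enable it on the new CRL publisher, disable it on the old one
[root@server2 ~]# ipa-crlgen-manage enable
[root@server1 ~]# ipa-crlgen-manage disable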
[ "ipa topologysuffix-find --------------------------- 2 topology suffixes matched --------------------------- Suffix name: ca Managed LDAP suffix DN: o=ipaca Suffix name: domain Managed LDAP suffix DN: dc=example,dc=com ---------------------------- Number of entries returned 2 ----------------------------", "ipa topologysegment-find Suffix name: domain ----------------- 1 segment matched ----------------- Segment name: server1.example.com-to-server2.example.com Left node: server1.example.com Right node: server2.example.com Connectivity: both ---------------------------- Number of entries returned 1 ----------------------------", "ipa topologysegment-show Suffix name: domain Segment name: server1.example.com-to-server2.example.com Segment name: server1.example.com-to-server2.example.com Left node: server1.example.com Right node: server2.example.com Connectivity: both", "ipa topologysegment-add Suffix name: domain Left node: server1.example.com Right node: server2.example.com Segment name [server1.example.com-to-server2.example.com]: new_segment --------------------------- Added segment \"new_segment\" --------------------------- Segment name: new_segment Left node: server1.example.com Right node: server2.example.com Connectivity: both", "ipa topologysegment-show Suffix name: domain Segment name: new_segment Segment name: new_segment Left node: server1.example.com Right node: server2.example.com Connectivity: both", "ipa topologysegment-find Suffix name: domain ------------------ 8 segments matched ------------------ Segment name: new_segment Left node: server1.example.com Right node: server2.example.com Connectivity: both ---------------------------- Number of entries returned 8 ----------------------------", "ipa topologysegment-del Suffix name: domain Segment name: new_segment ----------------------------- Deleted segment \"new_segment\" -----------------------------", "ipa topologysegment-find Suffix name: domain ------------------ 7 segments matched ------------------ Segment name: server2.example.com-to-server3.example.com Left node: server2.example.com Right node: server3.example.com Connectivity: both ---------------------------- Number of entries returned 7 ----------------------------", "ipa-replica-manage list-ruv server1.example.com:389: 6 server2.example.com:389: 5 server3.example.com:389: 4 server4.example.com:389: 12", "ipa-replica-manage clean-ruv 6 ipa-replica-manage clean-ruv 5", "dn: cn=clean replica_ID, cn=cleanallruv, cn=tasks, cn=config objectclass: extensibleObject replica-base-dn: dc=example,dc=com replica-id: replica_ID replica-force-cleaning: no cn: clean replica_ID", "ipa config-show IPA masters: server1.example.com, server2.example.com, server3.example.com IPA CA servers: server1.example.com, server2.example.com IPA CA renewal master: server1.example.com", "ipa server-show Server name: server.example.com Enabled server roles: CA server, DNS server, KRA server", "ipa server-find --servrole \"CA server\" --------------------- 2 IPA servers matched --------------------- Server name: server1.example.com Server name: server2.example.com ---------------------------- Number of entries returned 2 ----------------------------", "ipa server-state replica.idm.example.com --state=hidden", "ipa server-state replica.idm.example.com --state=enabled", "ipa config-show" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/8/html/configuring_and_managing_identity_management/assembly_managing-replication-topology_configuring-and-managing-idm
Chapter 19. Using the system health dashboard
Chapter 19. Using the system health dashboard The Red Hat Advanced Cluster Security for Kubernetes system health dashboard provides a single interface for viewing health related information about Red Hat Advanced Cluster Security for Kubernetes components. Note The system health dashboard is only available on Red Hat Advanced Cluster Security for Kubernetes 3.0.53 and newer. 19.1. System health dashboard details To access the health dashboard: In the RHACS portal, go to Platform Configuration System Health . The health dashboard organizes information in the following groups: Cluster Health - Shows the overall state of Red Hat Advanced Cluster Security for Kubernetes cluster. Vulnerability Definitions - Shows the last update time of vulnerability definitions. Image Integrations - Shows the health of all registries that you have integrated. Notifier Integrations - Shows the health of any notifiers (Slack, email, Jira, or other similar integrations) that you have integrated. Backup Integrations - Shows the health of any backup providers that you have integrated. The dashboard lists the following states for different components: Healthy - The component is functional. Degraded - The component is partially unhealthy. This state means the cluster is functional, but some components are unhealthy and require attention. Unhealthy - This component is not healthy and requires immediate attention. Uninitialized - The component has not yet reported back to Central to have its health assessed. An uninitialized state may sometimes require attention, but often components report back the health status after a few minutes or when the integration is used. Cluster health section The Cluster Overview shows information about your Red Hat Advanced Cluster Security for Kubernetes cluster health. It reports the health information about the following: Collector Status - It shows whether the Collector pod that Red Hat Advanced Cluster Security for Kubernetes uses is reporting healthy. Sensor Status - It shows whether the Sensor pod that Red Hat Advanced Cluster Security for Kubernetes uses is reporting healthy. Sensor Upgrade - It shows whether the Sensor is running the correct version when compared with Central. Credential Expiration - It shows if the credentials for Red Hat Advanced Cluster Security for Kubernetes are nearing expiration. Note Clusters in the Uninitialized state are not reported in the number of clusters secured by Red Hat Advanced Cluster Security for Kubernetes until they check in. Vulnerabilities definition section The Vulnerabilities Definition section shows the last time vulnerability definitions were updated and if the definitions are up to date. Integrations section There are 3 integration sections Image Integrations , Notifier Integrations , and Backup Integrations . Similar to the Cluster Health section, these sections list the number of unhealthy integrations if they exist. Otherwise, all integrations report as healthy. Note The Integrations section lists the healthy integrations as 0 if any of the following conditions are met: You have not integrated Red Hat Advanced Cluster Security for Kubernetes with any third-party tools. You have integrated with some tools, but disabled the integrations, or have not set up any policy violations. 19.2. Viewing product usage data RHACS provides product usage data for the number of secured Kubernetes nodes and CPU units for secured clusters based on metrics collected from RHACS sensors. 
This information can be useful to estimate RHACS consumption data for reporting. For more information on how CPU units are defined in Kubernetes, see CPU resource units . Note OpenShift Container Platform provides its own usage reports; this information is intended for use with self-managed Kubernetes systems. RHACS provides the following usage data in the web portal and API: Currently secured CPU units: The number of Kubernetes CPU units used by your RHACS secured clusters, as of the latest metrics collection. Currently secured node count: The number of Kubernetes nodes secured by RHACS, as of the latest metrics collection. Maximum secured CPU units: The maximum number of CPU units used by your RHACS secured clusters, as measured hourly and aggregated for the time period defined by the Start date and End date . Maximum secured node count: The maximum number of Kubernetes nodes secured by RHACS, as measured hourly and aggregated for the time period defined by the Start date and End date . CPU units observation date: The date on which the maximum secured CPU units data was collected. Node count observation date: The date on which the maximum secured node count data was collected. The sensors collect data every 5 minutes, so there can be a short delay in displaying the current data. To view historical data, you must configure the Start date and End date and download the data file. The date range is inclusive and depends on your time zone. The presented maximum values are computed based on hourly maximums for the requested period. The hourly maximums are available for download in CSV format. Note The data shown is not sent to Red Hat or displayed as Prometheus metrics. Procedure In the RHACS portal, go to Platform Configuration System Health . Click Show product usage . In the Start date and End date fields, choose the dates for which you want to display data. This range is inclusive and depends on your time zone. Optional: To download the detailed data, click Download CSV . You can also obtain this data by using the ProductUsageService API object. For more information, go to Help API reference in the RHACS portal. 19.3. Generating a diagnostic bundle by using the RHACS portal You can generate a diagnostic bundle by using the system health dashboard in the RHACS portal. Prerequisites To generate a diagnostic bundle, you need read permission for the Administration resource. Procedure In the RHACS portal, select Platform Configuration System Health . On the System Health view header, click Generate Diagnostic Bundle . For the Filter by clusters drop-down menu, select the clusters for which you want to generate the diagnostic data. For Filter by starting time , specify the date and time (in UTC format) from which you want to include the diagnostic data. Click Download Diagnostic Bundle . 19.3.1. Additional resources Generating a diagnostic bundle
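As a sketch of the API route mentioned above, the following curl calls query the ProductUsageService from a workstation. The endpoint paths, the date parameters, and the ROX_API_TOKEN and ROX_CENTRAL_ADDRESS variables are assumptions for illustration only; confirm the exact paths under Help API reference in your RHACS portal.

# Current secured node and CPU unit counts (path is an assumption; check the API reference)
$ curl -sk -H "Authorization: Bearer $ROX_API_TOKEN" \
    "https://$ROX_CENTRAL_ADDRESS/v1/product/usage/secured-units/current"

# Aggregated maximums for a date range, comparable to the CSV download in the portal
$ curl -sk -H "Authorization: Bearer $ROX_API_TOKEN" \
    "https://$ROX_CENTRAL_ADDRESS/v1/product/usage/secured-units/max?from=2024-01-01T00:00:00Z&to=2024-01-31T23:59:59Z"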
https://docs.redhat.com/en/documentation/red_hat_advanced_cluster_security_for_kubernetes/4.6/html/operating/use-system-health-dashboard
11.11. Aggregate Function Options
11.11. Aggregate Function Options Property Data Type or Allowed Values Description ANALYTIC 'TRUE'|'FALSE' Indicates the aggregate function must be windowed. Default: false. ALLOWS-ORDERBY 'TRUE'|'FALSE' Indicates the aggregate function supports an ORDER BY clause. Default: false. ALLOWS-DISTINCT 'TRUE'|'FALSE' Indicates the aggregate function supports the DISTINCT keyword. Default: false. DECOMPOSABLE 'TRUE'|'FALSE' Indicates the single-argument aggregate function can be decomposed as agg(agg(x)) over subsets of data. Default: false. USES-DISTINCT-ROWS 'TRUE'|'FALSE' Indicates the aggregate function effectively uses distinct rows rather than all rows. Default: false. Note that virtual functions defined using the Teiid procedure language cannot be aggregate functions. Note If you have defined a UDF (virtual) function without a Teiid procedure definition, then it must be accompanied by its implementation in Java. To configure the Java library as a dependency of the VDB, see Support for User-Defined Functions in Red Hat JBoss Data Virtualization Development Guide: Server Development .
https://docs.redhat.com/en/documentation/red_hat_jboss_data_virtualization/6.4/html/development_guide_volume_3_reference_material/aggregate_function_options
8.129. lsvpd
8.129. lsvpd 8.129.1. RHBA-2014:1442 - lsvpd bug fix and enhancement update Updated lsvpd packages that fix several bugs and add various enhancements are now available for Red Hat Enterprise Linux 6. The lsvpd packages provide a set of tools to gather and display the Vital Product Data (VPD) information about hardware components. This information can be used by higher-level serviceability tools. Note The lsvpd packages have been upgraded to upstream version 1.7.4, which provides a number of bug fixes and enhancements over the previous version. (BZ# 739121 ) This update also fixes the following bugs: Bug Fixes BZ# 868757 Previously, the output from the lscfg command contained duplicate entries for various hardware components. This bug has been fixed and lscfg no longer returns duplicate entries. BZ# 1088401 Previously, it was not possible to link code between the libsvpd and librtas libraries, because libsvpd is distributed under the GNU General Public License (GPL) whereas librtas is under the Common Public License (CPL). This update grants a special permission to link part of the code for libsvpd against the librtas library and distribute linked combinations which include both libraries. You must obey the GNU General Public License in all respects for all of the code used other than librtas. In addition, this update adds the following enhancement: Enhancement BZ# 1006855 This update adds support for Firmware Entitlement Checking on IBM PowerPC server systems. Users of lsvpd are advised to upgrade to these updated packages, which fix these bugs and add these enhancements.
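For context, a brief usage sketch of the tools these packages ship follows: vpdupdate rebuilds the VPD database, lsvpd lists the collected records, and lscfg prints hardware configuration details. The -vp flags are common verbose options and should be double-checked against the man pages on your system; run the commands as root on supported IBM Power hardware.

# vpdupdate
# lsvpd
# lscfg -vp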
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/6.6_technical_notes/lsvpd
Chapter 14. ImageTagMirrorSet [config.openshift.io/v1]
Chapter 14. ImageTagMirrorSet [config.openshift.io/v1] Description ImageTagMirrorSet holds cluster-wide information about how to handle registry mirror rules on using tag pull specification. When multiple policies are defined, the outcome of the behavior is defined on each field. Compatibility level 1: Stable within a major release for a minimum of 12 months or 3 minor releases (whichever is longer). Type object Required spec 14.1. Specification Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ObjectMeta Standard object's metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata spec object spec holds user settable values for configuration status object status contains the observed state of the resource. 14.1.1. .spec Description spec holds user settable values for configuration Type object Property Type Description imageTagMirrors array imageTagMirrors allows images referenced by image tags in pods to be pulled from alternative mirrored repository locations. The image pull specification provided to the pod will be compared to the source locations described in imageTagMirrors and the image may be pulled down from any of the mirrors in the list instead of the specified repository allowing administrators to choose a potentially faster mirror. To use mirrors to pull images using digest specification only, users should configure a list of mirrors using "ImageDigestMirrorSet" CRD. If the image pull specification matches the repository of "source" in multiple imagetagmirrorset objects, only the objects which define the most specific namespace match will be used. For example, if there are objects using quay.io/libpod and quay.io/libpod/busybox as the "source", only the objects using quay.io/libpod/busybox are going to apply for pull specification quay.io/libpod/busybox. Each "source" repository is treated independently; configurations for different "source" repositories don't interact. If the "mirrors" is not specified, the image will continue to be pulled from the specified repository in the pull spec. When multiple policies are defined for the same "source" repository, the sets of defined mirrors will be merged together, preserving the relative order of the mirrors, if possible. For example, if policy A has mirrors a, b, c and policy B has mirrors c, d, e , the mirrors will be used in the order a, b, c, d, e . If the orders of mirror entries conflict (e.g. a, b vs. b, a ) the configuration is not rejected but the resulting order is unspecified. Users who want to use a deterministic order of mirrors, should configure them into one list of mirrors using the expected order. imageTagMirrors[] object ImageTagMirrors holds cluster-wide information about how to handle mirrors in the registries config. 14.1.2. 
.spec.imageTagMirrors Description imageTagMirrors allows images referenced by image tags in pods to be pulled from alternative mirrored repository locations. The image pull specification provided to the pod will be compared to the source locations described in imageTagMirrors and the image may be pulled down from any of the mirrors in the list instead of the specified repository allowing administrators to choose a potentially faster mirror. To use mirrors to pull images using digest specification only, users should configure a list of mirrors using "ImageDigestMirrorSet" CRD. If the image pull specification matches the repository of "source" in multiple imagetagmirrorset objects, only the objects which define the most specific namespace match will be used. For example, if there are objects using quay.io/libpod and quay.io/libpod/busybox as the "source", only the objects using quay.io/libpod/busybox are going to apply for pull specification quay.io/libpod/busybox. Each "source" repository is treated independently; configurations for different "source" repositories don't interact. If the "mirrors" is not specified, the image will continue to be pulled from the specified repository in the pull spec. When multiple policies are defined for the same "source" repository, the sets of defined mirrors will be merged together, preserving the relative order of the mirrors, if possible. For example, if policy A has mirrors a, b, c and policy B has mirrors c, d, e , the mirrors will be used in the order a, b, c, d, e . If the orders of mirror entries conflict (e.g. a, b vs. b, a ) the configuration is not rejected but the resulting order is unspecified. Users who want to use a deterministic order of mirrors, should configure them into one list of mirrors using the expected order. Type array 14.1.3. .spec.imageTagMirrors[] Description ImageTagMirrors holds cluster-wide information about how to handle mirrors in the registries config. Type object Required source Property Type Description mirrorSourcePolicy string mirrorSourcePolicy defines the fallback policy if fails to pull image from the mirrors. If unset, the image will continue to be pulled from the repository in the pull spec. sourcePolicy is valid configuration only when one or more mirrors are in the mirror list. mirrors array (string) mirrors is zero or more locations that may also contain the same images. No mirror will be configured if not specified. Images can be pulled from these mirrors only if they are referenced by their tags. The mirrored location is obtained by replacing the part of the input reference that matches source by the mirrors entry, e.g. for registry.redhat.io/product/repo reference, a (source, mirror) pair *.redhat.io, mirror.local/redhat causes a mirror.local/redhat/product/repo repository to be used. Pulling images by tag can potentially yield different images, depending on which endpoint we pull from. Configuring a list of mirrors using "ImageDigestMirrorSet" CRD and forcing digest-pulls for mirrors avoids that issue. The order of mirrors in this list is treated as the user's desired priority, while source is by default considered lower priority than all mirrors. If no mirror is specified or all image pulls from the mirror list fail, the image will continue to be pulled from the repository in the pull spec unless explicitly prohibited by "mirrorSourcePolicy". 
Other cluster configuration, including (but not limited to) other imageTagMirrors objects, may impact the exact order mirrors are contacted in, or some mirrors may be contacted in parallel, so this should be considered a preference rather than a guarantee of ordering. "mirrors" uses one of the following formats: host[:port] host[:port]/namespace[/namespace...] host[:port]/namespace[/namespace...]/repo for more information about the format, see the document about the location field: https://github.com/containers/image/blob/main/docs/containers-registries.conf.5.md#choosing-a-registry-toml-table source string source matches the repository that users refer to, e.g. in image pull specifications. Setting source to a registry hostname, e.g. docker.io, quay.io, or registry.redhat.io, will match the image pull specification of the corresponding registry. "source" uses one of the following formats: host[:port] host[:port]/namespace[/namespace...] host[:port]/namespace[/namespace...]/repo [*.]host for more information about the format, see the document about the location field: https://github.com/containers/image/blob/main/docs/containers-registries.conf.5.md#choosing-a-registry-toml-table 14.1.4. .status Description status contains the observed state of the resource. Type object 14.2. API endpoints The following API endpoints are available: /apis/config.openshift.io/v1/imagetagmirrorsets DELETE : delete collection of ImageTagMirrorSet GET : list objects of kind ImageTagMirrorSet POST : create an ImageTagMirrorSet /apis/config.openshift.io/v1/imagetagmirrorsets/{name} DELETE : delete an ImageTagMirrorSet GET : read the specified ImageTagMirrorSet PATCH : partially update the specified ImageTagMirrorSet PUT : replace the specified ImageTagMirrorSet /apis/config.openshift.io/v1/imagetagmirrorsets/{name}/status GET : read status of the specified ImageTagMirrorSet PATCH : partially update status of the specified ImageTagMirrorSet PUT : replace status of the specified ImageTagMirrorSet 14.2.1. /apis/config.openshift.io/v1/imagetagmirrorsets Table 14.1. Global query parameters Parameter Type Description pretty string If 'true', then the output is pretty printed. HTTP method DELETE Description delete collection of ImageTagMirrorSet Table 14.2. Query parameters Parameter Type Description allowWatchBookmarks boolean allowWatchBookmarks requests watch events with type "BOOKMARK". Servers that do not implement bookmarks may ignore this flag and bookmarks are sent at the server's discretion. Clients should not assume bookmarks are returned at any specific interval, nor may they assume the server will send any BOOKMARK event during a session. If this is not a watch, this field is ignored. continue string The continue option should be set when retrieving more results from the server. Since this value is server defined, clients may only use the continue value from a query result with identical query parameters (except for the value of continue) and the server may reject a continue value it does not recognize. If the specified continue value is no longer valid whether due to expiration (generally five to fifteen minutes) or a configuration change on the server, the server will respond with a 410 ResourceExpired error together with a continue token. If the client needs a consistent list, it must restart their list without the continue field. 
Otherwise, the client may send another list request with the token received with the 410 error, the server will respond with a list starting from the key, but from the latest snapshot, which is inconsistent from the list results - objects that are created, modified, or deleted after the first list request will be included in the response, as long as their keys are after the " key". This field is not supported when watch is true. Clients may start a watch from the last resourceVersion value returned by the server and not miss any modifications. fieldSelector string A selector to restrict the list of returned objects by their fields. Defaults to everything. labelSelector string A selector to restrict the list of returned objects by their labels. Defaults to everything. limit integer limit is a maximum number of responses to return for a list call. If more items exist, the server will set the continue field on the list metadata to a value that can be used with the same initial query to retrieve the set of results. Setting a limit may return fewer than the requested amount of items (up to zero items) in the event all requested objects are filtered out and clients should only use the presence of the continue field to determine whether more results are available. Servers may choose not to support the limit argument and will return all of the available results. If limit is specified and the continue field is empty, clients may assume that no more results are available. This field is not supported if watch is true. The server guarantees that the objects returned when using continue will be identical to issuing a single list call without a limit - that is, no objects created, modified, or deleted after the first request is issued will be included in any subsequent continued requests. This is sometimes referred to as a consistent snapshot, and ensures that a client that is using limit to receive smaller chunks of a very large result can ensure they see all possible objects. If objects are updated during a chunked list the version of the object that was present at the time the first list result was calculated is returned. resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset resourceVersionMatch string resourceVersionMatch determines how resourceVersion is applied to list calls. It is highly recommended that resourceVersionMatch be set for list calls where resourceVersion is set See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset sendInitialEvents boolean sendInitialEvents=true may be set together with watch=true . In that case, the watch stream will begin with synthetic events to produce the current state of objects in the collection. Once all such events have been sent, a synthetic "Bookmark" event will be sent. The bookmark will report the ResourceVersion (RV) corresponding to the set of objects, and be marked with "k8s.io/initial-events-end": "true" annotation. Afterwards, the watch stream will proceed as usual, sending watch events corresponding to changes (subsequent to the RV) to objects watched. When sendInitialEvents option is set, we require resourceVersionMatch option to also be set. 
The semantics of the watch request are as follows: - resourceVersionMatch = NotOlderThan is interpreted as "data at least as new as the provided resourceVersion" and the bookmark event is sent when the state is synced to a resourceVersion at least as fresh as the one provided by the ListOptions. If resourceVersion is unset, this is interpreted as "consistent read" and the bookmark event is sent when the state is synced at least to the moment when the request started being processed. - resourceVersionMatch set to any other value or unset: an Invalid error is returned. Defaults to true if resourceVersion="" or resourceVersion="0" (for backward compatibility reasons) and to false otherwise. timeoutSeconds integer Timeout for the list/watch call. This limits the duration of the call, regardless of any activity or inactivity. watch boolean Watch for changes to the described resources and return them as a stream of add, update, and remove notifications. Specify resourceVersion. Table 14.3. HTTP responses HTTP code Response body 200 - OK Status schema 401 - Unauthorized Empty HTTP method GET Description list objects of kind ImageTagMirrorSet Table 14.4. Query parameters Parameter Type Description allowWatchBookmarks boolean allowWatchBookmarks requests watch events with type "BOOKMARK". Servers that do not implement bookmarks may ignore this flag and bookmarks are sent at the server's discretion. Clients should not assume bookmarks are returned at any specific interval, nor may they assume the server will send any BOOKMARK event during a session. If this is not a watch, this field is ignored. continue string The continue option should be set when retrieving more results from the server. Since this value is server defined, clients may only use the continue value from a query result with identical query parameters (except for the value of continue) and the server may reject a continue value it does not recognize. If the specified continue value is no longer valid whether due to expiration (generally five to fifteen minutes) or a configuration change on the server, the server will respond with a 410 ResourceExpired error together with a continue token. If the client needs a consistent list, it must restart their list without the continue field. 
Servers may choose not to support the limit argument and will return all of the available results. If limit is specified and the continue field is empty, clients may assume that no more results are available. This field is not supported if watch is true. The server guarantees that the objects returned when using continue will be identical to issuing a single list call without a limit - that is, no objects created, modified, or deleted after the first request is issued will be included in any subsequent continued requests. This is sometimes referred to as a consistent snapshot, and ensures that a client that is using limit to receive smaller chunks of a very large result can ensure they see all possible objects. If objects are updated during a chunked list the version of the object that was present at the time the first list result was calculated is returned. resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset resourceVersionMatch string resourceVersionMatch determines how resourceVersion is applied to list calls. It is highly recommended that resourceVersionMatch be set for list calls where resourceVersion is set See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset sendInitialEvents boolean sendInitialEvents=true may be set together with watch=true . In that case, the watch stream will begin with synthetic events to produce the current state of objects in the collection. Once all such events have been sent, a synthetic "Bookmark" event will be sent. The bookmark will report the ResourceVersion (RV) corresponding to the set of objects, and be marked with "k8s.io/initial-events-end": "true" annotation. Afterwards, the watch stream will proceed as usual, sending watch events corresponding to changes (subsequent to the RV) to objects watched. When sendInitialEvents option is set, we require resourceVersionMatch option to also be set. The semantic of the watch request is as following: - resourceVersionMatch = NotOlderThan is interpreted as "data at least as new as the provided resourceVersion`" and the bookmark event is send when the state is synced to a `resourceVersion at least as fresh as the one provided by the ListOptions. If resourceVersion is unset, this is interpreted as "consistent read" and the bookmark event is send when the state is synced at least to the moment when request started being processed. - resourceVersionMatch set to any other value or unset Invalid error is returned. Defaults to true if resourceVersion="" or resourceVersion="0" (for backward compatibility reasons) and to false otherwise. timeoutSeconds integer Timeout for the list/watch call. This limits the duration of the call, regardless of any activity or inactivity. watch boolean Watch for changes to the described resources and return them as a stream of add, update, and remove notifications. Specify resourceVersion. Table 14.5. HTTP responses HTTP code Reponse body 200 - OK ImageTagMirrorSetList schema 401 - Unauthorized Empty HTTP method POST Description create an ImageTagMirrorSet Table 14.6. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. 
Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 14.7. Body parameters Parameter Type Description body ImageTagMirrorSet schema Table 14.8. HTTP responses HTTP code Reponse body 200 - OK ImageTagMirrorSet schema 201 - Created ImageTagMirrorSet schema 202 - Accepted ImageTagMirrorSet schema 401 - Unauthorized Empty 14.2.2. /apis/config.openshift.io/v1/imagetagmirrorsets/{name} Table 14.9. Global path parameters Parameter Type Description name string name of the ImageTagMirrorSet Table 14.10. Global query parameters Parameter Type Description pretty string If 'true', then the output is pretty printed. HTTP method DELETE Description delete an ImageTagMirrorSet Table 14.11. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed gracePeriodSeconds integer The duration in seconds before the object should be deleted. Value must be non-negative integer. The value zero indicates delete immediately. If this value is nil, the default grace period for the specified type will be used. Defaults to a per object value if not specified. zero means delete immediately. orphanDependents boolean Deprecated: please use the PropagationPolicy, this field will be deprecated in 1.7. Should the dependent objects be orphaned. If true/false, the "orphan" finalizer will be added to/removed from the object's finalizers list. Either this field or PropagationPolicy may be set, but not both. propagationPolicy string Whether and how garbage collection will be performed. Either this field or OrphanDependents may be set, but not both. The default policy is decided by the existing finalizer set in the metadata.finalizers and the resource-specific default policy. Acceptable values are: 'Orphan' - orphan the dependents; 'Background' - allow the garbage collector to delete the dependents in the background; 'Foreground' - a cascading policy that deletes all dependents in the foreground. Table 14.12. Body parameters Parameter Type Description body DeleteOptions schema Table 14.13. 
HTTP responses HTTP code Reponse body 200 - OK Status schema 202 - Accepted Status schema 401 - Unauthorized Empty HTTP method GET Description read the specified ImageTagMirrorSet Table 14.14. Query parameters Parameter Type Description resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset Table 14.15. HTTP responses HTTP code Reponse body 200 - OK ImageTagMirrorSet schema 401 - Unauthorized Empty HTTP method PATCH Description partially update the specified ImageTagMirrorSet Table 14.16. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . This field is required for apply requests (application/apply-patch) but optional for non-apply patch types (JsonPatch, MergePatch, StrategicMergePatch). fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. force boolean Force is going to "force" Apply requests. It means user will re-acquire conflicting fields owned by other people. Force flag must be unset for non-apply patch requests. Table 14.17. Body parameters Parameter Type Description body Patch schema Table 14.18. HTTP responses HTTP code Reponse body 200 - OK ImageTagMirrorSet schema 401 - Unauthorized Empty HTTP method PUT Description replace the specified ImageTagMirrorSet Table 14.19. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. 
Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 14.20. Body parameters Parameter Type Description body ImageTagMirrorSet schema Table 14.21. HTTP responses HTTP code Reponse body 200 - OK ImageTagMirrorSet schema 201 - Created ImageTagMirrorSet schema 401 - Unauthorized Empty 14.2.3. /apis/config.openshift.io/v1/imagetagmirrorsets/{name}/status Table 14.22. Global path parameters Parameter Type Description name string name of the ImageTagMirrorSet Table 14.23. Global query parameters Parameter Type Description pretty string If 'true', then the output is pretty printed. HTTP method GET Description read status of the specified ImageTagMirrorSet Table 14.24. Query parameters Parameter Type Description resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset Table 14.25. HTTP responses HTTP code Reponse body 200 - OK ImageTagMirrorSet schema 401 - Unauthorized Empty HTTP method PATCH Description partially update status of the specified ImageTagMirrorSet Table 14.26. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . This field is required for apply requests (application/apply-patch) but optional for non-apply patch types (JsonPatch, MergePatch, StrategicMergePatch). fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. 
The error returned from the server will contain all unknown and duplicate fields encountered. force boolean Force is going to "force" Apply requests. It means the user will re-acquire conflicting fields owned by other people. Force flag must be unset for non-apply patch requests. Table 14.27. Body parameters Parameter Type Description body Patch schema Table 14.28. HTTP responses HTTP code Response body 200 - OK ImageTagMirrorSet schema 401 - Unauthorized Empty HTTP method PUT Description replace status of the specified ImageTagMirrorSet Table 14.29. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 14.30. Body parameters Parameter Type Description body ImageTagMirrorSet schema Table 14.31. HTTP responses HTTP code Response body 200 - OK ImageTagMirrorSet schema 201 - Created ImageTagMirrorSet schema 401 - Unauthorized Empty
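As an illustration of how these endpoints are typically exercised, the following sketch uses the oc client rather than raw HTTP calls. The resource name example-itms, the registry host names, and the minimal spec shown here are placeholders for your own configuration, not values defined by this API reference.
# Read the ImageTagMirrorSet and its status subresource
$ oc get imagetagmirrorset example-itms -o yaml
$ oc get --raw /apis/config.openshift.io/v1/imagetagmirrorsets/example-itms/status
# Partially update the object with a merge patch, validated server-side first with a dry run
$ oc patch imagetagmirrorset example-itms --type=merge --dry-run=server \
    -p '{"spec":{"imageTagMirrors":[{"source":"registry.example.com/team","mirrors":["mirror.example.com/team"]}]}}'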
https://docs.redhat.com/en/documentation/openshift_container_platform/4.14/html/config_apis/imagetagmirrorset-config-openshift-io-v1
Chapter 2. Performing rolling upgrades for Data Grid Server clusters
Chapter 2. Performing rolling upgrades for Data Grid Server clusters Perform rolling upgrades of your Data Grid clusters to change between versions without downtime or data loss and migrate data over the Hot Rod protocol. 2.1. Setting up target Data Grid clusters Create a cluster that uses the Data Grid version to which you plan to upgrade and then connect the source cluster to the target cluster using a remote cache store. Prerequisites Install Data Grid Server nodes with the desired version for your target cluster. Important Ensure the network properties for the target cluster do not overlap with those for the source cluster. You should specify unique names for the target and source clusters in the JGroups transport configuration. Depending on your environment you can also use different network interfaces and port offsets to separate the target and source clusters. Procedure Create a remote cache store configuration, in JSON format, that allows the target cluster to connect to the source cluster. Remote cache stores on the target cluster use the Hot Rod protocol to retrieve data from the source cluster. { "remote-store": { "cache": "myCache", "shared": true, "raw-values": true, "security": { "authentication": { "digest": { "username": "username", "password": "changeme", "realm": "default" } } }, "remote-server": [ { "host": "127.0.0.1", "port": 12222 } ] } } Use the Data Grid Command Line Interface (CLI) or REST API to add the remote cache store configuration to the target cluster so it can connect to the source cluster. CLI: Use the migrate cluster connect command on the target cluster. REST API: Invoke a POST request that includes the remote store configuration in the payload with the rolling-upgrade/source-connection method. Repeat the preceding step for each cache that you want to migrate. Switch clients over to the target cluster, so it starts handling all requests. Update client configuration with the location of the target cluster. Restart clients. Important If you need to migrate Indexed caches you must first migrate the internal ___protobuf_metadata cache so that the .proto schemas defined on the source cluster will also be present on the target cluster. Additional resources Remote cache store configuration schema 2.2. Synchronizing data to target clusters When you set up a target Data Grid cluster and connect it to a source cluster, the target cluster can handle client requests using a remote cache store and load data on demand. To completely migrate data to the target cluster, so you can decommission the source cluster, you can synchronize data. This operation reads data from the source cluster and writes it to the target cluster. Data migrates to all nodes in the target cluster in parallel, with each node receiving a subset of the data. You must perform the synchronization for each cache that you want to migrate to the target cluster. Prerequisites Set up a target cluster with the appropriate Data Grid version. Procedure Start synchronizing each cache that you want to migrate to the target cluster with the Data Grid Command Line Interface (CLI) or REST API. CLI: Use the migrate cluster synchronize command. REST API: Use the ?action=sync-data parameter with a POST request. When the operation completes, Data Grid responds with the total number of entries copied to the target cluster. Disconnect each node in the target cluster from the source cluster. CLI: Use the migrate cluster disconnect command. REST API: Invoke a DELETE request. 
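For illustration, the complete REST sequence described in the two procedures above could be driven with curl as in the following sketch. The server address localhost:11222, the admin/changeme credentials, the --digest option, and the remote-store.json file name are assumptions about a typical environment rather than values defined by this guide; adjust them to match your target cluster and its security configuration.
# Connect the target cluster to the source cluster, using the remote store definition as the payload
$ curl --digest -u admin:changeme -X POST -H "Content-Type: application/json" \
    --data @remote-store.json \
    http://localhost:11222/rest/v2/caches/myCache/rolling-upgrade/source-connection
# Synchronize all entries of the cache to the target cluster
$ curl --digest -u admin:changeme -X POST "http://localhost:11222/rest/v2/caches/myCache?action=sync-data"
# Disconnect the target cluster from the source cluster once synchronization completes
$ curl --digest -u admin:changeme -X DELETE http://localhost:11222/rest/v2/caches/myCache/rolling-upgrade/source-connection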
Next steps After you synchronize all data from the source cluster, the rolling upgrade process is complete. You can now decommission the source cluster.
[ "{ \"remote-store\": { \"cache\": \"myCache\", \"shared\": true, \"raw-values\": true, \"security\": { \"authentication\": { \"digest\": { \"username\": \"username\", \"password\": \"changeme\", \"realm\": \"default\" } } }, \"remote-server\": [ { \"host\": \"127.0.0.1\", \"port\": 12222 } ] } }", "[//containers/default]> migrate cluster connect -c myCache --file=remote-store.json", "POST /rest/v2/caches/myCache/rolling-upgrade/source-connection", "migrate cluster synchronize -c myCache", "POST /rest/v2/caches/myCache?action=sync-data", "migrate cluster disconnect -c myCache", "DELETE /rest/v2/caches/myCache/rolling-upgrade/source-connection" ]
https://docs.redhat.com/en/documentation/red_hat_data_grid/8.4/html/upgrading_data_grid/rolling-upgrades
probe::ipmib.OutRequests
probe::ipmib.OutRequests Name probe::ipmib.OutRequests - Count a request to send a packet Synopsis ipmib.OutRequests Values skb pointer to the struct sk_buff being acted on op value to be added to the counter (default value of 1) Description The packet pointed to by skb is filtered by the function ipmib_filter_key . If the packet passes the filter, it is counted in the global OutRequests (equivalent to SNMP's MIB IPSTATS_MIB_OUTREQUESTS).
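For context, a minimal SystemTap script that uses this probe might look like the following sketch; the five-second reporting interval and the variable name are arbitrary choices for illustration and are not part of the tapset.
#!/usr/bin/stap
# Count outgoing IP send requests and print a running total every 5 seconds.
global outrequests
probe ipmib.OutRequests {
  # op defaults to 1, so this effectively counts packets queued for output
  outrequests += op
}
probe timer.s(5) {
  printf("IpOutRequests so far: %d\n", outrequests)
}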
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/systemtap_tapset_reference/api-ipmib-outrequests
Chapter 3. Enhancements
Chapter 3. Enhancements This section describes the major enhancements introduced in Red Hat OpenShift Data Foundation 4.9. Deletion of data is allowed when the storage cluster is full Previously, when the storage cluster was full, the Ceph Manager hung on checking pool permissions while reading the configuration file. The Ceph Metadata Server (MDS) did not allow write operations to occur when the Ceph OSD was full, resulting in an ENOSPACE error. When the storage cluster hit the full ratio, users could not delete data to free space using the Ceph Manager volume plugin. With this release, the new FULL capability is introduced. With the FULL capability, the Ceph Manager bypasses the Ceph OSD full check. The client_check_pool_permission option is disabled by default whereas, in previous releases, it was enabled. With the Ceph Manager having FULL capabilities, the MDS no longer blocks Ceph Manager calls. This allows the Ceph Manager to free up space by deleting subvolumes and snapshots when a storage cluster is full. Standalone Multicloud Object Gateway component deployment With this release, you can deploy OpenShift Data Foundation with only the Multicloud Object Gateway component in a standalone mode. In this mode, there is no CephCluster accompanying the StorageCluster, and hence the Multicloud Object Gateway does not use a Ceph-based storage volume. Movement of Core and DB pods is enabled when a node fails OpenShift Container Platform does not mark a node as disconnected unless it is deleted. As a result, Core and DB pods, which are statefulsets, are not automatically evicted from such failed nodes. With this update, when a node fails, the DB and Core pods are evicted and moved to a new node. Volume snapshot restore to a different pool With this update, you can restore a volume snapshot of a persistent volume claim (PVC) into a different pool than the parent volume. Previously, a volume snapshot could only be restored into the same pool. Multiple file systems are not created with existing pools With this update, after you create the filesystem.yaml , multiple file systems with the existing pool are not created even if you delete or recreate the filesystem.yaml . This avoids data loss. Auto-detection of Vault's Secret Key/Value store version With this enhancement, Vault's Secret Key/Value store version is auto-detected. Configuring the VAULT_BACKEND parameter for HashiCorp Vault is now allowed With this update, you can configure the VAULT_BACKEND parameter to select the type of backend used by HashiCorp Vault. The autodetection of the backend used by HashiCorp Vault does not always work correctly. In the case of a non-common configuration, the automatically detected configuration parameter might be set incorrectly. By allowing you to configure the VAULT_BACKEND parameter, non-common configurations can be forced to use a particular type of backend. Human-readable format for output of time in the Multicloud Object Gateway CLI With this release, the output of time in the Multicloud Object Gateway (MCG) CLI shows a human-readable format (days-hours-minutes-seconds) instead of minutes and seconds.
https://docs.redhat.com/en/documentation/red_hat_openshift_data_foundation/4.9/html/4.9_release_notes/enhancements
Chapter 4. Adding user preferences
Chapter 4. Adding user preferences You can change the default preferences for your profile to meet your requirements. You can set your default project, topology view (graph or list), editing medium (form or YAML), language preferences, and resource type. The changes made to the user preferences are automatically saved. 4.1. Setting user preferences You can set the default user preferences for your cluster. Procedure Log in to the OpenShift Container Platform web console using your login credentials. Use the masthead to access the user preferences under the user profile. In the General section: In the Theme field, you can set the theme that you want to work in. The console defaults to the selected theme each time you log in. In the Perspective field, you can set the default perspective you want to be logged in to. You can select the Administrator or the Developer perspective as required. If a perspective is not selected, you are logged into the perspective you last visited. In the Project field, select a project you want to work in. The console defaults to that project every time you log in. In the Topology field, you can set the topology view to default to the graph or list view. If not selected, the console defaults to the last view you used. In the Create/Edit resource method field, you can set a preference for creating or editing a resource. If both the form and YAML options are available, the console defaults to your selection. In the Language section, select Default browser language to use the default browser language settings. Otherwise, select the language that you want to use for the console. In the Notifications section, you can toggle the display of notifications created by users for specific projects on the Overview page or in the notification drawer. In the Applications section: You can view the default Resource type . For example, if the OpenShift Serverless Operator is installed, the default resource type is Serverless Deployment . Otherwise, the default resource type is Deployment . You can select another resource type to be the default resource type from the Resource Type field.
https://docs.redhat.com/en/documentation/openshift_container_platform/4.12/html/web_console/adding-user-preferences
Chapter 6. Checking audit logs
Chapter 6. Checking audit logs You can use audit logs to identify pod security violations. 6.1. Identifying pod security violations through audit logs You can identify pod security admission violations on a workload by viewing the server audit logs. The following procedure shows you how to access the audit logs and parse them to find pod security admission violations in a workload. Prerequisites You have installed jq . You have access to the cluster as a user with the cluster-admin role. Procedure To retrieve the node name and store it in a shell variable, run the following command: $ node_name=$(oc get node -ojsonpath='{.items[0].metadata.name}') To view the audit logs, run the following command: $ oc adm node-logs <node_name> --path=kube-apiserver/ 1 1 Replace <node_name> with the name of the node retrieved in the first step. Example output rhel-94.lab.local audit-2024-10-18T18-25-41.663.log rhel-94.lab.local audit-2024-10-19T11-21-29.225.log rhel-94.lab.local audit-2024-10-20T04-16-09.622.log rhel-94.lab.local audit-2024-10-20T21-11-41.163.log rhel-94.lab.local audit-2024-10-21T14-06-10.402.log rhel-94.lab.local audit-2024-10-22T06-35-10.392.log rhel-94.lab.local audit-2024-10-22T23-26-27.667.log rhel-94.lab.local audit-2024-10-23T16-52-15.456.log rhel-94.lab.local audit-2024-10-24T07-31-55.238.log To parse the affected audit logs, enter the following command: $ oc adm node-logs <node_name> --path=kube-apiserver/audit.log \ | jq -r 'select((.annotations["pod-security.kubernetes.io/audit-violations"] != null) and (.objectRef.resource=="pods")) | .objectRef.namespace + " " + .objectRef.name + " " + .objectRef.resource' \ | sort | uniq -c 1 1 Replace <node_name> with the name of the node retrieved in the first step.
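As an optional follow-up that is not part of the original procedure, you can print the complete audit events behind one of the reported entries to see exactly which annotations and request fields triggered the violation; the namespace my-namespace below is a placeholder for a namespace taken from the parsed output.
$ oc adm node-logs <node_name> --path=kube-apiserver/audit.log \
  | jq 'select((.annotations["pod-security.kubernetes.io/audit-violations"] != null) and (.objectRef.resource=="pods") and (.objectRef.namespace=="my-namespace"))'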
[ "<node_name>=USD(oc get node -ojsonpath='{.items[0].metadata.name}')", "oc adm node-logs <node_name> --path=kube-apiserver/ 1", "rhel-94.lab.local audit-2024-10-18T18-25-41.663.log rhel-94.lab.local audit-2024-10-19T11-21-29.225.log rhel-94.lab.local audit-2024-10-20T04-16-09.622.log rhel-94.lab.local audit-2024-10-20T21-11-41.163.log rhel-94.lab.local audit-2024-10-21T14-06-10.402.log rhel-94.lab.local audit-2024-10-22T06-35-10.392.log rhel-94.lab.local audit-2024-10-22T23-26-27.667.log rhel-94.lab.local audit-2024-10-23T16-52-15.456.log rhel-94.lab.local audit-2024-10-24T07-31-55.238.log", "oc adm node-logs <node_name> --path=kube-apiserver/audit.log | jq -r 'select((.annotations[\"pod-security.kubernetes.io/audit-violations\"] != null) and (.objectRef.resource==\"pods\")) | .objectRef.namespace + \" \" + .objectRef.name + \" \" + .objectRef.resource' | sort | uniq -c 1" ]
https://docs.redhat.com/en/documentation/red_hat_build_of_microshift/4.18/html/troubleshooting/microshift-audit-logs
13.3. Methods
13.3. Methods 13.3.1. Creating a New Storage Connection Creating a new storage connection requires a POST request. It is possible to create a new storage connection without adding a storage domain. The host id or name is optional; providing it will attempt a connection to the storage via the host. Example 13.2. Creating a New Storage Connection 13.3.2. Deleting a Storage Connection Deleting a storage connection requires a DELETE request. A storage connection can only be deleted if neither storage domain nor LUN disks reference it. The host name or id is optional; providing it unmounts the connection from that host. Example 13.3. Deleting Storage Connection 13.3.3. Updating a Storage Connection Updating an existing storage connection requires a PUT request. The storage domain must be in either maintenance mode or unattached to successfully update the connection. Providing the host name or id is optional; if provided, the host attempts a connection to the updated storage details. Example 13.4. Updating a Storage Connection 13.3.4. Updating an iSCSI Storage Connection Updating an existing iSCSI storage connection requires a PUT request. An iSCSI storage domain must be in maintenance mode or unattached to successfully update the connection. Example 13.5. Updating a Storage Connection 13.3.5. Adding New Storage Domain with Existing Storage Connection Adding a new storage domain with existing storage connection requires a POST request. This is only applicable with file-based storage domains: NFS , POSIX , and local . Example 13.6. Adding a New Storage Domain with Existing Storage Connection 13.3.6. Attaching an Additional Storage Connection to iSCSI Storage Attaching an additional storage connection to an iSCSI storage domain requires a POST request. Example 13.7. Attaching an Additional Storage Connection to iSCSI Storage 13.3.7. Detaching a Storage Connection from iSCSI Storage Detaching a storage connection from an iSCSI storage domain requires a DELETE request. Example 13.8. Detaching a Storage Connection from iSCSI Storage 13.3.8. Defining Credentials to an iSCSI Target When an iSCSI storage domain is added using the Administration Portal, only a single user name and password can be specified for that domain. However, some setups require that each host in the cluster use a separate user name and password. Specific credentials can be applied to each iSCSI target per host by using the storageconnectionextensions element. Example 13.9. Defining credentials to an iSCSI target
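For reference, a request such as Example 13.2 can also be submitted from the command line with curl. This is only a sketch: the Manager host name, the admin@internal credentials, and the --cacert handling shown here are placeholders for your environment and are not taken from this guide.
$ curl -X POST \
  -H "Accept: application/xml" -H "Content-Type: application/xml" \
  -u "admin@internal:password" \
  --cacert ca.crt \
  -d '<storage_connection> <type>nfs</type> <address>domain.example.com</address> <path>/export/storagedata/username/data</path> <host><name>Host_Name</name></host> </storage_connection>' \
  https://manager.example.com/ovirt-engine/api/storageconnections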
[ "POST /ovirt-engine/api/storageconnections HTTP/1.1 Accept: application/xml Content-type: application/xml <storage_connection> <type>nfs</type> <address>domain.example.com</address> <path>/export/storagedata/username/data</path> <host> <name>Host_Name</name> </host> </storage_connection>", "DELETE /ovirt-engine/api/storageconnections/ Storage_Connection_ID HTTP/1.1 Accept: application/xml Content-type: application/xml <host> <name>Host_Name</name> </host>", "PUT /ovirt-engine/api/storageconnections/ Storage_Connection_ID HTTP/1.1 Accept: application/xml Content-type: application/xml <storage_connection> <address>updated.example.domain.com</address> <host> <name>Host_name</name> </host> </storage_connection>", "PUT /ovirt-engine/api/storageconnections/ Storage_Connection_ID HTTP/1.1 Accept: application/xml Content-type: application/xml <storage_connection> <port> 3456 </port> </storage_connection>", "POST /ovirt-engine/api/storagedomains HTTP/1.1 Accept: application/xml Content-type: application/xml <storage_domain> <name>New_Domain</name> <type>data</type> <storage id=\" Storage_Connection_ID \"/> <host> <name>Host_Name</name> </host> </storage_domain>", "POST /ovirt-engine/api/storagedomains/ iSCSI_Domain_ID /storageconnections HTTP/1.1 Accept: application/xml Content-type: application/xml <storage_connection id=\" Storage_Connection_ID \"> </storage_connection>", "DELETE /ovirt-engine/api/storagedomains/ iSCSI_Domain_ID /storageconnections/ Storage_Connection_ID HTTP/1.1 Accept: application/xml Content-type: application/xml", "POST /ovirt-engine/api/hosts/2ab5e1da-b726-4274-bbf7-0a42b16a0fc3/storageconnectionextensions HTTP/1.1 Accept: application/xml Content-type: application/xml <storageconnectionextension> <target>iqn.2010.05.com.example:iscsi.targetX</target> <username>jimmy</username> <password>p@55w0Rd!</password> </storageconnectionextension>" ]
https://docs.redhat.com/en/documentation/red_hat_virtualization/4.3/html/version_3_rest_api_guide/sect-methods4
Chapter 228. MongoDB Component (deprecated)
Chapter 228. MongoDB Component (deprecated) Available as of Camel version 2.10 According to Wikipedia: "NoSQL is a movement promoting a loosely defined class of non-relational data stores that break with a long history of relational databases and ACID guarantees." NoSQL solutions have grown in popularity in the last few years, and major extremely-used sites and services such as Facebook, LinkedIn, Twitter, etc. are known to use them extensively to achieve scalability and agility. Basically, NoSQL solutions differ from traditional RDBMS (Relational Database Management Systems) in that they don't use SQL as their query language and generally don't offer ACID-like transactional behaviour nor relational data. Instead, they are designed around the concept of flexible data structures and schemas (meaning that the traditional concept of a database table with a fixed schema is dropped), extreme scalability on commodity hardware and blazing-fast processing. MongoDB is a very popular NoSQL solution and the camel-mongodb component integrates Camel with MongoDB allowing you to interact with MongoDB collections both as a producer (performing operations on the collection) and as a consumer (consuming documents from a MongoDB collection). MongoDB revolves around the concepts of documents (not as is office documents, but rather hierarchical data defined in JSON/BSON) and collections. This component page will assume you are familiar with them. Otherwise, visit http://www.mongodb.org/ . Maven users will need to add the following dependency to their pom.xml for this component: <dependency> <groupId>org.apache.camel</groupId> <artifactId>camel-mongodb</artifactId> <version>x.y.z</version> <!-- use the same version as your Camel core version --> </dependency> 228.1. URI format mongodb:connectionBean?database=databaseName&collection=collectionName&operation=operationName[&moreOptions...] 228.2. MongoDB options The MongoDB component has no options. The MongoDB endpoint is configured using URI syntax: with the following path and query parameters: 228.2.1. Path Parameters (1 parameters): Name Description Default Type connectionBean Required Name of com.mongodb.Mongo to use. String 228.2.2. Query Parameters (23 parameters): Name Description Default Type collection (common) Sets the name of the MongoDB collection to bind to this endpoint String collectionIndex (common) Sets the collection index (JSON FORMAT : field1 : order1, field2 : order2) String createCollection (common) Create collection during initialisation if it doesn't exist. Default is true. true boolean database (common) Sets the name of the MongoDB database to target String operation (common) Sets the operation this endpoint will execute against MongoDB. For possible values, see MongoDbOperation. MongoDbOperation outputType (common) Convert the output of the producer to the selected type : DBObjectList DBObject or DBCursor. DBObjectList or DBCursor applies to findAll and aggregate. DBObject applies to all other operations. MongoDbOutputType writeConcern (common) Set the WriteConcern for write operations on MongoDB using the standard ones. Resolved from the fields of the WriteConcern class by calling the WriteConcern#valueOf(String) method. ACKNOWLEDGED WriteConcern bridgeErrorHandler (consumer) Allows for bridging the consumer to the Camel routing Error Handler, which mean any exceptions occurred while the consumer is trying to pickup incoming messages, or the likes, will now be processed as a message and handled by the routing Error Handler. 
By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, that will be logged at WARN or ERROR level and ignored. false boolean exceptionHandler (consumer) To let the consumer use a custom ExceptionHandler. Notice if the option bridgeErrorHandler is enabled then this option is not in use. By default the consumer will deal with exceptions, that will be logged at WARN or ERROR level and ignored. ExceptionHandler exchangePattern (consumer) Sets the exchange pattern when the consumer creates an exchange. ExchangePattern cursorRegenerationDelay (advanced) MongoDB tailable cursors will block until new data arrives. If no new data is inserted, after some time the cursor will be automatically freed and closed by the MongoDB server. The client is expected to regenerate the cursor if needed. This value specifies the time to wait before attempting to fetch a new cursor, and if the attempt fails, how long before the attempt is made. Default value is 1000ms. 1000 long dynamicity (advanced) Sets whether this endpoint will attempt to dynamically resolve the target database and collection from the incoming Exchange properties. Can be used to override at runtime the database and collection specified on the otherwise static endpoint URI. It is disabled by default to boost performance. Enabling it will take a minimal performance hit. false boolean readPreference (advanced) Sets a MongoDB ReadPreference on the Mongo connection. Read preferences set directly on the connection will be overridden by this setting. The ReadPreference#valueOf(String) utility method is used to resolve the passed readPreference value. Some examples for the possible values are nearest, primary or secondary etc. ReadPreference synchronous (advanced) Sets whether synchronous processing should be strictly used, or Camel is allowed to use asynchronous processing (if supported). false boolean writeResultAsHeader (advanced) In write operations, it determines whether instead of returning WriteResult as the body of the OUT message, we transfer the IN message to the OUT and attach the WriteResult as a header. false boolean persistentId (tail) One tail tracking collection can host many trackers for several tailable consumers. To keep them separate, each tracker should have its own unique persistentId. String persistentTailTracking (tail) Enable persistent tail tracking, which is a mechanism to keep track of the last consumed message across system restarts. The time the system is up, the endpoint will recover the cursor from the point where it last stopped slurping records. false boolean persistRecords (tail) Sets the number of tailed records after which the tail tracking data is persisted to MongoDB. -1 int tailTrackCollection (tail) Collection where tail tracking information will be persisted. If not specified, MongoDbTailTrackingConfig#DEFAULT_COLLECTION will be used by default. String tailTrackDb (tail) Indicates what database the tail tracking mechanism will persist to. If not specified, the current database will be picked by default. Dynamicity will not be taken into account even if enabled, i.e. the tail tracking database will not vary past endpoint initialisation. String tailTrackField (tail) Field where the last tracked value will be placed. If not specified, MongoDbTailTrackingConfig#DEFAULT_FIELD will be used by default. 
String tailTrackIncreasingField (tail) Correlation field in the incoming record which is of increasing nature and will be used to position the tailing cursor every time it is generated. The cursor will be (re)created with a query of type: tailTrackIncreasingField lastValue (possibly recovered from persistent tail tracking). Can be of type Integer, Date, String, etc. NOTE: No support for dot notation at the current time, so the field should be at the top level of the document. String tailTrackingStrategy (tail) Sets the strategy used to extract the increasing field value and to create the query to position the tail cursor. LITERAL MongoDBTailTracking Enum 228.3. Spring Boot Auto-Configuration The component supports 2 options, which are listed below. Name Description Default Type camel.component.mongodb.enabled Enable mongodb component true Boolean camel.component.mongodb.resolve-property-placeholders Whether the component should resolve property placeholders on itself when starting. Only properties which are of String type can use property placeholders. true Boolean 228.4. Configuration of database in Spring XML The following Spring XML creates a bean defining the connection to a MongoDB instance. <?xml version="1.0" encoding="UTF-8"?> <beans xmlns="http://www.springframework.org/schema/beans" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://www.springframework.org/schema/beans http://www.springframework.org/schema/beans/spring-beans.xsd"> <bean id="mongoBean" class="com.mongodb.Mongo"> <constructor-arg name="host" value="USD{mongodb.host}" /> <constructor-arg name="port" value="USD{mongodb.port}" /> </bean> </beans> 228.5. Sample route The following route defined in Spring XML executes the operation dbStats on a collection. Get DB stats for specified collection <route> <from uri="direct:start" /> <!-- using bean 'mongoBean' defined above --> <to uri="mongodb:mongoBean?database=USD{mongodb.database}&amp;collection=USD{mongodb.collection}&amp;operation=getDbStats" /> <to uri="direct:result" /> </route> 228.6. MongoDB operations - producer endpoints 228.6.1. Query operations 228.6.1.1. findById This operation retrieves only one element from the collection whose _id field matches the content of the IN message body. The incoming object can be anything that has an equivalent to a BSON type. See http://bsonspec.org/ /specification[ http://bsonspec.org/ /specification] and http://www.mongodb.org/display/DOCS/Java+Types . from("direct:findById") .to("mongodb:myDb?database=flights&collection=tickets&operation=findById") .to("mock:resultFindById"); Tip Supports optional parameters . This operation supports specifying a fields filter. See Specifying optional parameters . 228.6.1.2. findOneByQuery Use this operation to retrieve just one element from the collection that matches a MongoDB query. The query object is extracted from the IN message body , i.e. it should be of type DBObject or convertible to DBObject . It can be a JSON String or a Hashmap. See #Type conversions for more info. Example with no query (returns any object of the collection): from("direct:findOneByQuery") .to("mongodb:myDb?database=flights&collection=tickets&operation=findOneByQuery") .to("mock:resultFindOneByQuery"); Example with a query (returns one matching result): from("direct:findOneByQuery") .setBody().constant("{ \"name\": \"Raul Kripalani\" }") .to("mongodb:myDb?database=flights&collection=tickets&operation=findOneByQuery") .to("mock:resultFindOneByQuery"); Tip Supports optional parameters . 
This operation supports specifying a fields filter and/or a sort clause. See Specifying optional parameters . 228.6.1.3. findAll The findAll operation returns all documents matching a query, or none at all, in which case all documents contained in the collection are returned. The query object is extracted from the IN message body , i.e. it should be of type DBObject or convertible to DBObject . It can be a JSON String or a Hashmap. See #Type conversions for more info. Example with no query (returns all object in the collection): from("direct:findAll") .to("mongodb:myDb?database=flights&collection=tickets&operation=findAll") .to("mock:resultFindAll"); Example with a query (returns all matching results): from("direct:findAll") .setBody().constant("{ \"name\": \"Raul Kripalani\" }") .to("mongodb:myDb?database=flights&collection=tickets&operation=findAll") .to("mock:resultFindAll"); Paging and efficient retrieval is supported via the following headers: Header key Quick constant Description (extracted from MongoDB API doc) Expected type CamelMongoDbNumToSkip MongoDbConstants.NUM_TO_SKIP Discards a given number of elements at the beginning of the cursor. int/Integer CamelMongoDbLimit MongoDbConstants.LIMIT Limits the number of elements returned. int/Integer CamelMongoDbBatchSize MongoDbConstants.BATCH_SIZE Limits the number of elements returned in one batch. A cursor typically fetches a batch of result objects and store them locally. If batchSize is positive, it represents the size of each batch of objects retrieved. It can be adjusted to optimize performance and limit data transfer. If batchSize is negative, it will limit of number objects returned, that fit within the max batch size limit (usually 4MB), and cursor will be closed. For example if batchSize is -10, then the server will return a maximum of 10 documents and as many as can fit in 4MB, then close the cursor. Note that this feature is different from limit() in that documents must fit within a maximum size, and it removes the need to send a request to close the cursor server-side. The batch size can be changed even after a cursor is iterated, in which case the setting will apply on the batch retrieval. int/Integer You can also "stream" the documents returned from the server into your route by including outputType=DBCursor (Camel 2.16+) as an endpoint option which may prove simpler than setting the above headers. This hands your Exchange the DBCursor from the Mongo driver, just as if you were executing the findAll() within the Mongo shell, allowing your route to iterate over the results. By default and without this option, this component will load the documents from the driver's cursor into a List and return this to your route - which may result in a large number of in-memory objects. Remember, with a DBCursor do not ask for the number of documents matched - see the MongoDB documentation site for details. Example with option outputType=DBCursor and batch size : from("direct:findAll") .setHeader(MongoDbConstants.BATCH_SIZE).constant(10) .setBody().constant("{ \"name\": \"Raul Kripalani\" }") .to("mongodb:myDb?database=flights&collection=tickets&operation=findAll&outputType=DBCursor") .to("mock:resultFindAll"); The findAll operation will also return the following OUT headers to enable you to iterate through result pages if you are using paging: Header key Quick constant Description (extracted from MongoDB API doc) Data type CamelMongoDbResultTotalSize MongoDbConstants.RESULT_TOTAL_SIZE Number of objects matching the query. 
This does not take limit/skip into consideration. int/Integer CamelMongoDbResultPageSize MongoDbConstants.RESULT_PAGE_SIZE Number of objects matching the query. This does not take limit/skip into consideration. int/Integer Tip Supports optional parameters . This operation supports specifying a fields filter and/or a sort clause. See Specifying optional parameters . 228.6.1.4. count Returns the total number of objects in a collection, returning a Long as the OUT message body. The following example will count the number of records in the "dynamicCollectionName" collection. Notice how dynamicity is enabled, and as a result, the operation will not run against the "notableScientists" collection, but against the "dynamicCollectionName" collection. // from("direct:count").to("mongodb:myDb?database=tickets&collection=flights&operation=count&dynamicity=true"); Long result = template.requestBodyAndHeader("direct:count", "irrelevantBody", MongoDbConstants.COLLECTION, "dynamicCollectionName"); assertTrue("Result is not of type Long", result instanceof Long); From Camel 2.14 onwards you can provide a com.mongodb.DBObject object in the message body as a query, and operation will return the amount of documents matching this criteria. DBObject query = ... Long count = template.requestBodyAndHeader("direct:count", query, MongoDbConstants.COLLECTION, "dynamicCollectionName"); 228.6.1.5. Specifying a fields filter (projection) Query operations will, by default, return the matching objects in their entirety (with all their fields). If your documents are large and you only require retrieving a subset of their fields, you can specify a field filter in all query operations, simply by setting the relevant DBObject (or type convertible to DBObject , such as a JSON String, Map, etc.) on the CamelMongoDbFieldsFilter header, constant shortcut: MongoDbConstants.FIELDS_FILTER . Here is an example that uses MongoDB's BasicDBObjectBuilder to simplify the creation of DBObjects. It retrieves all fields except _id and boringField : // route: from("direct:findAll").to("mongodb:myDb?database=flights&collection=tickets&operation=findAll") DBObject fieldFilter = BasicDBObjectBuilder.start().add("_id", 0).add("boringField", 0).get(); Object result = template.requestBodyAndHeader("direct:findAll", (Object) null, MongoDbConstants.FIELDS_FILTER, fieldFilter); 228.6.1.6. Specifying a sort clause There is a often a requirement to fetch the min/max record from a collection based on sorting by a particular field. In Mongo the operation is performed using syntax similar to: In a Camel route the SORT_BY header can be used with the findOneByQuery operation to achieve the same result. If the FIELDS_FILTER header is also specified the operation will return a single field/value pair that can be passed directly to another component (for example, a parameterized MyBatis SELECT query). This example demonstrates fetching the temporally newest document from a collection and reducing the result to a single field, based on the documentTimestamp field: .from("direct:someTriggeringEvent") .setHeader(MongoDbConstants.SORT_BY).constant("{\"documentTimestamp\": -1}") .setHeader(MongoDbConstants.FIELDS_FILTER).constant("{\"documentTimestamp\": 1}") .setBody().constant("{}") .to("mongodb:myDb?database=local&collection=myDemoCollection&operation=findOneByQuery") .to("direct:aMyBatisParameterizedSelect") ; 228.6.2. Create/update operations 228.6.2.1. insert Inserts an new object into the MongoDB collection, taken from the IN message body. 
Type conversion is attempted to turn it into DBObject or a List . Two modes are supported: single insert and multiple insert. For multiple insert, the endpoint will expect a List, Array or Collections of objects of any type, as long as they are - or can be converted to - DBObject . All objects are inserted at once. The endpoint will intelligently decide which backend operation to invoke (single or multiple insert) depending on the input. Example: from("direct:insert") .to("mongodb:myDb?database=flights&collection=tickets&operation=insert"); The operation will return a WriteResult, and depending on the WriteConcern or the value of the invokeGetLastError option, getLastError() would have been called already or not. If you want to access the ultimate result of the write operation, you need to retrieve the CommandResult by calling getLastError() or getCachedLastError() on the WriteResult . Then you can verify the result by calling CommandResult.ok() , CommandResult.getErrorMessage() and/or CommandResult.getException() . Note that the new object's _id must be unique in the collection. If you don't specify the value, MongoDB will automatically generate one for you. But if you do specify it and it is not unique, the insert operation will fail (and for Camel to notice, you will need to enable invokeGetLastError or set a WriteConcern that waits for the write result). This is not a limitation of the component, but it is how things work in MongoDB for higher throughput. If you are using a custom _id , you are expected to ensure at the application level that is unique (and this is a good practice too). Since Camel 2.15 : OID(s) of the inserted record(s) is stored in the message header under CamelMongoOid key ( MongoDbConstants.OID constant). The value stored is org.bson.types.ObjectId for single insert or java.util.List<org.bson.types.ObjectId> if multiple records have been inserted. 228.6.2.2. save The save operation is equivalent to an upsert (UPdate, inSERT) operation, where the record will be updated, and if it doesn't exist, it will be inserted, all in one atomic operation. MongoDB will perform the matching based on the _id field. Beware that in case of an update, the object is replaced entirely and the usage of MongoDB's USDmodifiers is not permitted. Therefore, if you want to manipulate the object if it already exists, you have two options: perform a query to retrieve the entire object first along with all its fields (may not be efficient), alter it inside Camel and then save it. use the update operation with USDmodifiers , which will execute the update at the server-side instead. You can enable the upsert flag, in which case if an insert is required, MongoDB will apply the USDmodifiers to the filter query object and insert the result. For example: from("direct:insert") .to("mongodb:myDb?database=flights&collection=tickets&operation=save"); 228.6.2.3. update Update one or multiple records on the collection. Requires a List<DBObject> as the IN message body containing exactly 2 elements: Element 1 (index 0) ⇒ filter query ⇒ determines what objects will be affected, same as a typical query object Element 2 (index 1) ⇒ update rules ⇒ how matched objects will be updated. All modifier operations from MongoDB are supported. Note Multiupdates . By default, MongoDB will only update 1 object even if multiple objects match the filter query. To instruct MongoDB to update all matching records, set the CamelMongoDbMultiUpdate IN message header to true . 
A header with key CamelMongoDbRecordsAffected will be returned ( MongoDbConstants.RECORDS_AFFECTED constant) with the number of records updated (copied from WriteResult.getN() ). Supports the following IN message headers: Header key Quick constant Description (extracted from MongoDB API doc) Expected type CamelMongoDbMultiUpdate MongoDbConstants.MULTIUPDATE If the update should be applied to all objects matching. See http://www.mongodb.org/display/DOCS/Atomic+Operations boolean/Boolean CamelMongoDbUpsert MongoDbConstants.UPSERT If the database should create the element if it does not exist boolean/Boolean For example, the following will update all records whose filterField field equals true by setting the value of the "scientist" field to "Darwin": // route: from("direct:update").to("mongodb:myDb?database=science&collection=notableScientists&operation=update"); DBObject filterField = new BasicDBObject("filterField", true); DBObject updateObj = new BasicDBObject("USDset", new BasicDBObject("scientist", "Darwin")); Object result = template.requestBodyAndHeader("direct:update", new Object[] {filterField, updateObj}, MongoDbConstants.MULTIUPDATE, true); 228.6.3. Delete operations 228.6.3.1. remove Remove matching records from the collection. The IN message body will act as the removal filter query, and is expected to be of type DBObject or a type convertible to it. The following example will remove all objects whose field 'conditionField' equals true, in the science database, notableScientists collection: // route: from("direct:remove").to("mongodb:myDb?database=science&collection=notableScientists&operation=remove"); DBObject conditionField = new BasicDBObject("conditionField", true); Object result = template.requestBody("direct:remove", conditionField); A header with key CamelMongoDbRecordsAffected is returned ( MongoDbConstants.RECORDS_AFFECTED constant) with type int , containing the number of records deleted (copied from WriteResult.getN() ). 228.6.4. Bulk Write Operations 228.6.4.1. bulkWrite Available as of Camel 2.21 Performs write operations in bulk with controls for order of execution. Requires a List<WriteModel<DBObject>> as the IN message body containing commands for insert, update, and delete operations. The following example will insert a new scientist "Pierre Curie", update record with id "5" by setting the value of the "scientist" field to "Marie Curie" and delete record with id "3" : // route: from("direct:bulkWrite").to("mongodb:myDb?database=science&collection=notableScientists&operation=bulkWrite"); List<WriteModel<DBObject>> bulkOperations = Arrays.asList( new InsertOneModel<>(new BasicDBObject("scientist", "Pierre Curie")), new UpdateOneModel<>(new BasicDBObject("_id", "5"), new BasicDBObject("USDset", new BasicDBObject("scientist", "Marie Curie"))), new DeleteOneModel<>(new BasicDBObject("_id", "3"))); BulkWriteResult result = template.requestBody("direct:bulkWrite", bulkOperations, BulkWriteResult.class); By default, operations are executed in order and interrupted on the first write error without processing any remaining write operations in the list. To instruct MongoDB to continue to process remaining write operations in the list, set the CamelMongoDbBulkOrdered IN message header to false . Unordered operations are executed in parallel and this behavior is not guaranteed. Header key Quick constant Description (extracted from MongoDB API doc) Expected type CamelMongoDbBulkOrdered MongoDbConstants.BULK_ORDERED Perform an ordered or unordered operation execution. 
Defaults to true. boolean/Boolean 228.6.5. Other operations 228.6.5.1. aggregate Available as of Camel 2.14 Perform a aggregation with the given pipeline contained in the body. Aggregations could be long and heavy operations. Use with care. // route: from("direct:aggregate").to("mongodb:myDb?database=science&collection=notableScientists&operation=aggregate"); from("direct:aggregate") .setBody().constant("[{ USDmatch : {USDor : [{\"scientist\" : \"Darwin\"},{\"scientist\" : \"Einstein\"}]}},{ USDgroup: { _id: \"USDscientist\", count: { USDsum: 1 }} } ]") .to("mongodb:myDb?database=science&collection=notableScientists&operation=aggregate") .to("mock:resultAggregate"); Supports the following IN message headers: Header key Quick constant Description (extracted from MongoDB API doc) Expected type CamelMongoDbBatchSize MongoDbConstants.BATCH_SIZE Sets the number of documents to return per batch. int/Integer CamelMongoDbAllowDiskUse MongoDbConstants.ALLOW_DISK_USE Enable aggregation pipeline stages to write data to temporary files. boolean/Boolean Efficient retrieval is supported via outputType=DBCursor. You can also "stream" the documents returned from the server into your route by including outputType=DBCursor (Camel 2.21+) as an endpoint option which may prove simpler than setting the above headers. This hands your Exchange the DBCursor from the Mongo driver, just as if you were executing the aggregate() within the Mongo shell, allowing your route to iterate over the results. By default and without this option, this component will load the documents from the driver's cursor into a List and return this to your route - which may result in a large number of in-memory objects. Remember, with a DBCursor do not ask for the number of documents matched - see the MongoDB documentation site for details. Example with option outputType=DBCursor and batch size: // route: from("direct:aggregate").to("mongodb:myDb?database=science&collection=notableScientists&operation=aggregate"); from("direct:aggregate") .setHeader(MongoDbConstants.BATCH_SIZE).constant(10) .setBody().constant("[{ USDmatch : {USDor : [{\"scientist\" : \"Darwin\"},{\"scientist\" : \"Einstein\"}]}},{ USDgroup: { _id: \"USDscientist\", count: { USDsum: 1 }} } ]") .to("mongodb:myDb?database=science&collection=notableScientists&operation=aggregate&outputType=DBCursor") .to("mock:resultAggregate"); 228.6.5.2. getDbStats Equivalent of running the db.stats() command in the MongoDB shell, which displays useful statistic figures about the database. For example: > db.stats(); { "db" : "test", "collections" : 7, "objects" : 719, "avgObjSize" : 59.73296244784423, "dataSize" : 42948, "storageSize" : 1000058880, "numExtents" : 9, "indexes" : 4, "indexSize" : 32704, "fileSize" : 1275068416, "nsSizeMB" : 16, "ok" : 1 } Usage example: // from("direct:getDbStats").to("mongodb:myDb?database=flights&collection=tickets&operation=getDbStats"); Object result = template.requestBody("direct:getDbStats", "irrelevantBody"); assertTrue("Result is not of type DBObject", result instanceof DBObject); The operation will return a data structure similar to the one displayed in the shell, in the form of a DBObject in the OUT message body. 228.6.5.3. getColStats Equivalent of running the db.collection.stats() command in the MongoDB shell, which displays useful statistic figures about the collection. 
For example: > db.camelTest.stats(); { "ns" : "test.camelTest", "count" : 100, "size" : 5792, "avgObjSize" : 57.92, "storageSize" : 20480, "numExtents" : 2, "nindexes" : 1, "lastExtentSize" : 16384, "paddingFactor" : 1, "flags" : 1, "totalIndexSize" : 8176, "indexSizes" : { "_id_" : 8176 }, "ok" : 1 } Usage example: // from("direct:getColStats").to("mongodb:myDb?database=flights&collection=tickets&operation=getColStats"); Object result = template.requestBody("direct:getColStats", "irrelevantBody"); assertTrue("Result is not of type DBObject", result instanceof DBObject); The operation will return a data structure similar to the one displayed in the shell, in the form of a DBObject in the OUT message body. 228.6.5.4. command Available as of Camel 2.15 Run the body as a command on the database. Useful for admin operations such as getting host information, replication or sharding status. The collection parameter is not used for this operation. // route: from("direct:command").to("mongodb:myDb?database=science&operation=command"); DBObject commandBody = new BasicDBObject("hostInfo", "1"); Object result = template.requestBody("direct:command", commandBody); 228.6.6. Dynamic operations An Exchange can override the endpoint's fixed operation by setting the CamelMongoDbOperation header, defined by the MongoDbConstants.OPERATION_HEADER constant. The values supported are determined by the MongoDbOperation enumeration and match the accepted values for the operation parameter on the endpoint URI. For example: // from("direct:insert").to("mongodb:myDb?database=flights&collection=tickets&operation=insert"); Object result = template.requestBodyAndHeader("direct:insert", "irrelevantBody", MongoDbConstants.OPERATION_HEADER, "count"); assertTrue("Result is not of type Long", result instanceof Long); 228.7. Tailable Cursor Consumer MongoDB offers a mechanism to instantaneously consume ongoing data from a collection, by keeping the cursor open just like the tail -f command of *nix systems. This mechanism is significantly more efficient than a scheduled poll, due to the fact that the server pushes new data to the client as it becomes available, rather than making the client ping back at scheduled intervals to fetch new data. It also reduces otherwise redundant network traffic. There is only one prerequisite to use tailable cursors: the collection must be a "capped collection", meaning that it will only hold N objects, and when the limit is reached, MongoDB flushes old objects in the same order they were originally inserted. For more information, please refer to: http://www.mongodb.org/display/DOCS/Tailable+Cursors . The Camel MongoDB component implements a tailable cursor consumer, making this feature available for you to use in your Camel routes. As new objects are inserted, MongoDB will push them as DBObjects in natural order to your tailable cursor consumer, which will transform them into an Exchange and will trigger your route logic. 228.8. How the tailable cursor consumer works To turn a cursor into a tailable cursor, a few special flags are to be signalled to MongoDB when first generating the cursor. Once created, the cursor will then stay open and will block upon calling the DBCursor.next() method until new data arrives. However, the MongoDB server reserves the right to kill your cursor if new data doesn't appear after an indeterminate period. If you want to continue consuming new data, you have to regenerate the cursor. 
And to do so, you will have to remember the position where you left off or else you will start consuming from the top again. The Camel MongoDB tailable cursor consumer takes care of all these tasks for you. You will just need to provide the key to some field in your data of increasing nature, which will act as a marker to position your cursor every time it is regenerated, e.g. a timestamp, a sequential ID, etc. It can be of any datatype supported by MongoDB. Date, Strings and Integers are found to work well. We call this mechanism "tail tracking" in the context of this component. The consumer will remember the last value of this field and whenever the cursor is to be regenerated, it will run the query with a filter like: increasingField > lastValue , so that only unread data is consumed. Setting the increasing field: Set the key of the increasing field on the endpoint URI tailTrackingIncreasingField option. In Camel 2.10, it must be a top-level field in your data, as nested navigation for this field is not yet supported. That is, the "timestamp" field is okay, but "nested.timestamp" will not work. Please open a ticket in the Camel JIRA if you do require support for nested increasing fields. Cursor regeneration delay: One thing to note is that if new data is not already available upon initialisation, MongoDB will kill the cursor instantly. Since we don't want to overwhelm the server in this case, a cursorRegenerationDelay option has been introduced (with a default value of 1000ms.), which you can modify to suit your needs. An example: from("mongodb:myDb?database=flights&collection=cancellations&tailTrackIncreasingField=departureTime") .id("tailableCursorConsumer1") .autoStartup(false) .to("mock:test"); The above route will consume from the "flights.cancellations" capped collection, using "departureTime" as the increasing field, with a default regeneration cursor delay of 1000ms. 228.9. Persistent tail tracking Standard tail tracking is volatile and the last value is only kept in memory. However, in practice you will need to restart your Camel container every now and then, but your last value would then be lost and your tailable cursor consumer would start consuming from the top again, very likely sending duplicate records into your route. To overcome this situation, you can enable the persistent tail tracking feature to keep track of the last consumed increasing value in a special collection inside your MongoDB database too. When the consumer initialises again, it will restore the last tracked value and continue as if nothing happened. The last read value is persisted on two occasions: every time the cursor is regenerated and when the consumer shuts down. We may consider persisting at regular intervals too in the future (flush every 5 seconds) for added robustness if the demand is there. To request this feature, please open a ticket in the Camel JIRA. 228.10. Enabling persistent tail tracking To enable this function, set at least the following options on the endpoint URI: persistentTailTracking option to true persistentId option to a unique identifier for this consumer, so that the same collection can be reused across many consumers Additionally, you can set the tailTrackDb , tailTrackCollection and tailTrackField options to customise where the runtime information will be stored. Refer to the endpoint options table at the top of this page for descriptions of each option. 
For example, the following route will consume from the "flights.cancellations" capped collection, using "departureTime" as the increasing field, with a default regeneration cursor delay of 1000ms, with persistent tail tracking turned on, and persisting under the "cancellationsTracker" id on the "flights.camelTailTracking", storing the last processed value under the "lastTrackingValue" field ( camelTailTracking and lastTrackingValue are defaults). from("mongodb:myDb?database=flights&collection=cancellations&tailTrackIncreasingField=departureTime&persistentTailTracking=true" + "&persistentId=cancellationsTracker") .id("tailableCursorConsumer2") .autoStartup(false) .to("mock:test"); Below is another example identical to the one above, but where the persistent tail tracking runtime information will be stored under the "trackers.camelTrackers" collection, in the "lastProcessedDepartureTime" field: from("mongodb:myDb?database=flights&collection=cancellations&tailTrackIncreasingField=departureTime&persistentTailTracking=true" + "&persistentId=cancellationsTracker&tailTrackDb=trackers&tailTrackCollection=camelTrackers" + "&tailTrackField=lastProcessedDepartureTime") .id("tailableCursorConsumer3") .autoStartup(false) .to("mock:test"); 228.11. Oplog Tail Tracking The oplog collection tracking feature allows to implement trigger like functionality in MongoDB. In order to activate this collection you will have first to activate a replica set. For more information on this topic please check https://docs.mongodb.com/manual/tutorial/deploy-replica-set/ . Below you can find an example of a Java DSL based route demonstrating how you can use the component to track the oplog collection. In this specific case we are filtering the events which affect a collection customers in database optlog_test . Note that the tailTrackIncreasingField is a timestamp field ('ts') which implies that you have to use the tailTrackingStrategy parameter with the TIMESTAMP value. import com.mongodb.BasicDBObject; import com.mongodb.MongoClient; import org.apache.camel.Exchange; import org.apache.camel.Message; import org.apache.camel.Processor; import org.apache.camel.builder.RouteBuilder; import org.apache.camel.component.mongodb.MongoDBTailTrackingEnum; import org.apache.camel.main.Main; import java.io.InputStream; /** * For this to work you need to turn on the replica set * <p> * Commands to create a replica set: * <p> * rs.initiate( { * _id : "rs0", * members: [ { _id : 0, host : "localhost:27017" } ] * }) */ public class MongoDbTracker { private final String database; private final String collection; private final String increasingField; private MongoDBTailTrackingEnum trackingStrategy; private int persistRecords = -1; private boolean persistenTailTracking; public MongoDbTracker(String database, String collection, String increasingField) { this.database = database; this.collection = collection; this.increasingField = increasingField; } public static void main(String[] args) throws Exception { final MongoDbTracker mongoDbTracker = new MongoDbTracker("local", "oplog.rs", "ts"); mongoDbTracker.setTrackingStrategy(MongoDBTailTrackingEnum.TIMESTAMP); mongoDbTracker.setPersistRecords(5); mongoDbTracker.setPersistenTailTracking(true); mongoDbTracker.startRouter(); // run until you terminate the JVM System.out.println("Starting Camel. 
Use ctrl + c to terminate the JVM.\n"); } public void setTrackingStrategy(MongoDBTailTrackingEnum trackingStrategy) { this.trackingStrategy = trackingStrategy; } public void setPersistRecords(int persistRecords) { this.persistRecords = persistRecords; } public void setPersistenTailTracking(boolean persistenTailTracking) { this.persistenTailTracking = persistenTailTracking; } void startRouter() throws Exception { // create a Main instance Main main = new Main(); main.bind(MongoConstants.CONN_NAME, new MongoClient("localhost", 27017)); main.addRouteBuilder(new RouteBuilder() { @Override public void configure() throws Exception { getContext().getTypeConverterRegistry().addTypeConverter(InputStream.class, BasicDBObject.class, new MongoToInputStreamConverter()); from("mongodb://" + MongoConstants.CONN_NAME + "?database=" + database + "&collection=" + collection + "&persistentTailTracking=" + persistenTailTracking + "&persistentId=trackerName" + "&tailTrackDb=local" + "&tailTrackCollection=talendTailTracking" + "&tailTrackField=lastTrackingValue" + "&tailTrackIncreasingField=" + increasingField + "&tailTrackingStrategy=" + trackingStrategy.toString() + "&persistRecords=" + persistRecords + "&cursorRegenerationDelay=1000") .filter().jsonpath("$[?(@.ns=='optlog_test.customers')]") .id("logger") .to("log:logger?level=WARN") .process(new Processor() { public void process(Exchange exchange) throws Exception { Message message = exchange.getIn(); System.out.println(message.getBody().toString()); exchange.getOut().setBody(message.getBody().toString()); } }); } }); main.run(); } } 228.12. Type conversions The MongoDbBasicConverters type converter included with the camel-mongodb component provides the following conversions: Name From type To type How? fromMapToDBObject Map DBObject constructs a new BasicDBObject via the new BasicDBObject(Map m) constructor fromBasicDBObjectToMap BasicDBObject Map BasicDBObject already implements Map fromStringToDBObject String DBObject uses com.mongodb.util.JSON.parse(String s) fromAnyObjectToDBObject Object DBObject uses the Jackson library to convert the object to a Map , which is in turn used to initialise a new BasicDBObject This type converter is auto-discovered, so you don't need to configure anything manually. 228.13. See also MongoDB website NoSQL Wikipedia article MongoDB Java driver API docs - current version Unit tests for more examples of usage
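As a quick illustration of the String to DBObject conversion listed above, the following is a minimal sketch of a route that sets a plain JSON string as the message body and lets the converter turn it into a query document. The myDb connection bean and the science/notableScientists database and collection names are reused from the earlier examples and are assumptions, not requirements.

import org.apache.camel.builder.RouteBuilder;

// Minimal sketch: the plain JSON String body is converted to a DBObject by the
// fromStringToDBObject converter before the findOneByQuery operation runs.
public class StringToDBObjectRoute extends RouteBuilder {
    @Override
    public void configure() {
        from("direct:stringQuery")
            .setBody().constant("{ \"scientist\": \"Darwin\" }")
            .to("mongodb:myDb?database=science&collection=notableScientists&operation=findOneByQuery")
            .to("log:typeConversionExample");
    }
}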
[ "<dependency> <groupId>org.apache.camel</groupId> <artifactId>camel-mongodb</artifactId> <version>x.y.z</version> <!-- use the same version as your Camel core version --> </dependency>", "mongodb:connectionBean?database=databaseName&collection=collectionName&operation=operationName[&moreOptions...]", "mongodb:connectionBean", "<?xml version=\"1.0\" encoding=\"UTF-8\"?> <beans xmlns=\"http://www.springframework.org/schema/beans\" xmlns:xsi=\"http://www.w3.org/2001/XMLSchema-instance\" xsi:schemaLocation=\"http://www.springframework.org/schema/beans http://www.springframework.org/schema/beans/spring-beans.xsd\"> <bean id=\"mongoBean\" class=\"com.mongodb.Mongo\"> <constructor-arg name=\"host\" value=\"USD{mongodb.host}\" /> <constructor-arg name=\"port\" value=\"USD{mongodb.port}\" /> </bean> </beans>", "<route> <from uri=\"direct:start\" /> <!-- using bean 'mongoBean' defined above --> <to uri=\"mongodb:mongoBean?database=USD{mongodb.database}&amp;collection=USD{mongodb.collection}&amp;operation=getDbStats\" /> <to uri=\"direct:result\" /> </route>", "from(\"direct:findById\") .to(\"mongodb:myDb?database=flights&collection=tickets&operation=findById\") .to(\"mock:resultFindById\");", "from(\"direct:findOneByQuery\") .to(\"mongodb:myDb?database=flights&collection=tickets&operation=findOneByQuery\") .to(\"mock:resultFindOneByQuery\");", "from(\"direct:findOneByQuery\") .setBody().constant(\"{ \\\"name\\\": \\\"Raul Kripalani\\\" }\") .to(\"mongodb:myDb?database=flights&collection=tickets&operation=findOneByQuery\") .to(\"mock:resultFindOneByQuery\");", "from(\"direct:findAll\") .to(\"mongodb:myDb?database=flights&collection=tickets&operation=findAll\") .to(\"mock:resultFindAll\");", "from(\"direct:findAll\") .setBody().constant(\"{ \\\"name\\\": \\\"Raul Kripalani\\\" }\") .to(\"mongodb:myDb?database=flights&collection=tickets&operation=findAll\") .to(\"mock:resultFindAll\");", "from(\"direct:findAll\") .setHeader(MongoDbConstants.BATCH_SIZE).constant(10) .setBody().constant(\"{ \\\"name\\\": \\\"Raul Kripalani\\\" }\") .to(\"mongodb:myDb?database=flights&collection=tickets&operation=findAll&outputType=DBCursor\") .to(\"mock:resultFindAll\");", "// from(\"direct:count\").to(\"mongodb:myDb?database=tickets&collection=flights&operation=count&dynamicity=true\"); Long result = template.requestBodyAndHeader(\"direct:count\", \"irrelevantBody\", MongoDbConstants.COLLECTION, \"dynamicCollectionName\"); assertTrue(\"Result is not of type Long\", result instanceof Long);", "DBObject query = Long count = template.requestBodyAndHeader(\"direct:count\", query, MongoDbConstants.COLLECTION, \"dynamicCollectionName\");", "// route: from(\"direct:findAll\").to(\"mongodb:myDb?database=flights&collection=tickets&operation=findAll\") DBObject fieldFilter = BasicDBObjectBuilder.start().add(\"_id\", 0).add(\"boringField\", 0).get(); Object result = template.requestBodyAndHeader(\"direct:findAll\", (Object) null, MongoDbConstants.FIELDS_FILTER, fieldFilter);", "db.collection.find().sort({_id: -1}).limit(1) // or db.collection.findOne({USDquery:{},USDorderby:{_id:-1}})", ".from(\"direct:someTriggeringEvent\") .setHeader(MongoDbConstants.SORT_BY).constant(\"{\\\"documentTimestamp\\\": -1}\") .setHeader(MongoDbConstants.FIELDS_FILTER).constant(\"{\\\"documentTimestamp\\\": 1}\") .setBody().constant(\"{}\") .to(\"mongodb:myDb?database=local&collection=myDemoCollection&operation=findOneByQuery\") .to(\"direct:aMyBatisParameterizedSelect\") ;", "from(\"direct:insert\") 
.to(\"mongodb:myDb?database=flights&collection=tickets&operation=insert\");", "from(\"direct:insert\") .to(\"mongodb:myDb?database=flights&collection=tickets&operation=save\");", "// route: from(\"direct:update\").to(\"mongodb:myDb?database=science&collection=notableScientists&operation=update\"); DBObject filterField = new BasicDBObject(\"filterField\", true); DBObject updateObj = new BasicDBObject(\"USDset\", new BasicDBObject(\"scientist\", \"Darwin\")); Object result = template.requestBodyAndHeader(\"direct:update\", new Object[] {filterField, updateObj}, MongoDbConstants.MULTIUPDATE, true);", "// route: from(\"direct:remove\").to(\"mongodb:myDb?database=science&collection=notableScientists&operation=remove\"); DBObject conditionField = new BasicDBObject(\"conditionField\", true); Object result = template.requestBody(\"direct:remove\", conditionField);", "// route: from(\"direct:bulkWrite\").to(\"mongodb:myDb?database=science&collection=notableScientists&operation=bulkWrite\"); List<WriteModel<DBObject>> bulkOperations = Arrays.asList( new InsertOneModel<>(new BasicDBObject(\"scientist\", \"Pierre Curie\")), new UpdateOneModel<>(new BasicDBObject(\"_id\", \"5\"), new BasicDBObject(\"USDset\", new BasicDBObject(\"scientist\", \"Marie Curie\"))), new DeleteOneModel<>(new BasicDBObject(\"_id\", \"3\"))); BulkWriteResult result = template.requestBody(\"direct:bulkWrite\", bulkOperations, BulkWriteResult.class);", "// route: from(\"direct:aggregate\").to(\"mongodb:myDb?database=science&collection=notableScientists&operation=aggregate\"); from(\"direct:aggregate\") .setBody().constant(\"[{ USDmatch : {USDor : [{\\\"scientist\\\" : \\\"Darwin\\\"},{\\\"scientist\\\" : \\\"Einstein\\\"}]}},{ USDgroup: { _id: \\\"USDscientist\\\", count: { USDsum: 1 }} } ]\") .to(\"mongodb:myDb?database=science&collection=notableScientists&operation=aggregate\") .to(\"mock:resultAggregate\");", "// route: from(\"direct:aggregate\").to(\"mongodb:myDb?database=science&collection=notableScientists&operation=aggregate\"); from(\"direct:aggregate\") .setHeader(MongoDbConstants.BATCH_SIZE).constant(10) .setBody().constant(\"[{ USDmatch : {USDor : [{\\\"scientist\\\" : \\\"Darwin\\\"},{\\\"scientist\\\" : \\\"Einstein\\\"}]}},{ USDgroup: { _id: \\\"USDscientist\\\", count: { USDsum: 1 }} } ]\") .to(\"mongodb:myDb?database=science&collection=notableScientists&operation=aggregate&outputType=DBCursor\") .to(\"mock:resultAggregate\");", "> db.stats(); { \"db\" : \"test\", \"collections\" : 7, \"objects\" : 719, \"avgObjSize\" : 59.73296244784423, \"dataSize\" : 42948, \"storageSize\" : 1000058880, \"numExtents\" : 9, \"indexes\" : 4, \"indexSize\" : 32704, \"fileSize\" : 1275068416, \"nsSizeMB\" : 16, \"ok\" : 1 }", "// from(\"direct:getDbStats\").to(\"mongodb:myDb?database=flights&collection=tickets&operation=getDbStats\"); Object result = template.requestBody(\"direct:getDbStats\", \"irrelevantBody\"); assertTrue(\"Result is not of type DBObject\", result instanceof DBObject);", "> db.camelTest.stats(); { \"ns\" : \"test.camelTest\", \"count\" : 100, \"size\" : 5792, \"avgObjSize\" : 57.92, \"storageSize\" : 20480, \"numExtents\" : 2, \"nindexes\" : 1, \"lastExtentSize\" : 16384, \"paddingFactor\" : 1, \"flags\" : 1, \"totalIndexSize\" : 8176, \"indexSizes\" : { \"_id_\" : 8176 }, \"ok\" : 1 }", "// from(\"direct:getColStats\").to(\"mongodb:myDb?database=flights&collection=tickets&operation=getColStats\"); Object result = template.requestBody(\"direct:getColStats\", \"irrelevantBody\"); assertTrue(\"Result is not of 
type DBObject\", result instanceof DBObject);", "// route: from(\"command\").to(\"mongodb:myDb?database=science&operation=command\"); DBObject commandBody = new BasicDBObject(\"hostInfo\", \"1\"); Object result = template.requestBody(\"direct:command\", commandBody);", "// from(\"direct:insert\").to(\"mongodb:myDb?database=flights&collection=tickets&operation=insert\"); Object result = template.requestBodyAndHeader(\"direct:insert\", \"irrelevantBody\", MongoDbConstants.OPERATION_HEADER, \"count\"); assertTrue(\"Result is not of type Long\", result instanceof Long);", "from(\"mongodb:myDb?database=flights&collection=cancellations&tailTrackIncreasingField=departureTime\") .id(\"tailableCursorConsumer1\") .autoStartup(false) .to(\"mock:test\");", "from(\"mongodb:myDb?database=flights&collection=cancellations&tailTrackIncreasingField=departureTime&persistentTailTracking=true\" + \"&persistentId=cancellationsTracker\") .id(\"tailableCursorConsumer2\") .autoStartup(false) .to(\"mock:test\");", "from(\"mongodb:myDb?database=flights&collection=cancellations&tailTrackIncreasingField=departureTime&persistentTailTracking=true\" + \"&persistentId=cancellationsTracker&tailTrackDb=trackers&tailTrackCollection=camelTrackers\" + \"&tailTrackField=lastProcessedDepartureTime\") .id(\"tailableCursorConsumer3\") .autoStartup(false) .to(\"mock:test\");", "import com.mongodb.BasicDBObject; import com.mongodb.MongoClient; import org.apache.camel.Exchange; import org.apache.camel.Message; import org.apache.camel.Processor; import org.apache.camel.builder.RouteBuilder; import org.apache.camel.component.mongodb.MongoDBTailTrackingEnum; import org.apache.camel.main.Main; import java.io.InputStream; /** * For this to work you need to turn on the replica set * <p> * Commands to create a replica set: * <p> * rs.initiate( { * _id : \"rs0\", * members: [ { _id : 0, host : \"localhost:27017\" } ] * }) */ public class MongoDbTracker { private final String database; private final String collection; private final String increasingField; private MongoDBTailTrackingEnum trackingStrategy; private int persistRecords = -1; private boolean persistenTailTracking; public MongoDbTracker(String database, String collection, String increasingField) { this.database = database; this.collection = collection; this.increasingField = increasingField; } public static void main(String[] args) throws Exception { final MongoDbTracker mongoDbTracker = new MongoDbTracker(\"local\", \"oplog.rs\", \"ts\"); mongoDbTracker.setTrackingStrategy(MongoDBTailTrackingEnum.TIMESTAMP); mongoDbTracker.setPersistRecords(5); mongoDbTracker.setPersistenTailTracking(true); mongoDbTracker.startRouter(); // run until you terminate the JVM System.out.println(\"Starting Camel. 
Use ctrl + c to terminate the JVM.\\n\"); } public void setTrackingStrategy(MongoDBTailTrackingEnum trackingStrategy) { this.trackingStrategy = trackingStrategy; } public void setPersistRecords(int persistRecords) { this.persistRecords = persistRecords; } public void setPersistenTailTracking(boolean persistenTailTracking) { this.persistenTailTracking = persistenTailTracking; } void startRouter() throws Exception { // create a Main instance Main main = new Main(); main.bind(MongoConstants.CONN_NAME, new MongoClient(\"localhost\", 27017)); main.addRouteBuilder(new RouteBuilder() { @Override public void configure() throws Exception { getContext().getTypeConverterRegistry().addTypeConverter(InputStream.class, BasicDBObject.class, new MongoToInputStreamConverter()); from(\"mongodb://\" + MongoConstants.CONN_NAME + \"?database=\" + database + \"&collection=\" + collection + \"&persistentTailTracking=\" + persistenTailTracking + \"&persistentId=trackerName\" + \"&tailTrackDb=local\" + \"&tailTrackCollection=talendTailTracking\" + \"&tailTrackField=lastTrackingValue\" + \"&tailTrackIncreasingField=\" + increasingField + \"&tailTrackingStrategy=\" + trackingStrategy.toString() + \"&persistRecords=\" + persistRecords + \"&cursorRegenerationDelay=1000\") .filter().jsonpath(\"USD[?(@.ns=='optlog_test.customers')]\") .id(\"logger\") .to(\"log:logger?level=WARN\") .process(new Processor() { public void process(Exchange exchange) throws Exception { Message message = exchange.getIn(); System.out.println(message.getBody().toString()); exchange.getOut().setBody(message.getBody().toString()); } }); } }); main.run(); } }" ]
https://docs.redhat.com/en/documentation/red_hat_fuse/7.13/html/apache_camel_component_reference/mongodb-component
Chapter 1. Distributed tracing release notes
Chapter 1. Distributed tracing release notes 1.1. Distributed tracing overview As a service owner, you can use distributed tracing to instrument your services to gather insights into your service architecture. You can use distributed tracing for monitoring, network profiling, and troubleshooting the interaction between components in modern, cloud-native, microservices-based applications. With distributed tracing you can perform the following functions: Monitor distributed transactions Optimize performance and latency Perform root cause analysis Red Hat OpenShift distributed tracing consists of two main components: Red Hat OpenShift distributed tracing platform - This component is based on the open source Jaeger project . Red Hat OpenShift distributed tracing data collection - This component is based on the open source OpenTelemetry project . Both of these components are based on the vendor-neutral OpenTracing APIs and instrumentation. 1.2. Making open source more inclusive Red Hat is committed to replacing problematic language in our code, documentation, and web properties. We are beginning with these four terms: master, slave, blacklist, and whitelist. Because of the enormity of this endeavor, these changes will be implemented gradually over several upcoming releases. For more details, see our CTO Chris Wright's message . 1.3. Getting support If you experience difficulty with a procedure described in this documentation, or with OpenShift Container Platform in general, visit the Red Hat Customer Portal . From the Customer Portal, you can: Search or browse through the Red Hat Knowledgebase of articles and solutions relating to Red Hat products. Submit a support case to Red Hat Support. Access other product documentation. To identify issues with your cluster, you can use Insights in OpenShift Cluster Manager . Insights provides details about issues and, if available, information on how to solve a problem. If you have a suggestion for improving this documentation or have found an error, submit a Jira issue for the most relevant documentation component. Please provide specific details, such as the section name and OpenShift Container Platform version. 1.4. New features and enhancements This release adds improvements related to the following components and concepts. 1.4.1. New features and enhancements Red Hat OpenShift distributed tracing 2.7 This release of Red Hat OpenShift distributed tracing addresses Common Vulnerabilities and Exposures (CVEs) and bug fixes. 1.4.1.1. Component versions supported in Red Hat OpenShift distributed tracing version 2.7 Operator Component Version Red Hat OpenShift distributed tracing platform Jaeger 1.39 Red Hat OpenShift distributed tracing data collection OpenTelemetry 0.63.1 1.4.2. New features and enhancements Red Hat OpenShift distributed tracing 2.6 This release of Red Hat OpenShift distributed tracing addresses Common Vulnerabilities and Exposures (CVEs) and bug fixes. 1.4.2.1. Component versions supported in Red Hat OpenShift distributed tracing version 2.6 Operator Component Version Red Hat OpenShift distributed tracing platform Jaeger 1.38 Red Hat OpenShift distributed tracing data collection OpenTelemetry 0.60 1.4.3. New features and enhancements Red Hat OpenShift distributed tracing 2.5 This release of Red Hat OpenShift distributed tracing addresses Common Vulnerabilities and Exposures (CVEs) and bug fixes. This release introduces support for ingesting OpenTelemetry protocol (OTLP) to the Red Hat OpenShift distributed tracing platform Operator. 
The Operator now automatically enables the OTLP ports: Port 4317 is used for OTLP gRPC protocol. Port 4318 is used for OTLP HTTP protocol. This release also adds support for collecting Kubernetes resource attributes to the Red Hat OpenShift distributed tracing data collection Operator. 1.4.3.1. Component versions supported in Red Hat OpenShift distributed tracing version 2.5 Operator Component Version Red Hat OpenShift distributed tracing platform Jaeger 1.36 Red Hat OpenShift distributed tracing data collection OpenTelemetry 0.56 1.4.4. New features and enhancements Red Hat OpenShift distributed tracing 2.4 This release of Red Hat OpenShift distributed tracing addresses Common Vulnerabilities and Exposures (CVEs) and bug fixes. This release also adds support for auto-provisioning certificates using the Red Hat Elasticsearch Operator. Self-provisioning, which means using the Red Hat OpenShift distributed tracing platform Operator to call the Red Hat Elasticsearch Operator during installation. Self provisioning is fully supported with this release. Creating the Elasticsearch instance and certificates first and then configuring the distributed tracing platform to use the certificate is a Technology Preview for this release. Note When upgrading to Red Hat OpenShift distributed tracing 2.4, the Operator recreates the Elasticsearch instance, which might take five to ten minutes. Distributed tracing will be down and unavailable for that period. 1.4.4.1. Component versions supported in Red Hat OpenShift distributed tracing version 2.4 Operator Component Version Red Hat OpenShift distributed tracing platform Jaeger 1.34.1 Red Hat OpenShift distributed tracing data collection OpenTelemetry 0.49 1.4.5. New features and enhancements Red Hat OpenShift distributed tracing 2.3.1 This release of Red Hat OpenShift distributed tracing addresses Common Vulnerabilities and Exposures (CVEs) and bug fixes. 1.4.5.1. Component versions supported in Red Hat OpenShift distributed tracing version 2.3.1 Operator Component Version Red Hat OpenShift distributed tracing platform Jaeger 1.30.2 Red Hat OpenShift distributed tracing data collection OpenTelemetry 0.44.1-1 1.4.6. New features and enhancements Red Hat OpenShift distributed tracing 2.3.0 This release of Red Hat OpenShift distributed tracing addresses Common Vulnerabilities and Exposures (CVEs) and bug fixes. With this release, the Red Hat OpenShift distributed tracing platform Operator is now installed to the openshift-distributed-tracing namespace by default. Before this update, the default installation had been in the openshift-operators namespace. 1.4.6.1. Component versions supported in Red Hat OpenShift distributed tracing version 2.3.0 Operator Component Version Red Hat OpenShift distributed tracing platform Jaeger 1.30.1 Red Hat OpenShift distributed tracing data collection OpenTelemetry 0.44.0 1.4.7. New features and enhancements Red Hat OpenShift distributed tracing 2.2.0 This release of Red Hat OpenShift distributed tracing addresses Common Vulnerabilities and Exposures (CVEs) and bug fixes. 1.4.7.1. Component versions supported in Red Hat OpenShift distributed tracing version 2.2.0 Operator Component Version Red Hat OpenShift distributed tracing platform Jaeger 1.30.0 Red Hat OpenShift distributed tracing data collection OpenTelemetry 0.42.0 1.4.8. New features and enhancements Red Hat OpenShift distributed tracing 2.1.0 This release of Red Hat OpenShift distributed tracing addresses Common Vulnerabilities and Exposures (CVEs) and bug fixes. 
1.4.8.1. Component versions supported in Red Hat OpenShift distributed tracing version 2.1.0 Operator Component Version Red Hat OpenShift distributed tracing platform Jaeger 1.29.1 Red Hat OpenShift distributed tracing data collection OpenTelemetry 0.41.1 1.4.9. New features and enhancements Red Hat OpenShift distributed tracing 2.0.0 This release marks the rebranding of Red Hat OpenShift Jaeger to Red Hat OpenShift distributed tracing. This release consists of the following changes, additions, and improvements: Red Hat OpenShift distributed tracing now consists of the following two main components: Red Hat OpenShift distributed tracing platform - This component is based on the open source Jaeger project . Red Hat OpenShift distributed tracing data collection - This component is based on the open source OpenTelemetry project . Updates Red Hat OpenShift distributed tracing platform Operator to Jaeger 1.28. Going forward, Red Hat OpenShift distributed tracing will only support the stable Operator channel. Channels for individual releases are no longer supported. Introduces a new Red Hat OpenShift distributed tracing data collection Operator based on OpenTelemetry 0.33. Note that this Operator is a Technology Preview feature. Adds support for OpenTelemetry protocol (OTLP) to the Query service. Introduces a new distributed tracing icon that appears in the OpenShift OperatorHub. Includes rolling updates to the documentation to support the name change and new features. This release also addresses Common Vulnerabilities and Exposures (CVEs) and bug fixes. 1.4.9.1. Component versions supported in Red Hat OpenShift distributed tracing version 2.0.0 Operator Component Version Red Hat OpenShift distributed tracing platform Jaeger 1.28.0 Red Hat OpenShift distributed tracing data collection OpenTelemetry 0.33.0 1.5. Red Hat OpenShift distributed tracing Technology Preview Important Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope . 1.5.1. Red Hat OpenShift distributed tracing 2.4.0 Technology Preview This release also adds support for auto-provisioning certificates using the Red Hat Elasticsearch Operator. Self-provisioning, which means using the Red Hat OpenShift distributed tracing platform Operator to call the Red Hat Elasticsearch Operator during installation. Self provisioning is fully supported with this release. Creating the Elasticsearch instance and certificates first and then configuring the distributed tracing platform to use the certificate is a Technology Preview for this release. 1.5.2. Red Hat OpenShift distributed tracing 2.2.0 Technology Preview Unsupported OpenTelemetry Collector components included in the 2.1 release have been removed. 1.5.3. Red Hat OpenShift distributed tracing 2.1.0 Technology Preview This release introduces a breaking change to how to configure certificates in the OpenTelemetry custom resource file. In the new version, the ca_file moves under tls in the custom resource, as shown in the following examples. 
CA file configuration for OpenTelemetry version 0.33 spec: mode: deployment config: | exporters: jaeger: endpoint: jaeger-production-collector-headless.tracing-system.svc:14250 ca_file: "/var/run/secrets/kubernetes.io/serviceaccount/service-ca.crt" CA file configuration for OpenTelemetry version 0.41.1 spec: mode: deployment config: | exporters: jaeger: endpoint: jaeger-production-collector-headless.tracing-system.svc:14250 tls: ca_file: "/var/run/secrets/kubernetes.io/serviceaccount/service-ca.crt" 1.5.4. Red Hat OpenShift distributed tracing 2.0.0 Technology Preview This release includes the addition of the Red Hat OpenShift distributed tracing data collection, which you install using the Red Hat OpenShift distributed tracing data collection Operator. Red Hat OpenShift distributed tracing data collection is based on the OpenTelemetry APIs and instrumentation. Red Hat OpenShift distributed tracing data collection includes the OpenTelemetry Operator and Collector. The Collector can be used to receive traces in either the OpenTelemetry or Jaeger protocol and send the trace data to Red Hat OpenShift distributed tracing. Other capabilities of the Collector are not supported at this time. The OpenTelemetry Collector allows developers to instrument their code with vendor agnostic APIs, avoiding vendor lock-in and enabling a growing ecosystem of observability tooling. 1.6. Red Hat OpenShift distributed tracing known issues These limitations exist in Red Hat OpenShift distributed tracing: Apache Spark is not supported. The streaming deployment via AMQ/Kafka is unsupported on IBM Z and IBM Power Systems. These are the known issues for Red Hat OpenShift distributed tracing: OBSDA-220 In some cases, if you try to pull an image using distributed tracing data collection, the image pull fails and a Failed to pull image error message appears. There is no workaround for this issue. TRACING-2057 The Kafka API has been updated to v1beta2 to support the Strimzi Kafka Operator 0.23.0. However, this API version is not supported by AMQ Streams 1.6.3. If you have the following environment, your Jaeger services will not be upgraded, and you cannot create new Jaeger services or modify existing Jaeger services: Jaeger Operator channel: 1.17.x stable or 1.20.x stable AMQ Streams Operator channel: amq-streams-1.6.x To resolve this issue, switch the subscription channel for your AMQ Streams Operator to either amq-streams-1.7.x or stable . 1.7. Red Hat OpenShift distributed tracing fixed issues OSSM-1910 Because of an issue introduced in version 2.6, TLS connections could not be established with OpenShift Container Platform Service Mesh. This update resolves the issue by changing the service port names to match conventions used by OpenShift Container Platform Service Mesh and Istio. OBSDA-208 Before this update, the default 200m CPU and 256Mi memory resource limits could cause distributed tracing data collection to restart continuously on large clusters. This update resolves the issue by removing these resource limits. OBSDA-222 Before this update, spans could be dropped in the OpenShift Container Platform distributed tracing platform. To help prevent this issue from occurring, this release updates version dependencies. 
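The TRACING-2057 workaround above involves switching the AMQ Streams Operator subscription channel. A hedged sketch of how that might look with the oc CLI follows; the Subscription name (amq-streams) and namespace (openshift-operators) are assumptions and must be adjusted to match your cluster.

# List Operator subscriptions to find the AMQ Streams Subscription name.
oc get subscriptions -n openshift-operators
# Switch the Subscription to a supported channel, for example amq-streams-1.7.x or stable.
oc patch subscription amq-streams -n openshift-operators --type merge -p '{"spec": {"channel": "amq-streams-1.7.x"}}'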
TRACING-2337 Jaeger is logging a repetitive warning message in the Jaeger logs similar to the following: {"level":"warn","ts":1642438880.918793,"caller":"channelz/logging.go:62","msg":"[core]grpc: Server.Serve failed to create ServerTransport: connection error: desc = \"transport: http2Server.HandleStreams received bogus greeting from client: \\\"\\\\x16\\\\x03\\\\x01\\\\x02\\\\x00\\\\x01\\\\x00\\\\x01\\\\xfc\\\\x03\\\\x03vw\\\\x1a\\\\xc9T\\\\xe7\\\\xdaCj\\\\xb7\\\\x8dK\\\\xa6\\\"\"","system":"grpc","grpc_log":true} This issue was resolved by exposing only the HTTP(S) port of the query service, and not the gRPC port. TRACING-2009 The Jaeger Operator has been updated to include support for the Strimzi Kafka Operator 0.23.0. TRACING-1907 The Jaeger agent sidecar injection was failing due to missing config maps in the application namespace. The config maps were getting automatically deleted due to an incorrect OwnerReference field setting and as a result, the application pods were not moving past the "ContainerCreating" stage. The incorrect settings have been removed. TRACING-1725 Follow-up to TRACING-1631. Additional fix to ensure that Elasticsearch certificates are properly reconciled when there are multiple Jaeger production instances, using same name but within different namespaces. See also BZ-1918920 . TRACING-1631 Multiple Jaeger production instances, using same name but within different namespaces, causing Elasticsearch certificate issue. When multiple service meshes were installed, all of the Jaeger Elasticsearch instances had the same Elasticsearch secret instead of individual secrets, which prevented the OpenShift Elasticsearch Operator from communicating with all of the Elasticsearch clusters. TRACING-1300 Failed connection between Agent and Collector when using Istio sidecar. An update of the Jaeger Operator enabled TLS communication by default between a Jaeger sidecar agent and the Jaeger Collector. TRACING-1208 Authentication "500 Internal Error" when accessing Jaeger UI. When trying to authenticate to the UI using OAuth, I get a 500 error because oauth-proxy sidecar doesn't trust the custom CA bundle defined at installation time with the additionalTrustBundle . TRACING-1166 It is not currently possible to use the Jaeger streaming strategy within a disconnected environment. When a Kafka cluster is being provisioned, it results in a error: Failed to pull image registry.redhat.io/amq7/amq-streams-kafka-24-rhel7@sha256:f9ceca004f1b7dccb3b82d9a8027961f9fe4104e0ed69752c0bdd8078b4a1076 . TRACING-809 Jaeger Ingester is incompatible with Kafka 2.3. When there are two or more instances of the Jaeger Ingester and enough traffic it will continuously generate rebalancing messages in the logs. This is due to a regression in Kafka 2.3 that was fixed in Kafka 2.3.1. For more information, see Jaegertracing-1819 . BZ-1918920 / LOG-1619 The Elasticsearch pods does not get restarted automatically after an update. Workaround: Restart the pods manually.
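For the BZ-1918920 / LOG-1619 workaround above, restarting the Elasticsearch pods manually might look like the following sketch; the namespace and label selector are assumptions that depend on where your Jaeger and Elasticsearch instances run.

# Inspect the Elasticsearch pods first (namespace and label selector are illustrative).
oc get pods -n tracing-system -l component=elasticsearch
# Delete the pods so that the OpenShift Elasticsearch Operator recreates them.
oc delete pod -n tracing-system -l component=elasticsearch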
[ "spec: mode: deployment config: | exporters: jaeger: endpoint: jaeger-production-collector-headless.tracing-system.svc:14250 ca_file: \"/var/run/secrets/kubernetes.io/serviceaccount/service-ca.crt\"", "spec: mode: deployment config: | exporters: jaeger: endpoint: jaeger-production-collector-headless.tracing-system.svc:14250 tls: ca_file: \"/var/run/secrets/kubernetes.io/serviceaccount/service-ca.crt\"", "{\"level\":\"warn\",\"ts\":1642438880.918793,\"caller\":\"channelz/logging.go:62\",\"msg\":\"[core]grpc: Server.Serve failed to create ServerTransport: connection error: desc = \\\"transport: http2Server.HandleStreams received bogus greeting from client: \\\\\\\"\\\\\\\\x16\\\\\\\\x03\\\\\\\\x01\\\\\\\\x02\\\\\\\\x00\\\\\\\\x01\\\\\\\\x00\\\\\\\\x01\\\\\\\\xfc\\\\\\\\x03\\\\\\\\x03vw\\\\\\\\x1a\\\\\\\\xc9T\\\\\\\\xe7\\\\\\\\xdaCj\\\\\\\\xb7\\\\\\\\x8dK\\\\\\\\xa6\\\\\\\"\\\"\",\"system\":\"grpc\",\"grpc_log\":true}" ]
https://docs.redhat.com/en/documentation/openshift_container_platform/4.9/html/distributed_tracing/distr-tracing-release-notes
Chapter 3. Monitoring a Ceph storage cluster
Chapter 3. Monitoring a Ceph storage cluster As a storage administrator, you can monitor the overall health of the Red Hat Ceph Storage cluster, along with monitoring the health of the individual components of Ceph. Once you have a running Red Hat Ceph Storage cluster, you might begin monitoring the storage cluster to ensure that the Ceph Monitor and Ceph OSD daemons are running, at a high-level. Ceph storage cluster clients connect to a Ceph Monitor and receive the latest version of the storage cluster map before they can read and write data to the Ceph pools within the storage cluster. So the monitor cluster must have agreement on the state of the cluster before Ceph clients can read and write data. Ceph OSDs must peer the placement groups on the primary OSD with the copies of the placement groups on secondary OSDs. If faults arise, peering will reflect something other than the active + clean state. 3.1. Prerequisites A running Red Hat Ceph Storage cluster. 3.2. High-level monitoring of a Ceph storage cluster As a storage administrator, you can monitor the health of the Ceph daemons to ensure that they are up and running. High level monitoring also involves checking the storage cluster capacity to ensure that the storage cluster does not exceed its full ratio . The Red Hat Ceph Storage Dashboard is the most common way to conduct high-level monitoring. However, you can also use the command-line interface, the Ceph admin socket or the Ceph API to monitor the storage cluster. 3.2.1. Prerequisites A running Red Hat Ceph Storage cluster. 3.2.2. Using the Ceph command interface interactively You can interactively interface with the Ceph storage cluster by using the ceph command-line utility. Prerequisites A running Red Hat Ceph Storage cluster. Root-level access to the node. Procedure To run the ceph utility in interactive mode. Syntax Replace MONITOR_NAME with the name of the Ceph Monitor container, found by running the podman ps command. Example This example opens an interactive terminal session on mon.host01 , where you can start the Ceph interactive shell. 3.2.3. Checking the storage cluster health After you start the Ceph storage cluster, and before you start reading or writing data, check the storage cluster's health first. Prerequisites A running Red Hat Ceph Storage cluster. Root-level access to the node. Procedure Log into the Cephadm shell: Example You can check on the health of the Ceph storage cluster with the following command: Example You can check the status of the Ceph storage cluster by running ceph status command: Example The output provides the following information: Cluster ID Cluster health status The monitor map epoch and the status of the monitor quorum. The OSD map epoch and the status of OSDs. The status of Ceph Managers. The status of Object Gateways. The placement group map version. The number of placement groups and pools. The notional amount of data stored and the number of objects stored. The total amount of data stored. Upon starting the Ceph cluster, you will likely encounter a health warning such as HEALTH_WARN XXX num placement groups stale . Wait a few moments and check it again. When the storage cluster is ready, ceph health should return a message such as HEALTH_OK . At that point, it is okay to begin using the cluster. 3.2.4. Watching storage cluster events You can watch events that are happening with the Ceph storage cluster using the command-line interface. Prerequisites A running Red Hat Ceph Storage cluster. Root-level access to the node. 
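The Syntax and Example placeholders in the subsections above, and in the procedure that follows, refer to standard Ceph CLI invocations. A hedged sketch of the typical commands on a cephadm-deployed cluster is shown below; the monitor container name is a placeholder that you can resolve with podman ps.

# Open an interactive shell inside a Ceph Monitor container, or use the cephadm shell on an admin node.
podman exec -it <monitor-container-name> /bin/bash
cephadm shell
# Check the storage cluster health and overall status.
ceph health
ceph health detail
ceph status        # equivalent to: ceph -s
# Watch the cluster's ongoing events (used in the procedure that follows).
ceph -w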
Procedure Log into the Cephadm shell: Example To watch the cluster's ongoing events, run the following command: Example 3.2.5. How Ceph calculates data usage The used value reflects the actual amount of raw storage used. The xxx GB / xxx GB value means the amount available, the lesser of the two numbers, of the overall storage capacity of the cluster. The notional number reflects the size of the stored data before it is replicated, cloned or snapshotted. Therefore, the amount of data actually stored typically exceeds the notional amount stored, because Ceph creates replicas of the data and may also use storage capacity for cloning and snapshotting. 3.2.6. Understanding the storage clusters usage stats To check a cluster's data usage and data distribution among pools, use the df option. It is similar to the Linux df command. The SIZE / AVAIL / RAW USED in the ceph df and ceph status command output are different if some OSDs are marked OUT of the cluster compared to when all OSDs are IN . The SIZE / AVAIL / RAW USED is calculated from sum of SIZE (osd disk size), RAW USE (total used space on disk), and AVAIL of all OSDs which are in IN state. You can see the total of SIZE / AVAIL / RAW USED for all OSDs in ceph osd df tree command output. Example The ceph df detail command gives more details about other pool statistics such as quota objects, quota bytes, used compression, and under compression. The RAW STORAGE section of the output provides an overview of the amount of storage the storage cluster manages for data. CLASS: The class of OSD device. SIZE: The amount of storage capacity managed by the storage cluster. In the above example, if the SIZE is 90 GiB, it is the total size without the replication factor, which is three by default. The total available capacity with the replication factor is 90 GiB/3 = 30 GiB. Based on the full ratio, which is 0.85% by default, the maximum available space is 30 GiB * 0.85 = 25.5 GiB AVAIL: The amount of free space available in the storage cluster. In the above example, if the SIZE is 90 GiB and the USED space is 6 GiB, then the AVAIL space is 84 GiB. The total available space with the replication factor, which is three by default, is 84 GiB/3 = 28 GiB USED: The amount of raw storage consumed by user data. In the above example, 100 MiB is the total space available after considering the replication factor. The actual available size is 33 MiB. RAW USED: The amount of raw storage consumed by user data, internal overhead, or reserved capacity. % RAW USED: The percentage of RAW USED . Use this number in conjunction with the full ratio and near full ratio to ensure that you are not reaching the storage cluster's capacity. The POOLS section of the output provides a list of pools and the notional usage of each pool. The output from this section DOES NOT reflect replicas, clones or snapshots. For example, if you store an object with 1 MB of data, the notional usage will be 1 MB, but the actual usage may be 3 MB or more depending on the number of replicas for example, size = 3 , clones and snapshots. POOL: The name of the pool. ID: The pool ID. STORED: The actual amount of data stored by the user in the pool. This value changes based on the raw usage data based on (k+M)/K values, number of object copies, and the number of objects degraded at the time of pool stats calculation. OBJECTS: The notional number of objects stored per pool. It is STORED size * replication factor. 
USED: The notional amount of data stored in kilobytes, unless the number appends M for megabytes or G for gigabytes. %USED: The notional percentage of storage used per pool. MAX AVAIL: An estimate of the notional amount of data that can be written to this pool. It is the amount of data that can be used before the first OSD becomes full. It considers the projected distribution of data across disks from the CRUSH map and uses the first OSD to fill up as the target. In the above example, MAX AVAIL is 153.85 MB without considering the replication factor, which is three by default. See the Red Hat Knowledgebase article titled ceph df MAX AVAIL is incorrect for simple replicated pool to calculate the value of MAX AVAIL . QUOTA OBJECTS: The number of quota objects. QUOTA BYTES: The number of bytes in the quota objects. USED COMPR: The amount of space allocated for compressed data including his includes compressed data, allocation, replication and erasure coding overhead. UNDER COMPR: The amount of data passed through compression and beneficial enough to be stored in a compressed form. Note The numbers in the POOLS section are notional. They are not inclusive of the number of replicas, snapshots or clones. As a result, the sum of the USED and %USED amounts will not add up to the RAW USED and %RAW USED amounts in the GLOBAL section of the output. Note The MAX AVAIL value is a complicated function of the replication or erasure code used, the CRUSH rule that maps storage to devices, the utilization of those devices, and the configured mon_osd_full_ratio . Additional Resources See How Ceph calculates data usage for details. See Understanding the OSD usage stats for details. 3.2.7. Understanding the OSD usage stats Use the ceph osd df command to view OSD utilization stats. Example ID: The name of the OSD. CLASS: The type of devices the OSD uses. WEIGHT: The weight of the OSD in the CRUSH map. REWEIGHT: The default reweight value. SIZE: The overall storage capacity of the OSD. USE: The OSD capacity. DATA: The amount of OSD capacity that is used by user data. OMAP: An estimate value of the bluefs storage that is being used to store object map ( omap ) data (key value pairs stored in rocksdb ). META: The bluefs space allocated, or the value set in the bluestore_bluefs_min parameter, whichever is larger, for internal metadata which is calculated as the total space allocated in bluefs minus the estimated omap data size. AVAIL: The amount of free space available on the OSD. %USE: The notional percentage of storage used by the OSD VAR: The variation above or below average utilization. PGS: The number of placement groups in the OSD. MIN/MAX VAR: The minimum and maximum variation across all OSDs. Additional Resources See How Ceph calculates data usage for details. See Understanding the OSD usage stats for details. See CRUSH Weights in Red Hat Ceph Storage Storage Strategies Guide for details. 3.2.8. Checking the storage cluster status You can check the status of the Red Hat Ceph Storage cluster from the command-line interface. The status sub command or the -s argument will display the current status of the storage cluster. Prerequisites A running Red Hat Ceph Storage cluster. Root-level access to the node. Procedure Log into the Cephadm shell: Example To check a storage cluster's status, execute the following: Example Or Example In interactive mode, type ceph and press Enter : Example 3.2.9. 
Checking the Ceph Monitor status If the storage cluster has multiple Ceph Monitors, which is a requirement for a production Red Hat Ceph Storage cluster, then you can check the Ceph Monitor quorum status after starting the storage cluster, and before doing any reading or writing of data. A quorum must be present when multiple Ceph Monitors are running. Check the Ceph Monitor status periodically to ensure that they are running. If there is a problem with the Ceph Monitor, that prevents an agreement on the state of the storage cluster, the fault can prevent Ceph clients from reading and writing data. Prerequisites A running Red Hat Ceph Storage cluster. Root-level access to the node. Procedure Log into the Cephadm shell: Example To display the Ceph Monitor map, execute the following: Example or Example To check the quorum status for the storage cluster, execute the following: Ceph returns the quorum status. Example 3.2.10. Using the Ceph administration socket Use the administration socket to interact with a given daemon directly by using a UNIX socket file. For example, the socket enables you to: List the Ceph configuration at runtime Set configuration values at runtime directly without relying on Monitors. This is useful when Monitors are down . Dump historic operations Dump the operation priority queue state Dump operations without rebooting Dump performance counters In addition, using the socket is helpful when troubleshooting problems related to Ceph Monitors or OSDs. Regardless, if the daemon is not running, a following error is returned when attempting to use the administration socket: Important The administration socket is only available while a daemon is running. When you shut down the daemon properly, the administration socket is removed. However, if the daemon terminates unexpectedly, the administration socket might persist. Prerequisites A running Red Hat Ceph Storage cluster. Root-level access to the node. Procedure Log into the Cephadm shell: Example To use the socket: Syntax Replace: MONITOR_ID of the daemon COMMAND with the command to run. Use help to list the available commands for a given daemon. To view the status of a Ceph Monitor: Example Example Alternatively, specify the Ceph daemon by using its socket file: Syntax To view the status of an Ceph OSD named osd.2 : Example To list all socket files for the Ceph processes: Example Additional Resources See the Red Hat Ceph Storage Troubleshooting Guide for more information. 3.2.11. Understanding the Ceph OSD status A Ceph OSD's status is either in the storage cluster, or out of the storage cluster. It is either up and running, or it is down and not running. If a Ceph OSD is up , it can be either in the storage cluster, where data can be read and written, or it is out of the storage cluster. If it was in the storage cluster and recently moved out of the storage cluster, Ceph starts migrating placement groups to other Ceph OSDs. If a Ceph OSD is out of the storage cluster, CRUSH will not assign placement groups to the Ceph OSD. If a Ceph OSD is down , it should also be out . Note If a Ceph OSD is down and in , there is a problem, and the storage cluster will not be in a healthy state. If you execute a command such as ceph health , ceph -s or ceph -w , you might notice that the storage cluster does not always echo back HEALTH OK . Do not panic. 
With respect to Ceph OSDs, you can expect that the storage cluster will NOT echo HEALTH OK in a few expected circumstances: You have not started the storage cluster yet, and it is not responding. You have just started or restarted the storage cluster, and it is not ready yet, because the placement groups are getting created and the Ceph OSDs are in the process of peering. You just added or removed a Ceph OSD. You just modified the storage cluster map. An important aspect of monitoring Ceph OSDs is to ensure that when the storage cluster is up and running that all Ceph OSDs that are in the storage cluster are up and running, too. To see if all OSDs are running, execute: Example or Example The result should tell you the map epoch, eNNNN , the total number of OSDs, x , how many, y , are up , and how many, z , are in : If the number of Ceph OSDs that are in the storage cluster are more than the number of Ceph OSDs that are up . Execute the following command to identify the ceph-osd daemons that are not running: Example Tip The ability to search through a well-designed CRUSH hierarchy can help you troubleshoot the storage cluster by identifying the physical locations faster. If a Ceph OSD is down , connect to the node and start it. You can use Red Hat Storage Console to restart the Ceph OSD daemon, or you can use the command line. Syntax Example Additional Resources See the Red Hat Ceph Storage Dashboard Guide for more details. 3.3. Low-level monitoring of a Ceph storage cluster As a storage administrator, you can monitor the health of a Red Hat Ceph Storage cluster from a low-level perspective. Low-level monitoring typically involves ensuring that Ceph OSDs are peering properly. When peering faults occur, placement groups operate in a degraded state. This degraded state can be the result of many different things, such as hardware failure, a hung or crashed Ceph daemon, network latency, or a complete site outage. 3.3.1. Prerequisites A running Red Hat Ceph Storage cluster. 3.3.2. Monitoring Placement Group Sets When CRUSH assigns placement groups to Ceph OSDs, it looks at the number of replicas for the pool and assigns the placement group to Ceph OSDs such that each replica of the placement group gets assigned to a different Ceph OSD. For example, if the pool requires three replicas of a placement group, CRUSH may assign them to osd.1 , osd.2 and osd.3 respectively. CRUSH actually seeks a pseudo-random placement that will take into account failure domains you set in the CRUSH map, so you will rarely see placement groups assigned to nearest neighbor Ceph OSDs in a large cluster. We refer to the set of Ceph OSDs that should contain the replicas of a particular placement group as the Acting Set . In some cases, an OSD in the Acting Set is down or otherwise not able to service requests for objects in the placement group. When these situations arise, do not panic. Common examples include: You added or removed an OSD. Then, CRUSH reassigned the placement group to other Ceph OSDs, thereby changing the composition of the acting set and spawning the migration of data with a "backfill" process. A Ceph OSD was down , was restarted and is now recovering . A Ceph OSD in the acting set is down or unable to service requests, and another Ceph OSD has temporarily assumed its duties. Ceph processes a client request using the Up Set , which is the set of Ceph OSDs that actually handle the requests. In most cases, the up set and the Acting Set are virtually identical. 
When they are not, it can indicate that Ceph is migrating data, an Ceph OSD is recovering, or that there is a problem, that is, Ceph usually echoes a HEALTH WARN state with a "stuck stale" message in such scenarios. Prerequisites A running Red Hat Ceph Storage cluster. Root-level access to the node. Procedure Log into the Cephadm shell: Example To retrieve a list of placement groups: Example View which Ceph OSDs are in the Acting Set or in the Up Set for a given placement group: Syntax Example Note If the Up Set and Acting Set do not match, this may be an indicator that the storage cluster rebalancing itself or of a potential problem with the storage cluster. 3.3.3. Ceph OSD peering Before you can write data to a placement group, it must be in an active state, and it should be in a clean state. For Ceph to determine the current state of a placement group, the primary OSD of the placement group that is, the first OSD in the acting set, peers with the secondary and tertiary OSDs to establish agreement on the current state of the placement group. Assuming a pool with three replicas of the PG. Figure 3.1. Peering 3.3.4. Placement Group States If you execute a command such as ceph health , ceph -s or ceph -w , you may notice that the cluster does not always echo back HEALTH OK . After you check to see if the OSDs are running, you should also check placement group states. You should expect that the cluster will NOT echo HEALTH OK in a number of placement group peering-related circumstances: You have just created a pool and placement groups haven't peered yet. The placement groups are recovering. You have just added an OSD to or removed an OSD from the cluster. You have just modified the CRUSH map and the placement groups are migrating. There is inconsistent data in different replicas of a placement group. Ceph is scrubbing a placement group's replicas. Ceph doesn't have enough storage capacity to complete backfilling operations. If one of the foregoing circumstances causes Ceph to echo HEALTH WARN , don't panic. In many cases, the cluster will recover on its own. In some cases, you may need to take action. An important aspect of monitoring placement groups is to ensure that when the cluster is up and running that all placement groups are active , and preferably in the clean state. To see the status of all placement groups, execute: Example The result should tell you the placement group map version, vNNNNNN , the total number of placement groups, x , and how many placement groups, y , are in a particular state such as active+clean : Note It is common for Ceph to report multiple states for placement groups. Snapshot Trimming PG States When snapshots exist, two additional PG states will be reported. snaptrim : The PGs are currently being trimmed snaptrim_wait : The PGs are waiting to be trimmed Example Output: In addition to the placement group states, Ceph will also echo back the amount of data used, aa , the amount of storage capacity remaining, bb , and the total storage capacity for the placement group. These numbers can be important in a few cases: You are reaching the near full ratio or full ratio . Your data isn't getting distributed across the cluster due to an error in the CRUSH configuration. Placement Group IDs Placement group IDs consist of the pool number, and not the pool name, followed by a period (.) and the placement group ID- a hexadecimal number. You can view pool numbers and their names from the output of ceph osd lspools . 
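A hedged sketch of the commands referenced in this procedure is shown below; the placement group ID is illustrative only.

# List all placement groups, including their up and acting sets.
ceph pg dump
# Show which OSDs are in the up set and acting set for a given placement group.
ceph pg map 1.6c
# List pool numbers and pool names.
ceph osd lspools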
The default pool names data , metadata and rbd correspond to pool numbers 0 , 1 and 2 respectively. A fully qualified placement group ID has the following form: Syntax Example output: To retrieve a list of placement groups: Example To format the output in JSON format and save it to a file: Syntax Example Query a particular placement group: Syntax Example Additional Resources See the chapter Object Storage Daemon (OSD) configuration options in the OSD Object storage daemon configuratopn options section in Red Hat Ceph Storage Configuration Guide for more details on the snapshot trimming settings. 3.3.5. Placement Group creating state When you create a pool, it will create the number of placement groups you specified. Ceph will echo creating when it is creating one or more placement groups. Once they are created, the OSDs that are part of a placement group's Acting Set will peer. Once peering is complete, the placement group status should be active+clean , which means a Ceph client can begin writing to the placement group. 3.3.6. Placement group peering state When Ceph is Peering a placement group, Ceph is bringing the OSDs that store the replicas of the placement group into agreement about the state of the objects and metadata in the placement group. When Ceph completes peering, this means that the OSDs that store the placement group agree about the current state of the placement group. However, completion of the peering process does NOT mean that each replica has the latest contents. Authoritative History Ceph will NOT acknowledge a write operation to a client, until all OSDs of the acting set persist the write operation. This practice ensures that at least one member of the acting set will have a record of every acknowledged write operation since the last successful peering operation. With an accurate record of each acknowledged write operation, Ceph can construct and disseminate a new authoritative history of the placement group. A complete, and fully ordered set of operations that, if performed, would bring an OSD's copy of a placement group up to date. 3.3.7. Placement group active state Once Ceph completes the peering process, a placement group may become active . The active state means that the data in the placement group is generally available in the primary placement group and the replicas for read and write operations. 3.3.8. Placement Group clean state When a placement group is in the clean state, the primary OSD and the replica OSDs have successfully peered and there are no stray replicas for the placement group. Ceph replicated all objects in the placement group the correct number of times. 3.3.9. Placement Group degraded state When a client writes an object to the primary OSD, the primary OSD is responsible for writing the replicas to the replica OSDs. After the primary OSD writes the object to storage, the placement group will remain in a degraded state until the primary OSD has received an acknowledgement from the replica OSDs that Ceph created the replica objects successfully. The reason a placement group can be active+degraded is that an OSD may be active even though it doesn't hold all of the objects yet. If an OSD goes down , Ceph marks each placement group assigned to the OSD as degraded . The Ceph OSDs must peer again when the Ceph OSD comes back online. However, a client can still write a new object to a degraded placement group if it is active . 
If an OSD is down and the degraded condition persists, Ceph may mark the down OSD as out of the cluster and remap the data from the down OSD to another OSD. The time between being marked down and being marked out is controlled by mon_osd_down_out_interval , which is set to 600 seconds by default. A placement group can also be degraded , because Ceph cannot find one or more objects that Ceph thinks should be in the placement group. While you cannot read or write to unfound objects, you can still access all of the other objects in the degraded placement group. For example, if there are nine OSDs in a three way replica pool. If OSD number 9 goes down, the PGs assigned to OSD 9 goes into a degraded state. If OSD 9 does not recover, it goes out of the storage cluster and the storage cluster rebalances. In that scenario, the PGs are degraded and then recover to an active state. 3.3.10. Placement Group recovering state Ceph was designed for fault-tolerance at a scale where hardware and software problems are ongoing. When an OSD goes down , its contents may fall behind the current state of other replicas in the placement groups. When the OSD is back up , the contents of the placement groups must be updated to reflect the current state. During that time period, the OSD may reflect a recovering state. Recovery is not always trivial, because a hardware failure might cause a cascading failure of multiple Ceph OSDs. For example, a network switch for a rack or cabinet may fail, which can cause the OSDs of a number of host machines to fall behind the current state of the storage cluster. Each one of the OSDs must recover once the fault is resolved. Ceph provides a number of settings to balance the resource contention between new service requests and the need to recover data objects and restore the placement groups to the current state. The osd recovery delay start setting allows an OSD to restart, re-peer and even process some replay requests before starting the recovery process. The osd recovery threads setting limits the number of threads for the recovery process, by default one thread. The osd recovery thread timeout sets a thread timeout, because multiple Ceph OSDs can fail, restart and re-peer at staggered rates. The osd recovery max active setting limits the number of recovery requests a Ceph OSD works on simultaneously to prevent the Ceph OSD from failing to serve . The osd recovery max chunk setting limits the size of the recovered data chunks to prevent network congestion. 3.3.11. Back fill state When a new Ceph OSD joins the storage cluster, CRUSH will reassign placement groups from OSDs in the cluster to the newly added Ceph OSD. Forcing the new OSD to accept the reassigned placement groups immediately can put excessive load on the new Ceph OSD. Backfilling the OSD with the placement groups allows this process to begin in the background. Once backfilling is complete, the new OSD will begin serving requests when it is ready. During the backfill operations, you might see one of several states: backfill_wait indicates that a backfill operation is pending, but isn't underway yet backfill indicates that a backfill operation is underway backfill_too_full indicates that a backfill operation was requested, but couldn't be completed due to insufficient storage capacity. When a placement group cannot be backfilled, it can be considered incomplete . Ceph provides a number of settings to manage the load spike associated with reassigning placement groups to a Ceph OSD, especially a new Ceph OSD. 
By default, osd_max_backfills sets the maximum number of concurrent backfills to or from a Ceph OSD to 10. The osd backfill full ratio enables a Ceph OSD to refuse a backfill request if the OSD is approaching its full ratio, by default 85%. If an OSD refuses a backfill request, the osd backfill retry interval enables an OSD to retry the request, by default after 10 seconds. OSDs can also set osd backfill scan min and osd backfill scan max to manage scan intervals, by default 64 and 512. For some workloads, it is beneficial to avoid regular recovery entirely and use backfill instead. Since backfilling occurs in the background, this allows I/O to proceed on the objects in the OSD. You can force a backfill rather than a recovery by setting the osd_min_pg_log_entries option to 1 , and setting the osd_max_pg_log_entries option to 2 . Contact your Red Hat Support account team for details on when this situation is appropriate for your workload. 3.3.12. Placement Group remapped state When the Acting Set that services a placement group changes, the data migrates from the old acting set to the new acting set. It may take some time for a new primary OSD to service requests, so it may ask the old primary to continue to service requests until the placement group migration is complete. Once data migration completes, the mapping uses the primary OSD of the new acting set. 3.3.13. Placement Group stale state While Ceph uses heartbeats to ensure that hosts and daemons are running, the ceph-osd daemons may also get into a stuck state where they aren't reporting statistics in a timely manner, for example, due to a temporary network fault. By default, OSD daemons report their placement group, up thru, boot and failure statistics every half second, that is, 0.5 , which is more frequent than the heartbeat thresholds. If the primary OSD of a placement group's acting set fails to report to the monitor or if other OSDs have reported the primary OSD down , the monitors will mark the placement group stale . When you start the storage cluster, it is common to see the stale state until the peering process completes. After the storage cluster has been running for a while, seeing placement groups in the stale state indicates that the primary OSD for those placement groups is down or not reporting placement group statistics to the monitor. 3.3.14. Placement Group misplaced state There are some temporary backfilling scenarios where a PG gets mapped temporarily to an OSD. When that temporary situation should no longer be the case, the PGs might still reside in the temporary location and not in the proper location, in which case they are said to be misplaced . That is because the correct number of extra copies actually exists, but one or more copies are in the wrong place. For example, there are 3 OSDs: 0,1,2 and all PGs map to some permutation of those three. If you add another OSD (OSD 3), some PGs will now map to OSD 3 instead of one of the others. However, until OSD 3 is backfilled, the PG will have a temporary mapping allowing it to continue to serve I/O from the old mapping. During that time, the PG is misplaced , because it has a temporary mapping, but not degraded , since there are 3 copies. Example [0,1,2] is a temporary mapping, so the up set is not equal to the acting set and the PG is misplaced but not degraded since [0,1,2] is still three copies. Example OSD 3 is now backfilled and the temporary mapping is removed, not degraded and not misplaced. 3.3.15.
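Continuing the same hedged sketch, the backfill-related options can be set the same way; forcing backfill over recovery with the PG log settings should only be done after consulting Red Hat Support, as noted above, and the values shown are illustrative assumptions:

# Reduce the number of concurrent backfills per OSD
ceph config set osd osd_max_backfills 1

# Force backfill rather than log-based recovery (illustrative only)
ceph config set osd osd_min_pg_log_entries 1
ceph config set osd osd_max_pg_log_entries 2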
Placement Group incomplete state A PG goes into an incomplete state when there is incomplete content and peering fails, that is, when there are no complete OSDs which are current enough to perform recovery. Let's say OSD 1, 2, and 3 are the acting OSD set and it switches to OSD 1, 4, and 3, then osd.1 will request a temporary acting set of OSD 1, 2, and 3 while backfilling 4. During this time, if OSD 1, 2, and 3 all go down, osd.4 will be the only OSD left, and it might not have fully backfilled all the data. At this time, the PG will go incomplete indicating that there are no complete OSDs which are current enough to perform recovery. Alternately, if osd.4 is not involved and the acting set is simply OSD 1, 2, and 3 when OSD 1, 2, and 3 go down, the PG would likely go stale indicating that the monitors have not heard anything on that PG since the acting set changed, because there are no OSDs left to notify the new OSDs. 3.3.16. Identifying stuck Placement Groups A placement group is not necessarily problematic just because it is not in an active+clean state. Generally, Ceph's ability to self-repair might not be working when placement groups get stuck. The stuck states include: Unclean : Placement groups contain objects that are not replicated the desired number of times. They should be recovering. Inactive : Placement groups cannot process reads or writes because they are waiting for an OSD with the most up-to-date data to come back up . Stale : Placement groups are in an unknown state, because the OSDs that host them have not reported to the monitor cluster in a while. This interval can be configured with the mon osd report timeout setting. Prerequisites A running Red Hat Ceph Storage cluster. Root-level access to the node. Procedure To identify stuck placement groups, execute the following: Syntax Example 3.3.17. Finding an object's location The Ceph client retrieves the latest cluster map and the CRUSH algorithm calculates how to map the object to a placement group, and then calculates how to assign the placement group to an OSD dynamically. Prerequisites A running Red Hat Ceph Storage cluster. Root-level access to the node. Procedure To find the object location, all you need is the object name and the pool name: Syntax Example
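Tying the two procedures above together, a brief example; the pool name mypool and object name myobject are placeholders:

# Identify placement groups stuck in the stale state
ceph pg dump_stuck stale

# Find which placement group and OSDs serve a given object
ceph osd map mypool myobject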
[ "exec -it ceph-mon- MONITOR_NAME /bin/bash", "podman exec -it ceph-499829b4-832f-11eb-8d6d-001a4a000635-mon.host01 /bin/bash", "root@host01 ~]# cephadm shell", "ceph health HEALTH_OK", "ceph status", "root@host01 ~]# cephadm shell", "ceph -w cluster: id: 8c9b0072-67ca-11eb-af06-001a4a0002a0 health: HEALTH_OK services: mon: 2 daemons, quorum Ceph5-2,Ceph5-adm (age 3d) mgr: Ceph5-1.nqikfh(active, since 3w), standbys: Ceph5-adm.meckej osd: 5 osds: 5 up (since 2d), 5 in (since 8w) rgw: 2 daemons active (test_realm.test_zone.Ceph5-2.bfdwcn, test_realm.test_zone.Ceph5-adm.acndrh) data: pools: 11 pools, 273 pgs objects: 459 objects, 32 KiB usage: 2.6 GiB used, 72 GiB / 75 GiB avail pgs: 273 active+clean io: client: 170 B/s rd, 730 KiB/s wr, 0 op/s rd, 729 op/s wr 2021-06-02 15:45:21.655871 osd.0 [INF] 17.71 deep-scrub ok 2021-06-02 15:45:47.880608 osd.1 [INF] 1.0 scrub ok 2021-06-02 15:45:48.865375 osd.1 [INF] 1.3 scrub ok 2021-06-02 15:45:50.866479 osd.1 [INF] 1.4 scrub ok 2021-06-02 15:45:01.345821 mon.0 [INF] pgmap v41339: 952 pgs: 952 active+clean; 17130 MB data, 115 GB used, 167 GB / 297 GB avail 2021-06-02 15:45:05.718640 mon.0 [INF] pgmap v41340: 952 pgs: 1 active+clean+scrubbing+deep, 951 active+clean; 17130 MB data, 115 GB used, 167 GB / 297 GB avail 2021-06-02 15:45:53.997726 osd.1 [INF] 1.5 scrub ok 2021-06-02 15:45:06.734270 mon.0 [INF] pgmap v41341: 952 pgs: 1 active+clean+scrubbing+deep, 951 active+clean; 17130 MB data, 115 GB used, 167 GB / 297 GB avail 2021-06-02 15:45:15.722456 mon.0 [INF] pgmap v41342: 952 pgs: 952 active+clean; 17130 MB data, 115 GB used, 167 GB / 297 GB avail 2021-06-02 15:46:06.836430 osd.0 [INF] 17.75 deep-scrub ok 2021-06-02 15:45:55.720929 mon.0 [INF] pgmap v41343: 952 pgs: 1 active+clean+scrubbing+deep, 951 active+clean; 17130 MB data, 115 GB used, 167 GB / 297 GB avail", "ceph df --- RAW STORAGE --- CLASS SIZE AVAIL USED RAW USED %RAW USED hdd 5 TiB 2.9 TiB 2.1 TiB 2.1 TiB 42.98 TOTAL 5 TiB 2.9 TiB 2.1 TiB 2.1 TiB 42.98 --- POOLS --- POOL ID PGS STORED OBJECTS USED %USED MAX AVAIL .mgr 1 1 5.3 MiB 3 16 MiB 0 629 GiB .rgw.root 2 32 1.3 KiB 4 48 KiB 0 629 GiB default.rgw.log 3 32 3.6 KiB 209 408 KiB 0 629 GiB default.rgw.control 4 32 0 B 8 0 B 0 629 GiB default.rgw.meta 5 32 1.7 KiB 10 96 KiB 0 629 GiB default.rgw.buckets.index 7 32 5.5 MiB 22 17 MiB 0 629 GiB default.rgw.buckets.data 8 32 807 KiB 3 2.4 MiB 0 629 GiB default.rgw.buckets.non-ec 9 32 1.0 MiB 1 3.1 MiB 0 629 GiB source-ecpool-86 11 32 1.2 TiB 391.13k 2.1 TiB 53.49 1.1 TiB", "ceph osd df ID CLASS WEIGHT REWEIGHT SIZE USE DATA OMAP META AVAIL %USE VAR PGS 3 hdd 0.90959 1.00000 931GiB 70.1GiB 69.1GiB 0B 1GiB 861GiB 7.53 2.93 66 4 hdd 0.90959 1.00000 931GiB 1.30GiB 308MiB 0B 1GiB 930GiB 0.14 0.05 59 0 hdd 0.90959 1.00000 931GiB 18.1GiB 17.1GiB 0B 1GiB 913GiB 1.94 0.76 57 MIN/MAX VAR: 0.02/2.98 STDDEV: 2.91", "cephadm shell", "ceph status", "ceph -s", "ceph ceph> status cluster: id: 499829b4-832f-11eb-8d6d-001a4a000635 health: HEALTH_WARN 1 stray daemon(s) not managed by cephadm 1/3 mons down, quorum host03,host02 too many PGs per OSD (261 > max 250) services: mon: 3 daemons, quorum host03,host02 (age 3d), out of quorum: host01 mgr: host01.hdhzwn(active, since 9d), standbys: host05.eobuuv, host06.wquwpj osd: 12 osds: 11 up (since 2w), 11 in (since 5w) rgw: 2 daemons active (test_realm.test_zone.host04.hgbvnq, test_realm.test_zone.host05.yqqilm) rgw-nfs: 1 daemon active (nfs.foo.host06-rgw) data: pools: 8 pools, 960 pgs objects: 414 objects, 1.0 MiB usage: 5.7 GiB used, 214 GiB / 220 GiB avail 
pgs: 960 active+clean io: client: 41 KiB/s rd, 0 B/s wr, 41 op/s rd, 27 op/s wr ceph> health HEALTH_WARN 1 stray daemon(s) not managed by cephadm; 1/3 mons down, quorum host03,host02; too many PGs per OSD (261 > max 250) ceph> mon stat e3: 3 mons at {host01=[v2:10.74.255.0:3300/0,v1:10.74.255.0:6789/0],host02=[v2:10.74.249.253:3300/0,v1:10.74.249.253:6789/0],host03=[v2:10.74.251.164:3300/0,v1:10.74.251.164:6789/0]}, election epoch 6688, leader 1 host03, quorum 1,2 host03,host02", "cephadm shell", "ceph mon stat", "ceph mon dump", "ceph quorum_status -f json-pretty", "{ \"election_epoch\": 6686, \"quorum\": [ 0, 1, 2 ], \"quorum_names\": [ \"host01\", \"host03\", \"host02\" ], \"quorum_leader_name\": \"host01\", \"quorum_age\": 424884, \"features\": { \"quorum_con\": \"4540138297136906239\", \"quorum_mon\": [ \"kraken\", \"luminous\", \"mimic\", \"osdmap-prune\", \"nautilus\", \"octopus\", \"pacific\", \"elector-pinging\" ] }, \"monmap\": { \"epoch\": 3, \"fsid\": \"499829b4-832f-11eb-8d6d-001a4a000635\", \"modified\": \"2021-03-15T04:51:38.621737Z\", \"created\": \"2021-03-12T12:35:16.911339Z\", \"min_mon_release\": 16, \"min_mon_release_name\": \"pacific\", \"election_strategy\": 1, \"disallowed_leaders: \": \"\", \"stretch_mode\": false, \"features\": { \"persistent\": [ \"kraken\", \"luminous\", \"mimic\", \"osdmap-prune\", \"nautilus\", \"octopus\", \"pacific\", \"elector-pinging\" ], \"optional\": [] }, \"mons\": [ { \"rank\": 0, \"name\": \"host01\", \"public_addrs\": { \"addrvec\": [ { \"type\": \"v2\", \"addr\": \"10.74.255.0:3300\", \"nonce\": 0 }, { \"type\": \"v1\", \"addr\": \"10.74.255.0:6789\", \"nonce\": 0 } ] }, \"addr\": \"10.74.255.0:6789/0\", \"public_addr\": \"10.74.255.0:6789/0\", \"priority\": 0, \"weight\": 0, \"crush_location\": \"{}\" }, { \"rank\": 1, \"name\": \"host03\", \"public_addrs\": { \"addrvec\": [ { \"type\": \"v2\", \"addr\": \"10.74.251.164:3300\", \"nonce\": 0 }, { \"type\": \"v1\", \"addr\": \"10.74.251.164:6789\", \"nonce\": 0 } ] }, \"addr\": \"10.74.251.164:6789/0\", \"public_addr\": \"10.74.251.164:6789/0\", \"priority\": 0, \"weight\": 0, \"crush_location\": \"{}\" }, { \"rank\": 2, \"name\": \"host02\", \"public_addrs\": { \"addrvec\": [ { \"type\": \"v2\", \"addr\": \"10.74.249.253:3300\", \"nonce\": 0 }, { \"type\": \"v1\", \"addr\": \"10.74.249.253:6789\", \"nonce\": 0 } ] }, \"addr\": \"10.74.249.253:6789/0\", \"public_addr\": \"10.74.249.253:6789/0\", \"priority\": 0, \"weight\": 0, \"crush_location\": \"{}\" } ] } }", "Error 111: Connection Refused", "cephadm shell", "ceph daemon MONITOR_ID COMMAND", "ceph daemon mon.host01 help { \"add_bootstrap_peer_hint\": \"add peer address as potential bootstrap peer for cluster bringup\", \"add_bootstrap_peer_hintv\": \"add peer address vector as potential bootstrap peer for cluster bringup\", \"compact\": \"cause compaction of monitor's leveldb/rocksdb storage\", \"config diff\": \"dump diff of current config and default config\", \"config diff get\": \"dump diff get <field>: dump diff of current and default config setting <field>\", \"config get\": \"config get <field>: get the config value\", \"config help\": \"get config setting schema and descriptions\", \"config set\": \"config set <field> <val> [<val> ...]: set a config variable\", \"config show\": \"dump current config settings\", \"config unset\": \"config unset <field>: unset a config variable\", \"connection scores dump\": \"show the scores used in connectivity-based elections\", \"connection scores reset\": \"reset the scores used in 
connectivity-based elections\", \"dump_historic_ops\": \"dump_historic_ops\", \"dump_mempools\": \"get mempool stats\", \"get_command_descriptions\": \"list available commands\", \"git_version\": \"get git sha1\", \"heap\": \"show heap usage info (available only if compiled with tcmalloc)\", \"help\": \"list available commands\", \"injectargs\": \"inject configuration arguments into running daemon\", \"log dump\": \"dump recent log entries to log file\", \"log flush\": \"flush log entries to log file\", \"log reopen\": \"reopen log file\", \"mon_status\": \"report status of monitors\", \"ops\": \"show the ops currently in flight\", \"perf dump\": \"dump perfcounters value\", \"perf histogram dump\": \"dump perf histogram values\", \"perf histogram schema\": \"dump perf histogram schema\", \"perf reset\": \"perf reset <name>: perf reset all or one perfcounter name\", \"perf schema\": \"dump perfcounters schema\", \"quorum enter\": \"force monitor back into quorum\", \"quorum exit\": \"force monitor out of the quorum\", \"sessions\": \"list existing sessions\", \"smart\": \"Query health metrics for underlying device\", \"sync_force\": \"force sync of and clear monitor store\", \"version\": \"get ceph version\" }", "ceph daemon mon.host01 mon_status { \"name\": \"host01\", \"rank\": 0, \"state\": \"leader\", \"election_epoch\": 120, \"quorum\": [ 0, 1, 2 ], \"quorum_age\": 206358, \"features\": { \"required_con\": \"2449958747317026820\", \"required_mon\": [ \"kraken\", \"luminous\", \"mimic\", \"osdmap-prune\", \"nautilus\", \"octopus\", \"pacific\", \"elector-pinging\" ], \"quorum_con\": \"4540138297136906239\", \"quorum_mon\": [ \"kraken\", \"luminous\", \"mimic\", \"osdmap-prune\", \"nautilus\", \"octopus\", \"pacific\", \"elector-pinging\" ] }, \"outside_quorum\": [], \"extra_probe_peers\": [], \"sync_provider\": [], \"monmap\": { \"epoch\": 3, \"fsid\": \"81a4597a-b711-11eb-8cb8-001a4a000740\", \"modified\": \"2021-05-18T05:50:17.782128Z\", \"created\": \"2021-05-17T13:13:13.383313Z\", \"min_mon_release\": 16, \"min_mon_release_name\": \"pacific\", \"election_strategy\": 1, \"disallowed_leaders: \": \"\", \"stretch_mode\": false, \"features\": { \"persistent\": [ \"kraken\", \"luminous\", \"mimic\", \"osdmap-prune\", \"nautilus\", \"octopus\", \"pacific\", \"elector-pinging\" ], \"optional\": [] }, \"mons\": [ { \"rank\": 0, \"name\": \"host01\", \"public_addrs\": { \"addrvec\": [ { \"type\": \"v2\", \"addr\": \"10.74.249.41:3300\", \"nonce\": 0 }, { \"type\": \"v1\", \"addr\": \"10.74.249.41:6789\", \"nonce\": 0 } ] }, \"addr\": \"10.74.249.41:6789/0\", \"public_addr\": \"10.74.249.41:6789/0\", \"priority\": 0, \"weight\": 0, \"crush_location\": \"{}\" }, { \"rank\": 1, \"name\": \"host02\", \"public_addrs\": { \"addrvec\": [ { \"type\": \"v2\", \"addr\": \"10.74.249.55:3300\", \"nonce\": 0 }, { \"type\": \"v1\", \"addr\": \"10.74.249.55:6789\", \"nonce\": 0 } ] }, \"addr\": \"10.74.249.55:6789/0\", \"public_addr\": \"10.74.249.55:6789/0\", \"priority\": 0, \"weight\": 0, \"crush_location\": \"{}\" }, { \"rank\": 2, \"name\": \"host03\", \"public_addrs\": { \"addrvec\": [ { \"type\": \"v2\", \"addr\": \"10.74.249.49:3300\", \"nonce\": 0 }, { \"type\": \"v1\", \"addr\": \"10.74.249.49:6789\", \"nonce\": 0 } ] }, \"addr\": \"10.74.249.49:6789/0\", \"public_addr\": \"10.74.249.49:6789/0\", \"priority\": 0, \"weight\": 0, \"crush_location\": \"{}\" } ] }, \"feature_map\": { \"mon\": [ { \"features\": \"0x3f01cfb9fffdffff\", \"release\": \"luminous\", \"num\": 1 } ], \"osd\": [ { 
\"features\": \"0x3f01cfb9fffdffff\", \"release\": \"luminous\", \"num\": 3 } ] }, \"stretch_mode\": false }", "ceph daemon /var/run/ceph/ SOCKET_FILE COMMAND", "ceph daemon /var/run/ceph/ceph-osd.2.asok status", "ls /var/run/ceph", "ceph osd stat", "ceph osd dump", "eNNNN: x osds: y up, z in", "ceph osd tree id weight type name up/down reweight -1 3 pool default -3 3 rack mainrack -2 3 host osd-host 0 1 osd.0 up 1 1 1 osd.1 up 1 2 1 osd.2 up 1", "systemctl start CEPH_OSD_SERVICE_ID", "systemctl start [email protected]", "cephadm shell", "ceph pg dump", "ceph pg map PG_NUM", "ceph pg map 128", "ceph pg stat", "vNNNNNN: x pgs: y active+clean; z bytes data, aa MB used, bb GB / cc GB avail", "244 active+clean+snaptrim_wait 32 active+clean+snaptrim", "POOL_NUM . PG_ID", "0.1f", "ceph pg dump", "ceph pg dump -o FILE_NAME --format=json", "ceph pg dump -o test --format=json", "ceph pg POOL_NUM . PG_ID query", "ceph pg 5.fe query { \"snap_trimq\": \"[]\", \"snap_trimq_len\": 0, \"state\": \"active+clean\", \"epoch\": 2449, \"up\": [ 3, 8, 10 ], \"acting\": [ 3, 8, 10 ], \"acting_recovery_backfill\": [ \"3\", \"8\", \"10\" ], \"info\": { \"pgid\": \"5.ff\", \"last_update\": \"0'0\", \"last_complete\": \"0'0\", \"log_tail\": \"0'0\", \"last_user_version\": 0, \"last_backfill\": \"MAX\", \"purged_snaps\": [], \"history\": { \"epoch_created\": 114, \"epoch_pool_created\": 82, \"last_epoch_started\": 2402, \"last_interval_started\": 2401, \"last_epoch_clean\": 2402, \"last_interval_clean\": 2401, \"last_epoch_split\": 114, \"last_epoch_marked_full\": 0, \"same_up_since\": 2401, \"same_interval_since\": 2401, \"same_primary_since\": 2086, \"last_scrub\": \"0'0\", \"last_scrub_stamp\": \"2021-06-17T01:32:03.763988+0000\", \"last_deep_scrub\": \"0'0\", \"last_deep_scrub_stamp\": \"2021-06-17T01:32:03.763988+0000\", \"last_clean_scrub_stamp\": \"2021-06-17T01:32:03.763988+0000\", \"prior_readable_until_ub\": 0 }, \"stats\": { \"version\": \"0'0\", \"reported_seq\": \"2989\", \"reported_epoch\": \"2449\", \"state\": \"active+clean\", \"last_fresh\": \"2021-06-18T05:16:59.401080+0000\", \"last_change\": \"2021-06-17T01:32:03.764162+0000\", \"last_active\": \"2021-06-18T05:16:59.401080+0000\", .", "pg 1.5: up=acting: [0,1,2] ADD_OSD_3 pg 1.5: up: [0,3,1] acting: [0,1,2]", "pg 1.5: up=acting: [0,3,1]", "ceph pg dump_stuck {inactive|unclean|stale|undersized|degraded [inactive|unclean|stale|undersized|degraded...]} {<int>}", "ceph pg dump_stuck stale OK", "ceph osd map POOL_NAME OBJECT_NAME", "ceph osd map mypool myobject" ]
https://docs.redhat.com/en/documentation/red_hat_ceph_storage/5/html/administration_guide/monitoring-a-ceph-storage-cluster
Chapter 2. Support
Chapter 2. Support Only the configuration options described in this documentation are supported for logging. Do not use any other configuration options, as they are unsupported. Configuration paradigms might change across Red Hat OpenShift Service on AWS releases, and such cases can only be handled gracefully if all configuration possibilities are controlled. If you use configurations other than those described in this documentation, your changes will be overwritten, because Operators are designed to reconcile any differences. Note If you must perform configurations not described in the Red Hat OpenShift Service on AWS documentation, you must set your Red Hat OpenShift Logging Operator to Unmanaged . An unmanaged logging instance is not supported and does not receive updates until you return its status to Managed . Note Logging is provided as an installable component, with a distinct release cycle from the core Red Hat OpenShift Service on AWS. The Red Hat OpenShift Container Platform Life Cycle Policy outlines release compatibility. Logging for Red Hat OpenShift is an opinionated collector and normalizer of application, infrastructure, and audit logs. It is intended to be used for forwarding logs to various supported systems. Logging is not: A high scale log collection system Security Information and Event Monitoring (SIEM) compliant A "bring your own" (BYO) log collector configuration Historical or long term log retention or storage A guaranteed log sink Secure storage - audit logs are not stored by default 2.1. Supported API custom resource definitions The following table describes the supported Logging APIs. Table 2.1. Loki API support states CustomResourceDefinition (CRD) ApiVersion Support state LokiStack lokistack.loki.grafana.com/v1 Supported from 5.5 RulerConfig rulerconfig.loki.grafana/v1 Supported from 5.7 AlertingRule alertingrule.loki.grafana/v1 Supported from 5.7 RecordingRule recordingrule.loki.grafana/v1 Supported from 5.7 LogFileMetricExporter LogFileMetricExporter.logging.openshift.io/v1alpha1 Supported from 5.8 ClusterLogForwarder clusterlogforwarder.logging.openshift.io/v1 Supported from 4.5. 2.2. Unsupported configurations You must set the Red Hat OpenShift Logging Operator to the Unmanaged state to modify the following components: The fluent.conf file The Fluentd daemon set The vector.toml file for Vector collector deployments Explicitly unsupported cases include: Configuring the collected log location . You cannot change the location of the log collector output file, which by default is /var/log/fluentd/fluentd.log . Throttling log collection . You cannot throttle down the rate at which the logs are read in by the log collector. Configuring the logging collector using environment variables . You cannot use environment variables to modify the log collector. Configuring how the log collector normalizes logs . You cannot modify default log normalization. 2.3. Support policy for unmanaged Operators The management state of an Operator determines whether an Operator is actively managing the resources for its related component in the cluster as designed. If an Operator is set to an unmanaged state, it does not respond to changes in configuration nor does it receive updates. While this can be helpful in non-production clusters or during debugging, Operators in an unmanaged state are unsupported and the cluster administrator assumes full control of the individual component configurations and upgrades. 
An Operator can be set to an unmanaged state using the following methods: Individual Operator configuration Individual Operators have a managementState parameter in their configuration. This can be accessed in different ways, depending on the Operator. For example, the Red Hat OpenShift Logging Operator accomplishes this by modifying a custom resource (CR) that it manages, while the Cluster Samples Operator uses a cluster-wide configuration resource. Changing the managementState parameter to Unmanaged means that the Operator is not actively managing its resources and will take no action related to the related component. Some Operators might not support this management state as it might damage the cluster and require manual recovery. Warning Changing individual Operators to the Unmanaged state renders that particular component and functionality unsupported. Reported issues must be reproduced in Managed state for support to proceed. Cluster Version Operator (CVO) overrides The spec.overrides parameter can be added to the CVO's configuration to allow administrators to provide a list of overrides to the CVO's behavior for a component. Setting the spec.overrides[].unmanaged parameter to true for a component blocks cluster upgrades and alerts the administrator after a CVO override has been set: Disabling ownership via cluster version overrides prevents upgrades. Please remove overrides before continuing. Warning Setting a CVO override puts the entire cluster in an unsupported state. Reported issues must be reproduced after removing any overrides for support to proceed. 2.4. Support exception for the Logging UI Plugin Until the approaching General Availability (GA) release of the Cluster Observability Operator (COO), which is currently in Technology Preview (TP), Red Hat provides support to customers who are using Logging 6.0 or later with the COO for its Logging UI Plugin on Red Hat OpenShift Service on AWS 4.14 or later. This support exception is temporary as the COO includes several independent features, some of which are still TP features, but the Logging UI Plugin is ready for GA. 2.5. Collecting logging data for Red Hat Support When opening a support case, it is helpful to provide debugging information about your cluster to Red Hat Support. You can use the must-gather tool to collect diagnostic information for project-level resources, cluster-level resources, and each of the logging components. For prompt support, supply diagnostic information for both Red Hat OpenShift Service on AWS and logging. 2.5.1. About the must-gather tool The oc adm must-gather CLI command collects the information from your cluster that is most likely needed for debugging issues. For your logging, must-gather collects the following information: Project-level resources, including pods, configuration maps, service accounts, roles, role bindings, and events at the project level Cluster-level resources, including nodes, roles, and role bindings at the cluster level OpenShift Logging resources in the openshift-logging and openshift-operators-redhat namespaces, including health status for the log collector, the log store, and the log visualizer When you run oc adm must-gather , a new pod is created on the cluster. The data is collected on that pod and saved in a new directory that starts with must-gather.local . This directory is created in the current working directory. 2.5.2. Collecting logging data You can use the oc adm must-gather CLI command to collect information about logging. 
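As a hedged illustration of a CVO override (the component shown, the Red Hat OpenShift Logging Operator deployment, is an example choice, not a recommendation), the spec.overrides entry can be added with a merge patch:

oc patch clusterversion version --type merge -p '{"spec":{"overrides":[{"kind":"Deployment","group":"apps","namespace":"openshift-logging","name":"cluster-logging-operator","unmanaged":true}]}}'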
Procedure To collect logging information with must-gather : Navigate to the directory where you want to store the must-gather information. Run the oc adm must-gather command against the logging image: USD oc adm must-gather --image=USD(oc -n openshift-logging get deployment.apps/cluster-logging-operator -o jsonpath='{.spec.template.spec.containers[?(@.name == "cluster-logging-operator")].image}') The must-gather tool creates a new directory that starts with must-gather.local within the current directory. For example: must-gather.local.4157245944708210408 . Create a compressed file from the must-gather directory that was just created. For example, on a computer that uses a Linux operating system, run the following command: USD tar -cvaf must-gather.tar.gz must-gather.local.4157245944708210408 Attach the compressed file to your support case on the Red Hat Customer Portal .
[ "Disabling ownership via cluster version overrides prevents upgrades. Please remove overrides before continuing.", "oc adm must-gather --image=USD(oc -n openshift-logging get deployment.apps/cluster-logging-operator -o jsonpath='{.spec.template.spec.containers[?(@.name == \"cluster-logging-operator\")].image}')", "tar -cvaf must-gather.tar.gz must-gather.local.4157245944708210408" ]
https://docs.redhat.com/en/documentation/red_hat_openshift_service_on_aws/4/html/logging/support
E.3.6. /proc/irq/
E.3.6. /proc/irq/ This directory is used to set IRQ to CPU affinity, which allows the system to connect a particular IRQ to only one CPU. Alternatively, it can exclude a CPU from handling any IRQs. Each IRQ has its own directory, allowing for the individual configuration of each IRQ. The /proc/irq/prof_cpu_mask file is a bitmask that contains the default values for the smp_affinity file in the IRQ directory. The values in smp_affinity specify which CPUs handle that particular IRQ. For more information about the /proc/irq/ directory, see the following installed documentation:
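A brief, hedged illustration of working with these files; the IRQ number 53 is an arbitrary example:

# Show the CPU affinity bitmask for IRQ 53
cat /proc/irq/53/smp_affinity

# Restrict IRQ 53 to CPU 0 only
echo 1 > /proc/irq/53/smp_affinity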
[ "/usr/share/doc/kernel-doc- kernel_version /Documentation/filesystems/proc.txt" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/deployment_guide/s2-proc-dir-irq
Chapter 4. Installing the Red Hat Integration - AMQ Interconnect Operator in a restricted environment
Chapter 4. Installing the Red Hat Integration - AMQ Interconnect Operator in a restricted environment In a production environment that has no or limited internet access, installing the Red Hat Integration - AMQ Interconnect Operator as described in Chapter 3, Adding the Red Hat Integration - AMQ Interconnect Operator is not possible. This section explains how to install the Red Hat Integration - AMQ Interconnect Operator in a restricted environment by mirroring the required components to the cluster. Prerequisites An OpenShift Container Platform cluster, version 4.6, 4.7, 4.8, 4.9 or 4.10 A RHEL machine with: podman version 1.9.3 or later The opm CLI as described in the OpenShift documentation The oc CLI version 4.9.9 or later Network access to the Red Hat Container Registry and to the OpenShift Container Platform cluster Note You only need access to the Red Hat Container Registry while mirroring. You do not need simultaneous access to the Red Hat Container Registry and the OpenShift Container Platform cluster. The steps required are described in the following sections: Section 4.1, "Setting up the OpenShift Container Platform cluster" Section 4.2, "Creating the AMQ Interconnect images on a RHEL machine" Section 4.3, "Pushing images to the OpenShift Container Platform cluster" 4.1. Setting up the OpenShift Container Platform cluster Complete the following steps on the OpenShift Container Platform cluster to prepare for the mirroring process: Log in to the cluster as cluster-admin . Disable the sources for the default catalogs using either the CLI or the OpenShift console: For the CLI, set disableAllDefaultSources: true for OperatorHub: USD oc patch OperatorHub cluster --type json \ -p '[{"op": "add", "path": "/spec/disableAllDefaultSources", "value": true}]' For the OpenShift console, navigate to Administration Cluster Settings Configuration OperatorHub . On the OperatorHub page, click the Sources tab, and disable the sources. 4.2. Creating the AMQ Interconnect images on a RHEL machine Complete the following steps on the RHEL machine to prepare for the mirroring process: Prerequisites Access to registry.redhat.io Log in to registry.redhat.io from the RHEL machine. USD podman login -u USERNAME -p PASSWORD registry.redhat.io Keep only the Interconnect Operator in the list of operators: USD opm index prune -f registry.redhat.io/redhat/redhat-operator-index:v<openshift-version> -p amq7-interconnect-operator -t <cluster-domain>:<registry-port>/iib:my-operator-iib where <openshift-version> is the version of OpenShift Container Platform, for example, 4.9 . <cluster-domain> is the domain name for the OpenShift Container Platform cluster, for example, mycluster.example.com . <registry-port> is the port number used by the registry in the OpenShift Container Platform cluster, the default being 5000 . Verify that you have only created a podman image of the Interconnect Operator: USD podman images | grep my-operator-iib <cluster-domain>:<registry-port>/iib my-operator-iib 39b6148e6981 3 days ago 138 MB 4.3. Pushing images to the OpenShift Container Platform cluster Prerequisites Access from the RHEL machine to the OpenShift Container Platform cluster.
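Purely as an illustration, substituting the example values given above (OpenShift Container Platform 4.9, cluster domain mycluster.example.com, registry port 5000) into the commands from this section might look like the following; the credentials are placeholders:

podman login -u myuser -p mypassword registry.redhat.io
opm index prune -f registry.redhat.io/redhat/redhat-operator-index:v4.9 -p amq7-interconnect-operator -t mycluster.example.com:5000/iib:my-operator-iib
podman images | grep my-operator-iib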
From the RHEL machine, push the image to the cluster registry : USD podman push <cluster-domain>:<registry-port>/iib:my-operator-iib Create the three files required for the mirroring process : USD /usr/local/bin/oc adm catalog mirror \ <cluster-domain>:<registry-port>/iib:my-operator-iib \ <cluster-domain>:<registry-port> \ -a /home/customer-user/.docker/config.json \ --insecure=true \ --registry-config /home/customer-user/.docker/config.json \ --index-filter-by-os=linux/amd64 \ --manifests-only Make sure that the following files exist: catalogSource.yaml - A YAML file describing the catalogSource. imageContentSourcePolicy.yaml - A YAML file that maps the images in the internal registry with the addresses from RedHat registries. mapping.txt - A text file that drives the mirroring process of the images to the internal registry. Edit mapping.txt to list only the images you want to mirror. The file has the following format: Example of a mapping.txt file : Mirror the required images USD /usr/local/bin/oc image mirror \ -f mapping-ic.yaml \ -a /home/customer-user/.docker/config.json \ --insecure=true \ --registry-config /home/customer-user/.docker/config.json \ --keep-manifest-list=true Configure the ImageContentSourcePolicy (ICSP) name: Set the field 'name' in the file imageContentSourcePolicy.yaml, for example, my-operator-icsp Example of a ICSP snippet : --- apiVersion: operator.openshift.io/v1alpha1 kind: ImageContentSourcePolicy metadata: labels: operators.openshift.org/catalog: "true" name: my-operator-icsp spec: repositoryDigestMirrors: - mirrors: - <cluster-domain>:<registry-port>/amq7-amq-interconnect-operator source: registry.redhat.io/amq7/amq-interconnect-operator Apply the policy (ICSP) file : USD /usr/local/bin/oc create -f imageContentSourcePolicy.yaml After you apply this file, all cluster nodes are reset automatically. You can check the nodes status using oc get nodes or in the OpenShift console by navigating to Compute Nodes . Note Make sure all nodes are in Ready state before you continue. Configure the catalogSource name : Set the field name in the catalogSource.yaml file, for example, my-operator-catalog Example of a catalogSource.yaml file: apiVersion: operators.coreos.com/v1alpha1 kind: CatalogSource metadata: name: iib namespace: openshift-marketplace spec: image: <cluster-domain>:<registry-port>/iib:my-operator-iib sourceType: grpc Apply the catalog source configuration to complete the installation of the Red Hat Integration - AMQ Interconnect Operator: USD /usr/local/bin/oc apply -f catalogSource.yaml Make sure the installation is working by deploying a router as described in Section 5.1, "Creating an interior router deployment"
[ "oc patch OperatorHub cluster --type json -p '[{\"op\": \"add\", \"path\": \"/spec/disableAllDefaultSources\", \"value\": true}]'", "podman login -u USERNAME -p PASSWORD registry.redhat.io", "opm index prune -f registry.redhat.io/redhat/redhat-operator-index:v<openshift-version> -p amq7-interconnect-operator -t <cluster-domain>:<registry-port>/iib:my-operator-iib", "podman images | grep my-operator-iib <cluster-domain>:<registry-port>/iib my-operator-iib 39b6148e6981 3 days ago 138 MB", "podman push <cluster-domain>:<registry-port>/iib:my-operator-iib", "/usr/local/bin/oc adm catalog mirror <cluster-domain>:<registry-port>/iib:my-operator-iib <cluster-domain>:<registry-port> -a /home/customer-user/.docker/config.json --insecure=true --registry-config /home/customer-user/.docker/config.json --index-filter-by-os=linux/amd64 --manifests-only", "[ Operator address on RedHat registry : Operator SHA ] = [ Operator address on internal mirror registry : tag ]", "registry.redhat.io/amq7/amq-interconnect@sha256:6101cc735e4d19cd67c6d80895c425ecf6f1d2604d88f999fa0cae57a7d6abaf=<cluster-domain>:<registry-port>/amq7-amq-interconnect:f793b0cc registry.redhat.io/amq7/amq-interconnect-operator@sha256:8dd53290c909589590b88a1544d854b4ad9f8b4a639189597c0a59579bc60c40=<cluster-domain>:<registry-port>/amq7-amq-interconnect-operator:73c142ff registry.redhat.io/amq7/amq-interconnect-operator-metadata@sha256:799ce48905d5d2a91b42e2a7943ce9b756aa9da80f6924be06b2a6275ac90214=<cluster-domain>:<registry-port>/amq7-amq-interconnect-operator-metadata:14cc4a4e", "/usr/local/bin/oc image mirror -f mapping-ic.yaml -a /home/customer-user/.docker/config.json --insecure=true --registry-config /home/customer-user/.docker/config.json --keep-manifest-list=true", "--- apiVersion: operator.openshift.io/v1alpha1 kind: ImageContentSourcePolicy metadata: labels: operators.openshift.org/catalog: \"true\" name: my-operator-icsp spec: repositoryDigestMirrors: - mirrors: - <cluster-domain>:<registry-port>/amq7-amq-interconnect-operator source: registry.redhat.io/amq7/amq-interconnect-operator", "/usr/local/bin/oc create -f imageContentSourcePolicy.yaml", "apiVersion: operators.coreos.com/v1alpha1 kind: CatalogSource metadata: name: iib namespace: openshift-marketplace spec: image: <cluster-domain>:<registry-port>/iib:my-operator-iib sourceType: grpc", "/usr/local/bin/oc apply -f catalogSource.yaml" ]
https://docs.redhat.com/en/documentation/red_hat_amq/2021.q1/html/deploying_amq_interconnect_on_openshift/deploying-disconnected-router-ocp
4.3. Tracking the Bind DN for Plug-in Initiated Updates
4.3. Tracking the Bind DN for Plug-in Initiated Updates One change to an entry can trigger other, automatic changes across the directory tree. When a user is deleted, for example, that user is automatically removed from any groups it belonged to by the Referential Integrity Postoperation plug-in. The initial action is shown in the entry as being performed by whatever user account is bound to the server, but all related updates (by default) are shown as being performed by the plug-in, with no information about which user initiated that update. For example, using the MemberOf Plug-in to update user entries with group membership, the update to the group account is shown as being performed by the bound user, while the edit to the user entry is shown as being performed by the MemberOf Plug-in: The nsslapd-plugin-binddn-tracking parameter enables the server to track which user originated an update operation, as well as the internal plug-in which actually performed it. The bound user is shown in the modifiersname and creatorsname operational attributes, while the plug-in which performed it is shown in the internalModifiersname and internalCreatorsname operational attributes. For example: The nsslapd-plugin-binddn-tracking parameter tracks and maintains the relationship between the bound user and any updates performed for that connection. Note The internalModifiersname and internalCreatorsname attributes always show a plug-in as the identity. This plug-in could be an additional plug-in, such as the MemberOf Plug-in. If the change is made by the core Directory Server, then the plug-in is the database plug-in, cn=ldbm database,cn=plugins,cn=config . 4.3.1. Enabling Tracking the Bind DN for Plug-in Initiated Updates Using the Command Line To enable tracking the Bind DN for plug-in-initiated updates using the command line: Set the nsslapd-plugin-binddn-tracking parameter to on : Restart the instance: 4.3.2. Enabling Tracking the Bind DN for Plug-in Initiated Updates Using the Web Console To enable tracking the Bind DN for plug-in-initiated updates using the web console: Open the Directory Server user interface in the web console. See Section 1.4, "Logging Into Directory Server Using the Web Console" . Select the instance. Open the Server Settings menu, and select the Server Settings entry. On the Advanced Settings tab, select Enable Plugin Bind DN Tracking . Click Save . Restart the instance. See Section 1.5.2, "Starting and Stopping a Directory Server Instance Using the Web Console" .
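To verify the effect after enabling the parameter, a hedged sketch using a standard ldapsearch query; the entry DN is an example, and the operational attributes must be requested explicitly:

ldapsearch -D "cn=Directory Manager" -W -H ldap://server.example.com -b "uid=example,ou=people,dc=example,dc=com" -s base "(objectClass=*)" modifiersName internalModifiersName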
[ "dn: cn= example_group ,ou=groups,dc=example,dc=com modifiersname: uid= example ,ou=people,dc=example,dc=com dn: uid= example ,ou=people,dc=example,dc=com modifiersname: cn=memberOf plugin,cn=plugins,cn=config", "dn: uid= example ,ou=people,dc=example,dc=com modifiersname: uid= admin ,ou=people,dc=example,dc=com internalModifiersname: cn=memberOf plugin,cn=plugins,cn=config", "dsconf -D \"cn=Directory Manager\" ldap://server.example.com config replace nsslapd-plugin-binddn-tracking=on", "dsctl instance_name restart" ]
https://docs.redhat.com/en/documentation/red_hat_directory_server/11/html/administration_guide/tracking_the_bind_dn_for_plug-in-initiated_updates
Chapter 23. Branding a Red Hat Quay deployment on the legacy UI
Chapter 23. Branding a Red Hat Quay deployment on the legacy UI You can brand the UI of your Red Hat Quay deployment by changing the registry title, logo, footer image, and by directing users to a website embedded in the footer image. Procedure Update your Red Hat Quay config.yaml file to add the following parameters: BRANDING: logo: 1 footer_img: 2 footer_url: 3 --- REGISTRY_TITLE: 4 REGISTRY_TITLE_SHORT: 5 1 The URL of the image that will appear at the top of your Red Hat Quay deployment. 2 The URL of the image that will appear at the bottom of your Red Hat Quay deployment. 3 The URL of the website that users will be directed to when clicking the footer image. 4 The long-form title for the registry. This is displayed in frontend of your Red Hat Quay deployment, for example, at the sign in page of your organization. 5 The short-form title for the registry. The title is displayed on various pages of your organization, for example, as the title of the tutorial on your organization's Tutorial page. Restart your Red Hat Quay deployment. After restarting, your Red Hat Quay deployment is updated with a new logo, footer image, and footer image URL.
[ "BRANDING: logo: 1 footer_img: 2 footer_url: 3 --- REGISTRY_TITLE: 4 REGISTRY_TITLE_SHORT: 5" ]
https://docs.redhat.com/en/documentation/red_hat_quay/3.12/html/manage_red_hat_quay/branding-quay-deployment
Chapter 1. Disaster Recovery Solutions
Chapter 1. Disaster Recovery Solutions Red Hat Virtualization supports two types of disaster recovery solutions to ensure that environments can recover when a site outage occurs. Both solutions support two sites, and both require replicated storage. Active-Active Disaster Recovery This solution is implemented using a stretch cluster configuration. This means that there is a single RHV environment with a cluster that contains hosts capable of running the required virtual machines in the primary and secondary site. Virtual machines automatically migrate to hosts in the secondary site if an outage occurs. However, the environment must meet latency and networking requirements. See Active-Active Overview for more information. Active-Passive Disaster Recovery Also referred to as site-to-site failover, this disaster recovery solution is implemented by configuring two separate RHV environments: the active primary environment, and the passive secondary (backup) environment. Failover and failback between sites must be manually executed, and is managed by Ansible. See Active-Passive Overview for more information.
null
https://docs.redhat.com/en/documentation/red_hat_virtualization/4.4/html/disaster_recovery_guide/disaster_recovery_solutions
probe::vm.kmem_cache_alloc
probe::vm.kmem_cache_alloc Name probe::vm.kmem_cache_alloc - Fires when kmem_cache_alloc is requested Synopsis vm.kmem_cache_alloc Values bytes_alloc allocated Bytes ptr pointer to the kmemory allocated name name of the probe point bytes_req requested Bytes gfp_flags type of kmemory to allocate caller_function name of the caller function. gfp_flag_name type of kmemory to allocate(in string format) call_site address of the function calling this kmemory function.
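A minimal sketch of using this probe point from the command line with the variables listed above; the output format is arbitrary:

stap -e 'probe vm.kmem_cache_alloc { printf("%s: requested %d bytes, allocated %d bytes for %s\n", name, bytes_req, bytes_alloc, caller_function) }'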
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/systemtap_tapset_reference/api-vm-kmem-cache-alloc
Chapter 23. Other Configurations
Chapter 23. Other Configurations 23.1. Configuring the kernel on overcloud nodes OpenStack Platform director includes parameters that configure the kernel on overcloud nodes. ExtraKernelModules Kernel modules to load. The modules names are listed as a hash key with an empty value: ExtraKernelPackages Kernel-related packages to install prior to loading the kernel modules from ExtraKernelModules . The package names are listed as a hash key with an empty value. ExtraSysctlSettings Hash of sysctl settings to apply. Set the value of each parameter using the value key. This example shows the syntax of these parameters in an environment file: 23.2. Configuring External Load Balancing An Overcloud uses multiple Controllers together as a high availability cluster, which ensures maximum operational performance for your OpenStack services. In addition, the cluster provides load balancing for access to the OpenStack services, which evenly distributes traffic to the Controller nodes and reduces server overload for each node. It is also possible to use an external load balancer to perform this distribution. For example, an organization might use their own hardware-based load balancer to handle traffic distribution to the Controller nodes. For more information about configuring external load balancing, see the dedicated External Load Balancing for the Overcloud guide for full instructions. 23.3. Configuring IPv6 Networking As a default, the Overcloud uses Internet Protocol version 4 (IPv4) to configure the service endpoints. However, the Overcloud also supports Internet Protocol version 6 (IPv6) endpoints, which is useful for organizations that support IPv6 infrastructure. The director includes a set of environment files to help with creating IPv6-based Overclouds. For more information about configuring IPv6 in the Overcloud, see the dedicated IPv6 Networking for the Overcloud guide for full instructions.
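For example, assuming the kernel parameters above are saved in an environment file named kernel-settings.yaml (a hypothetical file name and path), the file is typically passed to the overcloud deployment command with the -e option:

openstack overcloud deploy --templates -e /home/stack/templates/kernel-settings.yaml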
[ "ExtraKernelModules: <MODULE_NAME>: {}", "ExtraKernelPackages: <PACKAGE_NAME>: {}", "ExtraSysctlSettings: <KERNEL_PARAMETER>: value: <VALUE>", "parameter_defaults: ExtraKernelModules: iscsi_target_mod: {} ExtraKernelPackages: iscsi-initiator-utils: {} ExtraSysctlSettings: dev.scsi.logging_level: value: 1" ]
https://docs.redhat.com/en/documentation/red_hat_openstack_platform/16.0/html/advanced_overcloud_customization/other_configurations
Virtualization
Virtualization OpenShift Container Platform 4.9 OpenShift Virtualization installation, usage, and release notes Red Hat OpenShift Documentation Team
[ "oc annotate --overwrite -n openshift-cnv hyperconverged kubevirt-hyperconverged kubevirt.kubevirt.io/jsonpatch='[ { \"op\": \"add\", \"path\": \"/spec/configuration/cpuModel\", \"value\": \"<cpu_model>\" 1 } ]'", "ovirt-aaa-jdbc-tool user unlock admin", "The server doesn't have a resource type \"kind: VirtualMachine, apiVersion: kubevirt.io/v1\"", "apiVersion: kubevirt.io/v1alpha3 kind: VirtualMachine metadata: annotations:", "Memory overhead per infrastructure node ~ 150 MiB", "Memory overhead per worker node ~ 360 MiB", "Memory overhead per virtual machine ~ (1.002 * requested memory) + 146 MiB + 8 MiB * (number of vCPUs) \\ 1 + 16 MiB * (number of graphics devices) 2", "CPU overhead for infrastructure nodes ~ 4 cores", "CPU overhead for worker nodes ~ 2 cores + CPU overhead per virtual machine", "Aggregated storage overhead per node ~ 10 GiB", "apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: hco-operatorhub namespace: openshift-cnv spec: source: redhat-operators sourceNamespace: openshift-marketplace name: kubevirt-hyperconverged startingCSV: kubevirt-hyperconverged-operator.v4.9.7 channel: \"stable\" config: 1", "apiVersion: hco.kubevirt.io/v1beta1 kind: HyperConverged metadata: name: kubevirt-hyperconverged namespace: openshift-cnv spec: infra: nodePlacement: 1 workloads: nodePlacement:", "apiVersion: hostpathprovisioner.kubevirt.io/v1beta1 kind: HostPathProvisioner metadata: name: hostpath-provisioner spec: imagePullPolicy: IfNotPresent pathConfig: path: \"</path/to/backing/directory>\" useNamingPrefix: false workload: 1", "apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: hco-operatorhub namespace: openshift-cnv spec: source: redhat-operators sourceNamespace: openshift-marketplace name: kubevirt-hyperconverged startingCSV: kubevirt-hyperconverged-operator.v4.9.7 channel: \"stable\" config: nodeSelector: example.io/example-infra-key: example-infra-value", "apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: hco-operatorhub namespace: openshift-cnv spec: source: redhat-operators sourceNamespace: openshift-marketplace name: kubevirt-hyperconverged startingCSV: kubevirt-hyperconverged-operator.v4.9.7 channel: \"stable\" config: tolerations: - key: \"key\" operator: \"Equal\" value: \"virtualization\" effect: \"NoSchedule\"", "apiVersion: hco.kubevirt.io/v1beta1 kind: HyperConverged metadata: name: kubevirt-hyperconverged namespace: openshift-cnv spec: infra: nodePlacement: nodeSelector: example.io/example-infra-key: example-infra-value workloads: nodePlacement: nodeSelector: example.io/example-workloads-key: example-workloads-value", "apiVersion: hco.kubevirt.io/v1beta1 kind: HyperConverged metadata: name: kubevirt-hyperconverged namespace: openshift-cnv spec: infra: nodePlacement: affinity: nodeAffinity: requiredDuringSchedulingIgnoredDuringExecution: nodeSelectorTerms: - matchExpressions: - key: example.io/example-infra-key operator: In values: - example-infra-value workloads: nodePlacement: affinity: nodeAffinity: requiredDuringSchedulingIgnoredDuringExecution: nodeSelectorTerms: - matchExpressions: - key: example.io/example-workloads-key operator: In values: - example-workloads-value preferredDuringSchedulingIgnoredDuringExecution: - weight: 1 preference: matchExpressions: - key: example.io/num-cpus operator: Gt values: - 8", "apiVersion: hco.kubevirt.io/v1beta1 kind: HyperConverged metadata: name: kubevirt-hyperconverged namespace: openshift-cnv spec: workloads: nodePlacement: tolerations: - key: 
\"key\" operator: \"Equal\" value: \"virtualization\" effect: \"NoSchedule\"", "apiVersion: hostpathprovisioner.kubevirt.io/v1beta1 kind: HostPathProvisioner metadata: name: hostpath-provisioner spec: imagePullPolicy: IfNotPresent pathConfig: path: \"</path/to/backing/directory>\" useNamingPrefix: false workload: nodeSelector: example.io/example-workloads-key: example-workloads-value", "apiVersion: v1 kind: Namespace metadata: name: openshift-cnv --- apiVersion: operators.coreos.com/v1 kind: OperatorGroup metadata: name: kubevirt-hyperconverged-group namespace: openshift-cnv spec: targetNamespaces: - openshift-cnv --- apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: hco-operatorhub namespace: openshift-cnv spec: source: redhat-operators sourceNamespace: openshift-marketplace name: kubevirt-hyperconverged startingCSV: kubevirt-hyperconverged-operator.v4.9.7 channel: \"stable\" 1", "oc apply -f <file name>.yaml", "apiVersion: hco.kubevirt.io/v1beta1 kind: HyperConverged metadata: name: kubevirt-hyperconverged namespace: openshift-cnv spec:", "oc apply -f <file_name>.yaml", "watch oc get csv -n openshift-cnv", "NAME DISPLAY VERSION REPLACES PHASE kubevirt-hyperconverged-operator.v4.9.7 OpenShift Virtualization 4.9.7 Succeeded", "oc get ConsoleCLIDownload virtctl-clidownloads-kubevirt-hyperconverged -o yaml", "tar -xvf <virtctl-version-distribution.arch>.tar.gz", "chmod +x <virtctl-file-name>", "echo USDPATH", "C:\\> path", "echo USDPATH", "yum install kubevirt-virtctl", "subscription-manager repos --enable <repository>", "oc delete apiservices v1alpha3.subresources.kubevirt.io -n openshift-cnv", "oc delete HyperConverged kubevirt-hyperconverged -n openshift-cnv", "oc delete subscription kubevirt-hyperconverged -n openshift-cnv", "CSV_NAME=USD(oc get csv -n openshift-cnv -o=jsonpath=\"{.items[0].metadata.name}\")", "oc delete csv USD{CSV_NAME} -n openshift-cnv", "clusterserviceversion.operators.coreos.com \"kubevirt-hyperconverged-operator.v4.9.7\" deleted", "oc edit hco -n openshift-cnv kubevirt-hyperconverged", "apiVersion: hco.kubevirt.io/v1beta1 kind: HyperConverged metadata: name: kubevirt-hyperconverged spec: workloadUpdateStrategy: workloadUpdateMethods: 1 - LiveMigrate 2 - Evict 3 batchEvictionSize: 10 4 batchEvictionInterval: \"1m0s\" 5", "oc get csv -n openshift-cnv", "VERSION REPLACES PHASE 4.9.0 kubevirt-hyperconverged-operator.v4.8.2 Installing 4.9.0 kubevirt-hyperconverged-operator.v4.9.0 Replacing", "oc get hco -n openshift-cnv kubevirt-hyperconverged -o=jsonpath='{range .status.conditions[*]}{.type}{\"\\t\"}{.status}{\"\\t\"}{.message}{\"\\n\"}{end}'", "ReconcileComplete True Reconcile completed successfully Available True Reconcile completed successfully Progressing False Reconcile completed successfully Degraded False Reconcile completed successfully Upgradeable True Reconcile completed successfully", "kubectl get vmi -l kubevirt.io/outdatedLauncherImage --all-namespaces", "oc get scc kubevirt-controller -o yaml", "oc get clusterrole kubevirt-controller -o yaml", "virtctl help", "virtctl image-upload -h", "virtctl options", "virtctl guestfs -n <namespace> <pvc_name> 1", "apiVersion: kubevirt.io/v1 kind: VirtualMachine metadata: labels: app: <vm_name> 1 name: <vm_name> spec: dataVolumeTemplates: - apiVersion: cdi.kubevirt.io/v1beta1 kind: DataVolume metadata: name: <vm_name> spec: sourceRef: kind: DataSource name: rhel9 namespace: openshift-virtualization-os-images storage: resources: requests: storage: 30Gi running: false template: metadata: 
labels: kubevirt.io/domain: <vm_name> spec: domain: cpu: cores: 1 sockets: 2 threads: 1 devices: disks: - disk: bus: virtio name: rootdisk - disk: bus: virtio name: cloudinitdisk interfaces: - masquerade: {} name: default rng: {} features: smm: enabled: true firmware: bootloader: efi: {} resources: requests: memory: 8Gi evictionStrategy: LiveMigrate networks: - name: default pod: {} volumes: - dataVolume: name: <vm_name> name: rootdisk - cloudInitNoCloud: userData: |- #cloud-config user: cloud-user password: '<password>' 2 chpasswd: { expire: False } name: cloudinitdisk", "oc create -f <vm_manifest_file>.yaml", "virtctl start <vm_name>", "apiVersion: kubevirt.io/v1 kind: VirtualMachine spec: RunStrategy: Always 1 template:", "oc edit <object_type> <object_ID>", "oc apply <object_type> <object_ID>", "oc edit vm example", "disks: - bootOrder: 1 1 disk: bus: virtio name: containerdisk - disk: bus: virtio name: cloudinitdisk - cdrom: bus: virtio name: cd-drive-1 interfaces: - boot Order: 2 2 macAddress: '02:96:c4:00:00' masquerade: {} name: default", "oc delete vm <vm_name>", "oc get vmis", "oc delete vmi <vmi_name>", "remmina --connect /path/to/console.rdp", "virtctl expose vm <fedora-vm> --port=22 --name=fedora-vm-ssh --type=NodePort 1", "oc get svc", "NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE fedora-vm-ssh NodePort 127.0.0.1 <none> 22:32551/TCP 6s", "ssh username@<node_IP_address> -p 32551", "apiVersion: kubevirt.io/v1 kind: VirtualMachine metadata: namespace: ssh-ns 1 name: vm-ssh spec: running: false template: metadata: labels: kubevirt.io/vm: vm-ssh special: vm-ssh 2 spec: domain: devices: disks: - disk: bus: virtio name: containerdisk - disk: bus: virtio name: cloudinitdisk interfaces: - masquerade: {} 3 name: testmasquerade 4 rng: {} machine: type: \"\" resources: requests: memory: 1024M networks: - name: testmasquerade pod: {} volumes: - name: containerdisk containerDisk: image: kubevirt/fedora-cloud-container-disk-demo - name: cloudinitdisk cloudInitNoCloud: userData: | #cloud-config user: fedora password: fedora chpasswd: {expire: False}", "oc create -f <path_for_the_VM_YAML_file>", "virtctl start vm-ssh", "apiVersion: v1 kind: Service metadata: name: svc-ssh 1 namespace: ssh-ns 2 spec: ports: - targetPort: 22 3 protocol: TCP port: 27017 selector: special: vm-ssh 4 type: NodePort", "oc create -f <path_for_the_service_YAML_file>", "oc get vmi", "NAME AGE PHASE IP NODENAME vm-ssh 6s Running 10.244.196.152 node01", "oc get svc", "NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE svc-ssh NodePort 10.106.236.208 <none> 27017:30093/TCP 22s", "oc get node <node_name> -o wide", "NAME STATUS ROLES AGE VERSION INTERNAL-IP EXTERNAL-IP node01 Ready worker 6d22h v1.22.1 192.168.55.101 <none>", "ssh [email protected] -p 30093", "virtctl console <VMI>", "virtctl vnc <VMI>", "virtctl vnc <VMI> -v 4", "oc login -u <user> https://<cluster.example.com>:8443", "oc describe vmi <windows-vmi-name>", "spec: networks: - name: default pod: {} - multus: networkName: cnv-bridge name: bridge-net status: interfaces: - interfaceName: eth0 ipAddress: 198.51.100.0/24 ipAddresses: 198.51.100.0/24 mac: a0:36:9f:0f:b1:70 name: default - interfaceName: eth1 ipAddress: 192.0.2.0/24 ipAddresses: 192.0.2.0/24 2001:db8::/32 mac: 00:17:a4:77:77:25 name: bridge-net", "oc adm cordon <node_name>", "oc adm drain <node_name> --force=true", "oc delete node <node_name>", "oc get vmis", "yum install -y qemu-guest-agent", "systemctl enable --now qemu-guest-agent", "spec: domain: devices: disks: - name: virtiocontainerdisk bootOrder: 
2 1 cdrom: bus: sata volumes: - containerDisk: image: container-native-virtualization/virtio-win name: virtiocontainerdisk", "oc edit vm <vm-name>", "spec: domain: devices: disks: - name: virtiocontainerdisk bootOrder: 2 cdrom: bus: sata volumes: - containerDisk: image: container-native-virtualization/virtio-win name: virtiocontainerdisk", "spec: domain: devices: disks: - name: virtiocontainerdisk bootOrder: 2 1 cdrom: bus: sata volumes: - containerDisk: image: container-native-virtualization/virtio-win name: virtiocontainerdisk", "oc edit vm <vm-name>", "spec: domain: devices: disks: - name: virtiocontainerdisk bootOrder: 2 cdrom: bus: sata volumes: - containerDisk: image: container-native-virtualization/virtio-win name: virtiocontainerdisk", "apiVersion: kubevirt.io/v1 kind: VirtualMachine metadata: name: with-limits spec: running: false template: spec: domain: resources: requests: memory: 128Mi limits: memory: 256Mi 1", "metadata: name: example-vm-node-selector apiVersion: kubevirt.io/v1 kind: VirtualMachine spec: template: spec: nodeSelector: example-key-1: example-value-1 example-key-2: example-value-2", "metadata: name: example-vm-pod-affinity apiVersion: kubevirt.io/v1 kind: VirtualMachine spec: affinity: podAffinity: requiredDuringSchedulingIgnoredDuringExecution: 1 - labelSelector: matchExpressions: - key: example-key-1 operator: In values: - example-value-1 topologyKey: kubernetes.io/hostname podAntiAffinity: preferredDuringSchedulingIgnoredDuringExecution: 2 - weight: 100 podAffinityTerm: labelSelector: matchExpressions: - key: example-key-2 operator: In values: - example-value-2 topologyKey: kubernetes.io/hostname", "metadata: name: example-vm-node-affinity apiVersion: kubevirt.io/v1 kind: VirtualMachine spec: affinity: nodeAffinity: requiredDuringSchedulingIgnoredDuringExecution: 1 nodeSelectorTerms: - matchExpressions: - key: example.io/example-key operator: In values: - example-value-1 - example-value-2 preferredDuringSchedulingIgnoredDuringExecution: 2 - weight: 1 preference: matchExpressions: - key: example-node-label-key operator: In values: - example-node-label-value", "metadata: name: example-vm-tolerations apiVersion: kubevirt.io/v1 kind: VirtualMachine spec: tolerations: - key: \"key\" operator: \"Equal\" value: \"virtualization\" effect: \"NoSchedule\"", "oc edit hco -n openshift-cnv kubevirt-hyperconverged", "apiVersion: hco.kubevirt.io/v1beta1 kind: HyperConverged metadata: name: kubevirt-hyperconverged namespace: openshift-cnv spec: certConfig: ca: duration: 48h0m0s renewBefore: 24h0m0s 1 server: duration: 24h0m0s 2 renewBefore: 12h0m0s 3", "certConfig: ca: duration: 4h0m0s renewBefore: 1h0m0s server: duration: 4h0m0s renewBefore: 4h0m0s", "error: hyperconvergeds.hco.kubevirt.io \"kubevirt-hyperconverged\" could not be patched: admission webhook \"validate-hco.kubevirt.io\" denied the request: spec.certConfig: ca.duration is smaller than server.duration", "kubevirt_vm: namespace: name: cpu_cores: memory: disks: - name: volume: containerDisk: image: disk: bus:", "kubevirt_vm: namespace: default name: vm1 cpu_cores: 1 memory: 64Mi disks: - name: containerdisk volume: containerDisk: image: kubevirt/cirros-container-disk-demo:latest disk: bus: virtio", "kubevirt_vm: namespace: default name: vm1 state: running 1 cpu_cores: 1", "ansible-playbook create-vm.yaml", "(...) 
TASK [Create my first VM] ************************************************************************ changed: [localhost] PLAY RECAP ******************************************************************************************************** localhost : ok=2 changed=1 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0", "ansible-playbook create-vm.yaml", "--- - name: Ansible Playbook 1 hosts: localhost connection: local tasks: - name: Create my first VM kubevirt_vm: namespace: default name: vm1 cpu_cores: 1 memory: 64Mi disks: - name: containerdisk volume: containerDisk: image: kubevirt/cirros-container-disk-demo:latest disk: bus: virtio", "apiversion: kubevirt.io/v1 kind: VirtualMachine metadata: labels: special: vm-secureboot name: vm-secureboot spec: template: metadata: labels: special: vm-secureboot spec: domain: devices: disks: - disk: bus: virtio name: containerdisk features: acpi: {} smm: enabled: true 1 firmware: bootloader: efi: secureBoot: true 2 #", "oc create -f <file_name>.yaml", "apiVersion: \"k8s.cni.cncf.io/v1\" kind: NetworkAttachmentDefinition metadata: name: pxe-net-conf spec: config: '{ \"cniVersion\": \"0.3.1\", \"name\": \"pxe-net-conf\", \"plugins\": [ { \"type\": \"cnv-bridge\", \"bridge\": \"br1\", \"vlan\": 1 1 }, { \"type\": \"cnv-tuning\" 2 } ] }'", "oc create -f pxe-net-conf.yaml", "interfaces: - masquerade: {} name: default - bridge: {} name: pxe-net macAddress: de:00:00:00:00:de bootOrder: 1", "devices: disks: - disk: bus: virtio name: containerdisk bootOrder: 2", "networks: - name: default pod: {} - name: pxe-net multus: networkName: pxe-net-conf", "oc create -f vmi-pxe-boot.yaml", "virtualmachineinstance.kubevirt.io \"vmi-pxe-boot\" created", "oc get vmi vmi-pxe-boot -o yaml | grep -i phase phase: Running", "virtctl vnc vmi-pxe-boot", "virtctl console vmi-pxe-boot", "ip addr", "3. 
eth1: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN group default qlen 1000 link/ether de:00:00:00:00:de brd ff:ff:ff:ff:ff:ff", "kind: VirtualMachine spec: template: domain: resources: requests: memory: 1024M memory: guest: 2048M", "oc create -f <file_name>.yaml", "kind: VirtualMachine spec: template: domain: resources: overcommitGuestOverhead: true requests: memory: 1024M", "oc create -f <file_name>.yaml", "kind: VirtualMachine spec: domain: resources: requests: memory: \"4Gi\" 1 memory: hugepages: pageSize: \"1Gi\" 2", "oc apply -f <virtual_machine>.yaml", "apiVersion: kubevirt.io/v1 kind: VirtualMachine metadata: name: myvm spec: template: spec: domain: cpu: features: - name: apic 1 policy: require 2", "apiVersion: kubevirt.io/v1 kind: VirtualMachine metadata: name: myvm spec: template: spec: domain: cpu: model: Conroe 1", "apiVersion: kubevirt/v1alpha3 kind: VirtualMachine metadata: name: myvm spec: template: spec: domain: cpu: model: host-model 1", "apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfig metadata: labels: machineconfiguration.openshift.io/role: worker 1 name: 100-worker-iommu 2 spec: config: ignition: version: 3.2.0 kernelArguments: - intel_iommu=on 3", "oc create -f 100-worker-kernel-arg-iommu.yaml", "oc get MachineConfig", "lspci -nnv | grep -i nvidia", "02:01.0 3D controller [0302]: NVIDIA Corporation GV100GL [Tesla V100 PCIe 32GB] [10de:1eb8] (rev a1)", "variant: openshift version: 4.9.0 metadata: name: 100-worker-vfiopci labels: machineconfiguration.openshift.io/role: worker 1 storage: files: - path: /etc/modprobe.d/vfio.conf mode: 0644 overwrite: true contents: inline: | options vfio-pci ids=10de:1eb8 2 - path: /etc/modules-load.d/vfio-pci.conf 3 mode: 0644 overwrite: true contents: inline: vfio-pci", "butane 100-worker-vfiopci.bu -o 100-worker-vfiopci.yaml", "oc apply -f 100-worker-vfiopci.yaml", "oc get MachineConfig", "NAME GENERATEDBYCONTROLLER IGNITIONVERSION AGE 00-master d3da910bfa9f4b599af4ed7f5ac270d55950a3a1 3.2.0 25h 00-worker d3da910bfa9f4b599af4ed7f5ac270d55950a3a1 3.2.0 25h 01-master-container-runtime d3da910bfa9f4b599af4ed7f5ac270d55950a3a1 3.2.0 25h 01-master-kubelet d3da910bfa9f4b599af4ed7f5ac270d55950a3a1 3.2.0 25h 01-worker-container-runtime d3da910bfa9f4b599af4ed7f5ac270d55950a3a1 3.2.0 25h 01-worker-kubelet d3da910bfa9f4b599af4ed7f5ac270d55950a3a1 3.2.0 25h 100-worker-iommu 3.2.0 30s 100-worker-vfiopci-configuration 3.2.0 30s", "lspci -nnk -d 10de:", "04:00.0 3D controller [0302]: NVIDIA Corporation GP102GL [Tesla P40] [10de:1eb8] (rev a1) Subsystem: NVIDIA Corporation Device [10de:1eb8] Kernel driver in use: vfio-pci Kernel modules: nouveau", "oc edit hyperconverged kubevirt-hyperconverged -n openshift-cnv", "apiVersion: hco.kubevirt.io/v1 kind: HyperConverged metadata: name: kubevirt-hyperconverged namespace: openshift-cnv spec: permittedHostDevices: 1 pciHostDevices: 2 - pciDeviceSelector: \"10DE:1DB6\" 3 resourceName: \"nvidia.com/GV100GL_Tesla_V100\" 4 - pciDeviceSelector: \"10DE:1EB8\" resourceName: \"nvidia.com/TU104GL_Tesla_T4\" - pciDeviceSelector: \"8086:6F54\" resourceName: \"intel.com/qat\" externalResourceProvider: true 5", "oc describe node <node_name>", "Capacity: cpu: 64 devices.kubevirt.io/kvm: 110 devices.kubevirt.io/tun: 110 devices.kubevirt.io/vhost-net: 110 ephemeral-storage: 915128Mi hugepages-1Gi: 0 hugepages-2Mi: 0 memory: 131395264Ki nvidia.com/GV100GL_Tesla_V100 1 nvidia.com/TU104GL_Tesla_T4 1 intel.com/qat: 1 pods: 250 Allocatable: cpu: 63500m devices.kubevirt.io/kvm: 110 
devices.kubevirt.io/tun: 110 devices.kubevirt.io/vhost-net: 110 ephemeral-storage: 863623130526 hugepages-1Gi: 0 hugepages-2Mi: 0 memory: 130244288Ki nvidia.com/GV100GL_Tesla_V100 1 nvidia.com/TU104GL_Tesla_T4 1 intel.com/qat: 1 pods: 250", "oc edit hyperconverged kubevirt-hyperconverged -n openshift-cnv", "apiVersion: hco.kubevirt.io/v1 kind: HyperConverged metadata: name: kubevirt-hyperconverged namespace: openshift-cnv spec: permittedHostDevices: pciHostDevices: - pciDeviceSelector: \"10DE:1DB6\" resourceName: \"nvidia.com/GV100GL_Tesla_V100\" - pciDeviceSelector: \"10DE:1EB8\" resourceName: \"nvidia.com/TU104GL_Tesla_T4\"", "oc describe node <node_name>", "Capacity: cpu: 64 devices.kubevirt.io/kvm: 110 devices.kubevirt.io/tun: 110 devices.kubevirt.io/vhost-net: 110 ephemeral-storage: 915128Mi hugepages-1Gi: 0 hugepages-2Mi: 0 memory: 131395264Ki nvidia.com/GV100GL_Tesla_V100 1 nvidia.com/TU104GL_Tesla_T4 1 intel.com/qat: 0 pods: 250 Allocatable: cpu: 63500m devices.kubevirt.io/kvm: 110 devices.kubevirt.io/tun: 110 devices.kubevirt.io/vhost-net: 110 ephemeral-storage: 863623130526 hugepages-1Gi: 0 hugepages-2Mi: 0 memory: 130244288Ki nvidia.com/GV100GL_Tesla_V100 1 nvidia.com/TU104GL_Tesla_T4 1 intel.com/qat: 0 pods: 250", "apiVersion: kubevirt.io/v1 kind: VirtualMachine spec: domain: devices: hostDevices: - deviceName: nvidia.com/TU104GL_Tesla_T4 1 name: hostdevices1", "lspci -nnk | grep NVIDIA", "02:01.0 3D controller [0302]: NVIDIA Corporation GV100GL [Tesla V100 PCIe 32GB] [10de:1eb8] (rev a1)", "apiVersion: kubevirt.io/v1 kind: VirtualMachine metadata: labels: kubevirt.io/vm: vm2-rhel84-watchdog name: <vm-name> spec: running: false template: metadata: labels: kubevirt.io/vm: vm2-rhel84-watchdog spec: domain: devices: watchdog: name: <watchdog> i6300esb: action: \"poweroff\" 1", "oc apply -f <file_name>.yaml", "lspci | grep watchdog -i", "echo c > /proc/sysrq-trigger", "pkill -9 watchdog", "yum install watchdog", "#watchdog-device = /dev/watchdog", "systemctl enable --now watchdog.service", "oc get ns", "oc create configmap <configmap-name> --from-file=</path/to/file/ca.pem>", "apiVersion: v1 kind: ConfigMap metadata: name: tls-certs data: ca.pem: | -----BEGIN CERTIFICATE----- ... 
<base64 encoded cert> -----END CERTIFICATE-----", "apiVersion: v1 kind: Secret metadata: name: endpoint-secret 1 labels: app: containerized-data-importer type: Opaque data: accessKeyId: \"\" 2 secretKey: \"\" 3", "oc apply -f endpoint-secret.yaml", "apiVersion: kubevirt.io/v1 kind: VirtualMachine metadata: creationTimestamp: null labels: kubevirt.io/vm: vm-fedora-datavolume name: vm-fedora-datavolume 1 spec: dataVolumeTemplates: - metadata: creationTimestamp: null name: fedora-dv 2 spec: storage: resources: requests: storage: 10Gi storageClassName: local source: http: 3 url: \"https://mirror.arizona.edu/fedora/linux/releases/35/Cloud/x86_64/images/Fedora-Cloud-Base-35-1.2.x86_64.qcow2\" 4 secretRef: endpoint-secret 5 certConfigMap: \"\" 6 status: {} running: true template: metadata: creationTimestamp: null labels: kubevirt.io/vm: vm-fedora-datavolume spec: domain: devices: disks: - disk: bus: virtio name: datavolumedisk1 machine: type: \"\" resources: requests: memory: 1.5Gi terminationGracePeriodSeconds: 180 volumes: - dataVolume: name: fedora-dv name: datavolumedisk1 status: {}", "oc create -f vm-fedora-datavolume.yaml", "oc get pods", "oc describe dv fedora-dv 1", "virtctl console vm-fedora-datavolume", "dd if=/dev/zero of=<loop10> bs=100M count=20", "losetup </dev/loop10>d3 <loop10> 1 2", "kind: PersistentVolume apiVersion: v1 metadata: name: <local-block-pv10> annotations: spec: local: path: </dev/loop10> 1 capacity: storage: <2Gi> volumeMode: Block 2 storageClassName: local 3 accessModes: - ReadWriteOnce persistentVolumeReclaimPolicy: Delete nodeAffinity: required: nodeSelectorTerms: - matchExpressions: - key: kubernetes.io/hostname operator: In values: - <node01> 4", "oc create -f <local-block-pv10.yaml> 1", "apiVersion: v1 kind: Secret metadata: name: endpoint-secret 1 labels: app: containerized-data-importer type: Opaque data: accessKeyId: \"\" 2 secretKey: \"\" 3", "oc apply -f endpoint-secret.yaml", "apiVersion: cdi.kubevirt.io/v1beta1 kind: DataVolume metadata: name: import-pv-datavolume 1 spec: storageClassName: local 2 source: http: url: \"https://mirror.arizona.edu/fedora/linux/releases/35/Cloud/x86_64/images/Fedora-Cloud-Base-35-1.2.x86_64.qcow2\" 3 secretRef: endpoint-secret 4 storage: volumeMode: Block 5 resources: requests: storage: 10Gi", "oc create -f import-pv-datavolume.yaml", "apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRole metadata: name: <datavolume-cloner> 1 rules: - apiGroups: [\"cdi.kubevirt.io\"] resources: [\"datavolumes/source\"] verbs: [\"*\"]", "oc create -f <datavolume-cloner.yaml> 1", "apiVersion: rbac.authorization.k8s.io/v1 kind: RoleBinding metadata: name: <allow-clone-to-user> 1 namespace: <Source namespace> 2 subjects: - kind: ServiceAccount name: default namespace: <Destination namespace> 3 roleRef: kind: ClusterRole name: datavolume-cloner 4 apiGroup: rbac.authorization.k8s.io", "oc create -f <datavolume-cloner.yaml> 1", "apiVersion: cdi.kubevirt.io/v1beta1 kind: DataVolume metadata: name: <cloner-datavolume> 1 spec: source: pvc: namespace: \"<source-namespace>\" 2 name: \"<my-favorite-vm-disk>\" 3 pvc: accessModes: - ReadWriteOnce resources: requests: storage: <2Gi> 4", "oc create -f <cloner-datavolume>.yaml", "apiVersion: kubevirt.io/v1 kind: VirtualMachine metadata: labels: kubevirt.io/vm: vm-dv-clone name: vm-dv-clone 1 spec: running: false template: metadata: labels: kubevirt.io/vm: vm-dv-clone spec: domain: devices: disks: - disk: bus: virtio name: root-disk resources: requests: memory: 64M volumes: - dataVolume: name: 
favorite-clone name: root-disk dataVolumeTemplates: - metadata: name: favorite-clone spec: storage: accessModes: - ReadWriteOnce resources: requests: storage: 2Gi source: pvc: namespace: \"source-namespace\" name: \"my-favorite-vm-disk\"", "oc create -f <vm-clone-datavolumetemplate>.yaml", "dd if=/dev/zero of=<loop10> bs=100M count=20", "losetup </dev/loop10>d3 <loop10> 1 2", "kind: PersistentVolume apiVersion: v1 metadata: name: <local-block-pv10> annotations: spec: local: path: </dev/loop10> 1 capacity: storage: <2Gi> volumeMode: Block 2 storageClassName: local 3 accessModes: - ReadWriteOnce persistentVolumeReclaimPolicy: Delete nodeAffinity: required: nodeSelectorTerms: - matchExpressions: - key: kubernetes.io/hostname operator: In values: - <node01> 4", "oc create -f <local-block-pv10.yaml> 1", "apiVersion: cdi.kubevirt.io/v1beta1 kind: DataVolume metadata: name: <cloner-datavolume> 1 spec: source: pvc: namespace: \"<source-namespace>\" 2 name: \"<my-favorite-vm-disk>\" 3 pvc: accessModes: - ReadWriteOnce resources: requests: storage: <2Gi> 4 volumeMode: Block 5", "oc create -f <cloner-datavolume>.yaml", "kind: VirtualMachine spec: domain: devices: interfaces: - name: default masquerade: {} 1 ports: 2 - port: 80 networks: - name: default pod: {}", "oc create -f <vm-name>.yaml", "apiVersion: kubevirt.io/v1 kind: VirtualMachine metadata: name: example-vm-ipv6 interfaces: - name: default masquerade: {} 1 ports: - port: 80 2 networks: - name: default pod: {} volumes: - cloudInitNoCloud: networkData: | version: 2 ethernets: eth0: dhcp4: true addresses: [ fd10:0:2::2/120 ] 3 gateway6: fd10:0:2::1 4", "oc create -f example-vm-ipv6.yaml", "oc get vmi <vmi-name> -o jsonpath=\"{.status.interfaces[*].ipAddresses}\"", "apiVersion: kubevirt.io/v1 kind: VirtualMachine metadata: name: vm-ephemeral namespace: example-namespace spec: running: false template: metadata: labels: special: key 1", "apiVersion: v1 kind: Service metadata: name: vmservice 1 namespace: example-namespace 2 spec: externalTrafficPolicy: Cluster 3 ports: - nodePort: 30000 4 port: 27017 protocol: TCP targetPort: 22 5 selector: special: key 6 type: NodePort 7", "oc create -f <service_name>.yaml", "oc get service -n example-namespace", "NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE vmservice ClusterIP 172.30.3.149 <none> 27017/TCP 2m", "NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE vmservice NodePort 172.30.232.73 <none> 27017:30000/TCP 5m", "NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE vmservice LoadBalancer 172.30.27.5 172.29.10.235,172.29.10.235 27017:31829/TCP 5s", "ssh [email protected] -p 27017", "ssh fedora@USDNODE_IP -p 30000", "apiVersion: \"k8s.cni.cncf.io/v1\" kind: NetworkAttachmentDefinition metadata: name: <bridge-network> 1 annotations: k8s.v1.cni.cncf.io/resourceName: bridge.network.kubevirt.io/<bridge-interface> 2 spec: config: '{ \"cniVersion\": \"0.3.1\", \"name\": \"<bridge-network>\", 3 \"type\": \"cnv-bridge\", 4 \"bridge\": \"<bridge-interface>\", 5 \"macspoofchk\": true, 6 \"vlan\": 1 7 }'", "oc create -f <network-attachment-definition.yaml> 1", "oc get network-attachment-definition <bridge-network>", "apiVersion: kubevirt.io/v1 kind: VirtualMachine metadata: name: <example-vm> spec: template: spec: domain: devices: interfaces: - masquerade: {} name: <default> - bridge: {} name: <bridge-net> 1 networks: - name: <default> pod: {} - name: <bridge-net> 2 multus: networkName: <network-namespace>/<a-bridge-network> 3", "oc apply -f <example-vm.yaml>", "apiVersion: sriovnetwork.openshift.io/v1 kind: 
SriovNetworkNodePolicy metadata: name: <name> 1 namespace: openshift-sriov-network-operator 2 spec: resourceName: <sriov_resource_name> 3 nodeSelector: feature.node.kubernetes.io/network-sriov.capable: \"true\" 4 priority: <priority> 5 mtu: <mtu> 6 numVfs: <num> 7 nicSelector: 8 vendor: \"<vendor_code>\" 9 deviceID: \"<device_id>\" 10 pfNames: [\"<pf_name>\", ...] 11 rootDevices: [\"<pci_bus_id>\", \"...\"] 12 deviceType: vfio-pci 13 isRdma: false 14", "oc create -f <name>-sriov-node-network.yaml", "oc get sriovnetworknodestates -n openshift-sriov-network-operator <node_name> -o jsonpath='{.status.syncStatus}'", "apiVersion: sriovnetwork.openshift.io/v1 kind: SriovNetwork metadata: name: <name> 1 namespace: openshift-sriov-network-operator 2 spec: resourceName: <sriov_resource_name> 3 networkNamespace: <target_namespace> 4 vlan: <vlan> 5 spoofChk: \"<spoof_check>\" 6 linkState: <link_state> 7 maxTxRate: <max_tx_rate> 8 minTxRate: <min_rx_rate> 9 vlanQoS: <vlan_qos> 10 trust: \"<trust_vf>\" 11 capabilities: <capabilities> 12", "oc create -f <name>-sriov-network.yaml", "oc get net-attach-def -n <namespace>", "kind: VirtualMachine spec: domain: devices: interfaces: - name: <default> 1 masquerade: {} 2 - name: <nic1> 3 sriov: {} networks: - name: <default> 4 pod: {} - name: <nic1> 5 multus: networkName: <sriov-network> 6", "oc apply -f <vm-sriov.yaml> 1", "kind: VirtualMachine spec: volumes: - cloudInitNoCloud: networkData: | version: 2 ethernets: eth1: 1 dhcp4: true 2", "kind: VirtualMachine spec: volumes: - cloudInitNoCloud: networkData: | version: 2 ethernets: eth1: 1 addresses: - 10.10.10.14/24 2", "oc describe vmi <vmi_name>", "Interfaces: Interface Name: eth0 Ip Address: 10.244.0.37/24 Ip Addresses: 10.244.0.37/24 fe80::858:aff:fef4:25/64 Mac: 0a:58:0a:f4:00:25 Name: default Interface Name: v2 Ip Address: 1.1.1.7/24 Ip Addresses: 1.1.1.7/24 fe80::f4d9:70ff:fe13:9089/64 Mac: f6:d9:70:13:90:89 Interface Name: v1 Ip Address: 1.1.1.1/24 Ip Addresses: 1.1.1.1/24 1.1.1.2/24 1.1.1.4/24 2001:de7:0:f101::1/64 2001:db8:0:f101::1/64 fe80::1420:84ff:fe10:17aa/64 Mac: 16:20:84:10:17:aa", "oc label namespace <namespace1> <namespace2> mutatevirtualmachines.kubemacpool.io=ignore", "oc label namespace <namespace1> <namespace2> mutatevirtualmachines.kubemacpool.io-", "touch machineconfig.yaml", "apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfig metadata: name: 50-set-selinux-for-hostpath-provisioner labels: machineconfiguration.openshift.io/role: worker spec: config: ignition: version: 3.2.0 systemd: units: - contents: | [Unit] Description=Set SELinux chcon for hostpath provisioner Before=kubelet.service [Service] ExecStart=/usr/bin/chcon -Rt container_file_t <backing_directory_path> 1 [Install] WantedBy=multi-user.target enabled: true name: hostpath-provisioner.service", "oc create -f machineconfig.yaml -n <namespace>", "sudo chcon -t container_file_t -R <backing_directory_path>", "touch hostpathprovisioner_cr.yaml", "apiVersion: hostpathprovisioner.kubevirt.io/v1beta1 kind: HostPathProvisioner metadata: name: hostpath-provisioner spec: imagePullPolicy: IfNotPresent pathConfig: path: \"<backing_directory_path>\" 1 useNamingPrefix: false 2 workload: 3", "oc create -f hostpathprovisioner_cr.yaml -n openshift-cnv", "touch storageclass.yaml", "apiVersion: storage.k8s.io/v1 kind: StorageClass metadata: name: hostpath-provisioner 1 provisioner: kubevirt.io/hostpath-provisioner reclaimPolicy: Delete 2 volumeBindingMode: WaitForFirstConsumer 3", "oc create -f storageclass.yaml", "apiVersion: 
cdi.kubevirt.io/v1beta1 kind: DataVolume metadata: name: <datavolume> 1 spec: source: pvc: 2 namespace: \"<source_namespace>\" 3 name: \"<my_vm_disk>\" 4 storage: 5 resources: requests: storage: 2Gi 6 storageClassName: <storage_class> 7", "apiVersion: cdi.kubevirt.io/v1beta1 kind: DataVolume metadata: name: <datavolume> 1 spec: source: pvc: 2 namespace: \"<source_namespace>\" 3 name: \"<my_vm_disk>\" 4 pvc: 5 accessModes: 6 - ReadWriteMany resources: requests: storage: 2Gi 7 volumeMode: Block 8 storageClassName: <storage_class> 9", "oc edit -n openshift-cnv storageprofile <storage_class>", "apiVersion: cdi.kubevirt.io/v1beta1 kind: StorageProfile metadata: name: <unknown_provisioner_class> spec: {} status: provisioner: <unknown_provisioner> storageClass: <unknown_provisioner_class>", "apiVersion: cdi.kubevirt.io/v1beta1 kind: StorageProfile metadata: name: <unknown_provisioner_class> spec: claimPropertySets: - accessModes: - ReadWriteOnce 1 volumeMode: Filesystem 2 status: provisioner: <unknown_provisioner> storageClass: <unknown_provisioner_class>", "apiVersion: cdi.kubevirt.io/v1beta1 kind: StorageProfile metadata: name: <provisioner_class> spec: claimPropertySets: - accessModes: - ReadWriteOnce 1 volumeMode: Filesystem 2 cloneStrategy: csi-clone 3 status: provisioner: <provisioner> storageClass: <provisioner_class>", "oc edit hco -n openshift-cnv kubevirt-hyperconverged", "apiVersion: hco.kubevirt.io/v1beta1 kind: HyperConverged metadata: name: kubevirt-hyperconverged spec: resourceRequirements: storageWorkloads: limits: cpu: \"500m\" memory: \"2Gi\" requests: cpu: \"250m\" memory: \"1Gi\"", "apiVersion: cdi.kubevirt.io/v1beta1 kind: DataVolume metadata: name: dv-ann annotations: v1.multus-cni.io/default-network: bridge-network 1 spec: source: http: url: \"example.exampleurl.com\" pvc: accessModes: - ReadWriteOnce resources: requests: storage: 1Gi", "apiVersion: cdi.kubevirt.io/v1beta1 kind: DataVolume metadata: name: preallocated-datavolume spec: source: 1 pvc: preallocation: true 2", "apiVersion: cdi.kubevirt.io/v1beta1 kind: DataVolume metadata: name: <upload-datavolume> 1 spec: source: upload: {} pvc: accessModes: - ReadWriteOnce resources: requests: storage: <2Gi> 2", "oc create -f <upload-datavolume>.yaml", "virtctl image-upload dv <datavolume_name> \\ 1 --size=<datavolume_size> \\ 2 --image-path=</path/to/image> \\ 3", "oc get dvs", "dd if=/dev/zero of=<loop10> bs=100M count=20", "losetup </dev/loop10>d3 <loop10> 1 2", "kind: PersistentVolume apiVersion: v1 metadata: name: <local-block-pv10> annotations: spec: local: path: </dev/loop10> 1 capacity: storage: <2Gi> volumeMode: Block 2 storageClassName: local 3 accessModes: - ReadWriteOnce persistentVolumeReclaimPolicy: Delete nodeAffinity: required: nodeSelectorTerms: - matchExpressions: - key: kubernetes.io/hostname operator: In values: - <node01> 4", "oc create -f <local-block-pv10.yaml> 1", "apiVersion: cdi.kubevirt.io/v1beta1 kind: DataVolume metadata: name: <upload-datavolume> 1 spec: source: upload: {} pvc: accessModes: - ReadWriteOnce resources: requests: storage: <2Gi> 2", "oc create -f <upload-datavolume>.yaml", "virtctl image-upload dv <datavolume_name> \\ 1 --size=<datavolume_size> \\ 2 --image-path=</path/to/image> \\ 3", "oc get dvs", "yum install -y qemu-guest-agent", "systemctl enable --now qemu-guest-agent", "apiVersion: snapshot.kubevirt.io/v1alpha1 kind: VirtualMachineSnapshot metadata: name: my-vmsnapshot 1 spec: source: apiGroup: kubevirt.io kind: VirtualMachine name: my-vm 2", "oc create -f 
<my-vmsnapshot>.yaml", "oc wait my-vm my-vmsnapshot --for condition=Ready", "oc describe vmsnapshot <my-vmsnapshot>", "apiVersion: snapshot.kubevirt.io/v1alpha1 kind: VirtualMachineSnapshot metadata: creationTimestamp: \"2020-09-30T14:41:51Z\" finalizers: - snapshot.kubevirt.io/vmsnapshot-protection generation: 5 name: mysnap namespace: default resourceVersion: \"3897\" selfLink: /apis/snapshot.kubevirt.io/v1alpha1/namespaces/default/virtualmachinesnapshots/my-vmsnapshot uid: 28eedf08-5d6a-42c1-969c-2eda58e2a78d spec: source: apiGroup: kubevirt.io kind: VirtualMachine name: my-vm status: conditions: - lastProbeTime: null lastTransitionTime: \"2020-09-30T14:42:03Z\" reason: Operation complete status: \"False\" 1 type: Progressing - lastProbeTime: null lastTransitionTime: \"2020-09-30T14:42:03Z\" reason: Operation complete status: \"True\" 2 type: Ready creationTime: \"2020-09-30T14:42:03Z\" readyToUse: true 3 sourceUID: 355897f3-73a0-4ec4-83d3-3c2df9486f4f virtualMachineSnapshotContentName: vmsnapshot-content-28eedf08-5d6a-42c1-969c-2eda58e2a78d 4", "apiVersion: snapshot.kubevirt.io/v1alpha1 kind: VirtualMachineRestore metadata: name: my-vmrestore 1 spec: target: apiGroup: kubevirt.io kind: VirtualMachine name: my-vm 2 virtualMachineSnapshotName: my-vmsnapshot 3", "oc create -f <my-vmrestore>.yaml", "oc get vmrestore <my-vmrestore>", "apiVersion: snapshot.kubevirt.io/v1alpha1 kind: VirtualMachineRestore metadata: creationTimestamp: \"2020-09-30T14:46:27Z\" generation: 5 name: my-vmrestore namespace: default ownerReferences: - apiVersion: kubevirt.io/v1 blockOwnerDeletion: true controller: true kind: VirtualMachine name: my-vm uid: 355897f3-73a0-4ec4-83d3-3c2df9486f4f resourceVersion: \"5512\" selfLink: /apis/snapshot.kubevirt.io/v1alpha1/namespaces/default/virtualmachinerestores/my-vmrestore uid: 71c679a8-136e-46b0-b9b5-f57175a6a041 spec: target: apiGroup: kubevirt.io kind: VirtualMachine name: my-vm virtualMachineSnapshotName: my-vmsnapshot status: complete: true 1 conditions: - lastProbeTime: null lastTransitionTime: \"2020-09-30T14:46:28Z\" reason: Operation complete status: \"False\" 2 type: Progressing - lastProbeTime: null lastTransitionTime: \"2020-09-30T14:46:28Z\" reason: Operation complete status: \"True\" 3 type: Ready deletedDataVolumes: - test-dv1 restoreTime: \"2020-09-30T14:46:28Z\" restores: - dataVolumeName: restore-71c679a8-136e-46b0-b9b5-f57175a6a041-datavolumedisk1 persistentVolumeClaim: restore-71c679a8-136e-46b0-b9b5-f57175a6a041-datavolumedisk1 volumeName: datavolumedisk1 volumeSnapshotName: vmsnapshot-28eedf08-5d6a-42c1-969c-2eda58e2a78d-volume-datavolumedisk1", "oc delete vmsnapshot <my-vmsnapshot>", "oc get vmsnapshot", "kind: PersistentVolume apiVersion: v1 metadata: name: <destination-pv> 1 annotations: spec: accessModes: - ReadWriteOnce capacity: storage: 10Gi 2 local: path: /mnt/local-storage/local/disk1 3 nodeAffinity: required: nodeSelectorTerms: - matchExpressions: - key: kubernetes.io/hostname operator: In values: - node01 4 persistentVolumeReclaimPolicy: Delete storageClassName: local volumeMode: Filesystem", "oc get pv <destination-pv> -o yaml", "spec: nodeAffinity: required: nodeSelectorTerms: - matchExpressions: - key: kubernetes.io/hostname 1 operator: In values: - node01 2", "oc label pv <destination-pv> node=node01", "apiVersion: cdi.kubevirt.io/v1beta1 kind: DataVolume metadata: name: <clone-datavolume> 1 spec: source: pvc: name: \"<source-vm-disk>\" 2 namespace: \"<source-namespace>\" 3 pvc: accessModes: - ReadWriteOnce selector: matchLabels: node: 
node01 4 resources: requests: storage: <10Gi> 5", "oc apply -f <clone-datavolume.yaml>", "apiVersion: cdi.kubevirt.io/v1beta1 kind: DataVolume metadata: name: blank-image-datavolume spec: source: blank: {} pvc: # Optional: Set the storage class or omit to accept the default # storageClassName: \"hostpath\" accessModes: - ReadWriteOnce resources: requests: storage: 500Mi", "oc create -f <blank-image-datavolume>.yaml", "apiVersion: cdi.kubevirt.io/v1beta1 kind: DataVolume metadata: name: <cloner-datavolume> 1 spec: source: pvc: namespace: \"<source-namespace>\" 2 name: \"<my-favorite-vm-disk>\" 3 storage: 4 resources: requests: storage: <2Gi> 5", "oc create -f <cloner-datavolume>.yaml", "virtctl addvolume <virtual-machine|virtual-machine-instance> --volume-name=<datavolume|PVC> [--persist] [--serial=<label-name>]", "virtctl removevolume <virtual-machine|virtual-machine-instance> --volume-name=<datavolume|PVC>", "cat > Dockerfile << EOF FROM registry.access.redhat.com/ubi8/ubi:latest AS builder ADD --chown=107:107 <vm_image>.qcow2 /disk/ 1 RUN chmod 0440 /disk/* FROM scratch COPY --from=builder /disk/* /disk/ EOF", "podman build -t <registry>/<container_disk_name>:latest .", "podman push <registry>/<container_disk_name>:latest", "apiVersion: hco.kubevirt.io/v1beta1 kind: HyperConverged metadata: name: kubevirt-hyperconverged namespace: openshift-cnv spec: storageImport: insecureRegistries: 1 - \"private-registry-example-1:5000\" - \"private-registry-example-2:5000\"", "oc edit hco -n openshift-cnv kubevirt-hyperconverged", "apiVersion: hco.kubevirt.io/v1beta1 kind: HyperConverged metadata: name: kubevirt-hyperconverged spec: scratchSpaceStorageClass: \"<storage_class>\" 1", "oc get pv <pv_name> -o yaml | grep 'persistentVolumeReclaimPolicy'", "oc patch pv <pv_name> -p '{\"spec\":{\"persistentVolumeReclaimPolicy\":\"Retain\"}}'", "oc describe pvc <pvc_name> | grep 'Mounted By:'", "oc delete pvc <pvc_name>", "oc get pv <pv_name> -o yaml > <file_name>.yaml", "oc delete pv <pv_name>", "rm -rf <path_to_share_storage>", "oc create -f <new_pv_name>.yaml", "oc get dvs", "oc delete dv <datavolume_name>", "oc patch -n openshift-cnv cm kubevirt-storage-class-defaults -p '{\"data\":{\"'USD<STORAGE_CLASS>'.accessMode\":\"ReadWriteMany\"}}'", "oc edit hco -n openshift-cnv kubevirt-hyperconverged", "apiVersion: hco.kubevirt.io/v1beta1 kind: HyperConverged metadata: name: kubevirt-hyperconverged namespace: openshift-cnv spec: liveMigrationConfig: 1 bandwidthPerMigration: 64Mi completionTimeoutPerGiB: 800 parallelMigrationsPerCluster: 5 parallelOutboundMigrationsPerNode: 2 progressTimeout: 150", "apiVersion: kubevirt.io/v1 kind: VirtualMachineInstanceMigration metadata: name: migration-job spec: vmiName: vmi-fedora", "oc create -f vmi-migrate.yaml", "oc describe vmi vmi-fedora", "Status: Conditions: Last Probe Time: <nil> Last Transition Time: <nil> Status: True Type: LiveMigratable Migration Method: LiveMigration Migration State: Completed: true End Timestamp: 2018-12-24T06:19:42Z Migration UID: d78c8962-0743-11e9-a540-fa163e0c69f1 Source Node: node2.example.com Start Timestamp: 2018-12-24T06:19:35Z Target Node: node1.example.com Target Node Address: 10.9.0.18:43891 Target Node Domain Detected: true", "oc delete vmim migration-job", "oc edit vm <custom-vm> -n <my-namespace>", "apiVersion: kubevirt.io/v1 kind: VirtualMachine metadata: name: custom-vm spec: template: spec: evictionStrategy: LiveMigrate", "virtctl restart <custom-vm> -n <my-namespace>", "oc adm cordon <node1>", "oc adm drain <node1> 
--delete-emptydir-data --ignore-daemonsets=true --force", "apiVersion: nodemaintenance.kubevirt.io/v1beta1 kind: NodeMaintenance metadata: name: maintenance-example 1 spec: nodeName: node-1.example.com 2 reason: \"Node maintenance\" 3", "oc apply -f nodemaintenance-cr.yaml", "oc describe node <node-name>", "Events: Type Reason Age From Message ---- ------ ---- ---- ------- Normal NodeNotSchedulable 61m kubelet Node node-1.example.com status is now: NodeNotSchedulable", "oc get NodeMaintenance -o yaml", "apiVersion: v1 items: - apiVersion: nodemaintenance.kubevirt.io/v1beta1 kind: NodeMaintenance metadata: spec: nodeName: node-1.example.com reason: Node maintenance status: evictionPods: 3 1 pendingPods: - pod-example-workload-0 - httpd - httpd-manual phase: Running lastError: \"Last failure message\" 2 totalpods: 5", "oc adm uncordon <node1>", "oc delete -f nodemaintenance-cr.yaml", "nodemaintenance.nodemaintenance.kubevirt.io \"maintenance-example\" deleted", "\"486\" Conroe athlon core2duo coreduo kvm32 kvm64 n270 pentium pentium2 pentium3 pentiumpro phenom qemu32 qemu64", "apic clflush cmov cx16 cx8 de fpu fxsr lahf_lm lm mca mce mmx msr mtrr nx pae pat pge pni pse pse36 sep sse sse2 sse4.1 ssse3 syscall tsc", "aes apic avx avx2 bmi1 bmi2 clflush cmov cx16 cx8 de erms fma fpu fsgsbase fxsr hle invpcid lahf_lm lm mca mce mmx movbe msr mtrr nx pae pat pcid pclmuldq pge pni popcnt pse pse36 rdtscp rtm sep smep sse sse2 sse4.1 sse4.2 ssse3 syscall tsc tsc-deadline x2apic xsave", "aes avx avx2 bmi1 bmi2 erms fma fsgsbase hle invpcid movbe pcid pclmuldq popcnt rdtscp rtm sse4.2 tsc-deadline x2apic xsave", "apiVersion: hco.kubevirt.io/v1beta1 kind: HyperConverged metadata: name: kubevirt-hyperconverged namespace: openshift-cnv spec: obsoleteCPUs: cpuModels: 1 - \"<obsolete_cpu_1>\" - \"<obsolete_cpu_2>\" minCPUModel: \"<minimum_cpu_model>\" 2", "oc annotate node <node_name> node-labeller.kubevirt.io/skip-node=true 1", "oc get nns", "oc get nns node01 -o yaml", "apiVersion: nmstate.io/v1beta1 kind: NodeNetworkState metadata: name: node01 1 status: currentState: 2 dns-resolver: interfaces: route-rules: routes: lastSuccessfulUpdateTime: \"2020-01-31T12:14:00Z\" 3", "apiVersion: nmstate.io/v1beta1 kind: NodeNetworkConfigurationPolicy metadata: name: <br1-eth1-policy> 1 spec: nodeSelector: 2 node-role.kubernetes.io/worker: \"\" 3 desiredState: interfaces: - name: br1 description: Linux bridge with eth1 as a port 4 type: linux-bridge state: up ipv4: dhcp: true enabled: true bridge: options: stp: enabled: false port: - name: eth1", "oc apply -f <br1-eth1-policy.yaml> 1", "oc get nncp", "oc get nncp <policy> -o yaml", "oc get nnce", "oc get nnce <node>.<policy> -o yaml", "apiVersion: nmstate.io/v1beta1 kind: NodeNetworkConfigurationPolicy metadata: name: <br1-eth1-policy> 1 spec: nodeSelector: 2 node-role.kubernetes.io/worker: \"\" 3 desiredState: interfaces: - name: br1 type: linux-bridge state: absent 4 - name: eth1 5 type: ethernet 6 state: up 7 ipv4: dhcp: true 8 enabled: true 9", "oc apply -f <br1-eth1-policy.yaml> 1", "apiVersion: nmstate.io/v1beta1 kind: NodeNetworkConfigurationPolicy metadata: name: br1-eth1-policy 1 spec: nodeSelector: 2 kubernetes.io/hostname: <node01> 3 desiredState: interfaces: - name: br1 4 description: Linux bridge with eth1 as a port 5 type: linux-bridge 6 state: up 7 ipv4: dhcp: true 8 enabled: true 9 bridge: options: stp: enabled: false 10 port: - name: eth1 11", "apiVersion: nmstate.io/v1beta1 kind: NodeNetworkConfigurationPolicy metadata: name: vlan-eth1-policy 1 
spec: nodeSelector: 2 kubernetes.io/hostname: <node01> 3 desiredState: interfaces: - name: eth1.102 4 description: VLAN using eth1 5 type: vlan 6 state: up 7 vlan: base-iface: eth1 8 id: 102 9", "apiVersion: nmstate.io/v1beta1 kind: NodeNetworkConfigurationPolicy metadata: name: bond0-eth1-eth2-policy 1 spec: nodeSelector: 2 kubernetes.io/hostname: <node01> 3 desiredState: interfaces: - name: bond0 4 description: Bond with ports eth1 and eth2 5 type: bond 6 state: up 7 ipv4: dhcp: true 8 enabled: true 9 link-aggregation: mode: active-backup 10 options: miimon: '140' 11 port: 12 - eth1 - eth2 mtu: 1450 13", "apiVersion: nmstate.io/v1beta1 kind: NodeNetworkConfigurationPolicy metadata: name: eth1-policy 1 spec: nodeSelector: 2 kubernetes.io/hostname: <node01> 3 desiredState: interfaces: - name: eth1 4 description: Configuring eth1 on node01 5 type: ethernet 6 state: up 7 ipv4: dhcp: true 8 enabled: true 9", "# interfaces: - name: bond10 description: Bonding eth2 and eth3 for Linux bridge type: bond state: up link-aggregation: port: - eth2 - eth3 - name: br1 description: Linux bridge on bond type: linux-bridge state: up bridge: port: - name: bond10 #", "interfaces: - name: eth1 description: static IP on eth1 type: ethernet state: up ipv4: dhcp: false address: - ip: 192.168.122.250 1 prefix-length: 24 enabled: true", "interfaces: - name: eth1 description: No IP on eth1 type: ethernet state: up ipv4: enabled: false", "interfaces: - name: eth1 description: DHCP on eth1 type: ethernet state: up ipv4: dhcp: true enabled: true", "interfaces: - name: eth1 description: DHCP without gateway or DNS on eth1 type: ethernet state: up ipv4: dhcp: true auto-gateway: false auto-dns: false enabled: true", "interfaces: dns-resolver: config: search: - example.com - example.org server: - 8.8.8.8", "interfaces: - name: eth1 description: Static routing on eth1 type: ethernet state: up ipv4: dhcp: false address: - ip: 192.0.2.251 1 prefix-length: 24 enabled: true routes: config: - destination: 198.51.100.0/24 metric: 150 next-hop-address: 192.0.2.1 2 next-hop-interface: eth1 table-id: 254", "apiVersion: nmstate.io/v1beta1 kind: NodeNetworkConfigurationPolicy metadata: name: ens01-bridge-testfail spec: desiredState: interfaces: - name: br1 description: Linux bridge with the wrong port type: linux-bridge state: up ipv4: dhcp: true enabled: true bridge: options: stp: enabled: false port: - name: ens01", "oc apply -f ens01-bridge-testfail.yaml", "nodenetworkconfigurationpolicy.nmstate.io/ens01-bridge-testfail created", "oc get nncp", "NAME STATUS ens01-bridge-testfail FailedToConfigure", "oc get nnce", "NAME STATUS control-plane-1.ens01-bridge-testfail FailedToConfigure control-plane-2.ens01-bridge-testfail FailedToConfigure control-plane-3.ens01-bridge-testfail FailedToConfigure compute-1.ens01-bridge-testfail FailedToConfigure compute-2.ens01-bridge-testfail FailedToConfigure compute-3.ens01-bridge-testfail FailedToConfigure", "oc get nnce compute-1.ens01-bridge-testfail -o jsonpath='{.status.conditions[?(@.type==\"Failing\")].message}'", "error reconciling NodeNetworkConfigurationPolicy at desired state apply: , failed to execute nmstatectl set --no-commit --timeout 480: 'exit status 1' '' libnmstate.error.NmstateVerificationError: desired ======= --- name: br1 type: linux-bridge state: up bridge: options: group-forward-mask: 0 mac-ageing-time: 300 multicast-snooping: true stp: enabled: false forward-delay: 15 hello-time: 2 max-age: 20 priority: 32768 port: - name: ens01 description: Linux bridge with the wrong port 
ipv4: address: [] auto-dns: true auto-gateway: true auto-routes: true dhcp: true enabled: true ipv6: enabled: false mac-address: 01-23-45-67-89-AB mtu: 1500 current ======= --- name: br1 type: linux-bridge state: up bridge: options: group-forward-mask: 0 mac-ageing-time: 300 multicast-snooping: true stp: enabled: false forward-delay: 15 hello-time: 2 max-age: 20 priority: 32768 port: [] description: Linux bridge with the wrong port ipv4: address: [] auto-dns: true auto-gateway: true auto-routes: true dhcp: true enabled: true ipv6: enabled: false mac-address: 01-23-45-67-89-AB mtu: 1500 difference ========== --- desired +++ current @@ -13,8 +13,7 @@ hello-time: 2 max-age: 20 priority: 32768 - port: - - name: ens01 + port: [] description: Linux bridge with the wrong port ipv4: address: [] line 651, in _assert_interfaces_equal\\n current_state.interfaces[ifname],\\nlibnmstate.error.NmstateVerificationError:", "oc get nns control-plane-1 -o yaml", "- ipv4: name: ens1 state: up type: ethernet", "oc edit nncp ens01-bridge-testfail", "port: - name: ens1", "oc get nncp", "NAME STATUS ens01-bridge-testfail SuccessfullyConfigured", "oc logs <virt-launcher-name>", "oc get events", "oc describe vm <vm>", "oc describe vmi <vmi>", "oc describe pod virt-launcher-<name>", "oc describe dv <DataVolume>", "Status: Conditions: Last Heart Beat Time: 2020-07-15T03:58:24Z Last Transition Time: 2020-07-15T03:58:24Z Message: PVC win10-rootdisk Bound Reason: Bound Status: True Type: Bound Events: Type Reason Age From Message ---- ------ ---- ---- ------- Normal Bound 24s datavolume-controller PVC example-dv Bound", "Status: Conditions: Last Heart Beat Time: 2020-07-15T04:31:39Z Last Transition Time: 2020-07-15T04:31:39Z Message: Import Complete Reason: Completed Status: False Type: Running Events: Type Reason Age From Message ---- ------ ---- ---- ------- Warning Error 12s (x2 over 14s) datavolume-controller Unable to connect to http data source: expected status code 200, got 404. 
Status: 404 Not Found", "Status: Conditions: Last Heart Beat Time: 2020-07-15T04:31:39Z Last Transition Time: 2020-07-15T04:31:39Z Status: True Type: Ready", "spec: readinessProbe: httpGet: 1 port: 1500 2 path: /healthz 3 httpHeaders: - name: Custom-Header value: Awesome initialDelaySeconds: 120 4 periodSeconds: 20 5 timeoutSeconds: 10 6 failureThreshold: 3 7 successThreshold: 3 8", "oc create -f <file_name>.yaml", "spec: readinessProbe: initialDelaySeconds: 120 1 periodSeconds: 20 2 tcpSocket: 3 port: 1500 4 timeoutSeconds: 10 5", "oc create -f <file_name>.yaml", "spec: livenessProbe: initialDelaySeconds: 120 1 periodSeconds: 20 2 httpGet: 3 port: 1500 4 path: /healthz 5 httpHeaders: - name: Custom-Header value: Awesome timeoutSeconds: 10 6", "oc create -f <file_name>.yaml", "apiVersion: kubevirt.io/v1 kind: VirtualMachine metadata: labels: special: vm-fedora name: vm-fedora spec: template: metadata: labels: special: vm-fedora spec: domain: devices: disks: - disk: bus: virtio name: containerdisk - disk: bus: virtio name: cloudinitdisk resources: requests: memory: 1024M readinessProbe: httpGet: port: 1500 initialDelaySeconds: 120 periodSeconds: 20 timeoutSeconds: 10 failureThreshold: 3 successThreshold: 3 terminationGracePeriodSeconds: 180 volumes: - name: containerdisk containerDisk: image: kubevirt/fedora-cloud-registry-disk-demo - cloudInitNoCloud: userData: |- #cloud-config password: fedora chpasswd: { expire: False } bootcmd: - setenforce 0 - dnf install -y nmap-ncat - systemd-run --unit=httpserver nc -klp 1500 -e '/usr/bin/echo -e HTTP/1.1 200 OK\\\\n\\\\nHello World!' name: cloudinitdisk", "topk(3, sum by (name, namespace) (rate(kubevirt_vmi_vcpu_wait_seconds[6m]))) > 0 1", "topk(3, sum by (name, namespace) (rate(kubevirt_vmi_network_receive_bytes_total[6m])) + sum by (name, namespace) (rate(kubevirt_vmi_network_transmit_bytes_total[6m]))) > 0 1", "topk(3, sum by (name, namespace) (rate(kubevirt_vmi_storage_read_traffic_bytes_total[6m])) + sum by (name, namespace) (rate(kubevirt_vmi_storage_write_traffic_bytes_total[6m]))) > 0 1", "topk(3, sum by (name, namespace) (rate(kubevirt_vmi_storage_iops_read_total[6m])) + sum by (name, namespace) (rate(kubevirt_vmi_storage_iops_write_total[6m]))) > 0 1", "topk(3, sum by (name, namespace) (rate(kubevirt_vmi_memory_swap_in_traffic_bytes_total[6m])) + sum by (name, namespace) (rate(kubevirt_vmi_memory_swap_out_traffic_bytes_total[6m]))) > 0 1", "oc adm must-gather --image-stream=openshift/must-gather --image=registry.redhat.io/container-native-virtualization/cnv-must-gather-rhel8:v{HCOVersion}", "oc adm must-gather --image=registry.redhat.io/container-native-virtualization/cnv-must-gather-rhel8:v4.9.7 -- <environment_variable_1> <environment_variable_2> <script_name>", "oc adm must-gather --image=registry.redhat.io/container-native-virtualization/cnv-must-gather-rhel8:v4.9.7 -- NS=mynamespace VM=my-vm gather_vms_details 1", "oc adm must-gather --image=registry.redhat.io/container-native-virtualization/cnv-must-gather-rhel8:v4.9.7 -- PROS=3 gather", "oc adm must-gather --image=registry.redhat.io/container-native-virtualization/cnv-must-gather-rhel8:v4.9.7 -- gather_images" ]
https://docs.redhat.com/en/documentation/openshift_container_platform/4.9/html-single/virtualization/index
Chapter 1. About networking
Chapter 1. About networking Red Hat OpenShift Networking is an ecosystem of features, plugins, and advanced networking capabilities that extend Kubernetes networking with the advanced features your cluster needs to manage network traffic for one or multiple hybrid clusters. This ecosystem integrates ingress, egress, load balancing, high-performance throughput, security, and inter- and intra-cluster traffic management. Red Hat OpenShift Networking also provides role-based observability tooling to reduce its inherent complexity. The following are some of the most commonly used Red Hat OpenShift Networking features available on your cluster: Cluster Network Operator for network plugin management Primary cluster network provided by either of the following Container Network Interface (CNI) plugins: OVN-Kubernetes network plugin , which is the default CNI plugin. OpenShift SDN network plugin, which was deprecated in OpenShift 4.16 and removed in OpenShift 4.17. Important Before upgrading OpenShift Dedicated clusters that are configured with the OpenShift SDN network plugin to version 4.17, you must migrate to the OVN-Kubernetes network plugin. For more information, see Migrating from the OpenShift SDN network plugin to the OVN-Kubernetes network plugin in the Additional resources section. Additional resources OpenShift SDN CNI removal in OCP 4.17 Migrating from the OpenShift SDN network plugin to the OVN-Kubernetes network plugin
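Before planning an upgrade or migration, it is useful to confirm which network plugin a cluster currently uses. The following check is an illustrative sketch rather than a command quoted from this document; it assumes the standard cluster-scoped Network configuration object that OpenShift exposes.

```
$ oc get network.config/cluster -o jsonpath='{.status.networkType}{"\n"}'
OVNKubernetes
```

A cluster that reports OVNKubernetes already runs the default plugin; a cluster that reports OpenShiftSDN must be migrated to OVN-Kubernetes before it can be upgraded to version 4.17.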
https://docs.redhat.com/en/documentation/openshift_dedicated/4/html/networking/about-managed-networking
Chapter 1. Features
Chapter 1. Features The features added in this release, and that were not in previous releases of AMQ Streams, are outlined below. Note To view all the enhancements and bugs that are resolved in this release, see the AMQ Streams Jira project . 1.1. Kafka support in AMQ Streams 1.6.x (Long Term Support) This section describes the versions of Kafka and ZooKeeper that are supported in AMQ Streams 1.6 and the subsequent patch releases. AMQ Streams 1.6.x is the Long Term Support release for use with RHEL 7 and 8. For information on support dates for AMQ LTS versions, see the Red Hat Knowledgebase solution How long are AMQ LTS releases supported? . Only Kafka distributions built by Red Hat are supported. Previous versions of Kafka are supported in AMQ Streams 1.6.x only for upgrade purposes. For more information on supported Kafka versions, see the Red Hat AMQ 7 Component Details Page on the Customer Portal. 1.1.1. Kafka support in AMQ Streams 1.6.6 and 1.6.7 The AMQ Streams 1.6.6 and 1.6.7 releases support Apache Kafka version 2.6.3. For upgrade instructions, see AMQ Streams and Kafka upgrades . Refer to the Kafka 2.6.3 Release Notes for additional information. Kafka 2.6.3 requires ZooKeeper version 3.5.9. Therefore, you do not need to upgrade ZooKeeper when upgrading from AMQ Streams 1.6.4 / 1.6.5. 1.1.2. Kafka support in AMQ Streams 1.6.4 and 1.6.5 The AMQ Streams 1.6.4 and 1.6.5 releases support and use Apache Kafka version 2.6.2 and ZooKeeper version 3.5.9. For upgrade instructions, see AMQ Streams and Kafka upgrades . Refer to the Kafka 2.6.2 Release Notes for additional information. Kafka 2.6.2 requires ZooKeeper version 3.5.9. Therefore, you need to upgrade ZooKeeper when upgrading from AMQ Streams 1.6.0. 1.1.3. Kafka support in AMQ Streams 1.6.0 AMQ Streams 1.6.0 supports and uses Apache Kafka version 2.6.0. For upgrade instructions, see AMQ Streams and Kafka upgrades . Refer to the Kafka 2.5.0 and Kafka 2.6.0 Release Notes for additional information. Note Kafka 2.5.x is supported in AMQ Streams 1.6.0 only for upgrade purposes. Kafka 2.6.0 requires the same ZooKeeper version as Kafka 2.5.x (ZooKeeper version 3.5.7 / 3.5.8). Therefore, you do not need to upgrade ZooKeeper when upgrading from AMQ Streams 1.5. 1.2. OAuth 2.0 authorization Support for OAuth 2.0 authorization moves out of Technology Preview to a generally available component of AMQ Streams. If you are using OAuth 2.0 for token-based authentication, you can now also use OAuth 2.0 authorization rules to constrain client access to Kafka brokers. AMQ Streams supports the use of OAuth 2.0 token-based authorization through Red Hat Single Sign-On Authorization Services , which allows you to manage security policies and permissions centrally. Security policies and permissions defined in Red Hat Single Sign-On are used to grant access to resources on Kafka brokers. Users and clients are matched against policies that permit access to perform specific actions on Kafka brokers. See Using OAuth 2.0 token-based authorization . 1.3. Open Policy Agent (OPA) integration Open Policy Agent (OPA) is an open-source policy engine. You can integrate OPA with AMQ Streams to act as a policy-based authorization mechanism for permitting client operations on Kafka brokers. When a request is made from a client, OPA will evaluate the request against policies defined for Kafka access, then allow or deny the request. You can define access control for Kafka clusters, consumer groups and topics.
For instance, you can define an authorization policy that allows write access from a producer client to a specific broker topic. See KafkaAuthorizationOpa schema reference . Note Red Hat does not support the OPA server. OPA integration is only supported on OpenJDK 11.
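As an illustration of the OAuth 2.0 authorization feature described in Section 1.2, the broker-side configuration on RHEL is a set of properties in the Kafka server.properties file. The excerpt below is a minimal sketch only: the authorizer class and the strimzi.authorization.* property names are assumed from the strimzi-kafka-oauth library that AMQ Streams uses, the endpoint URL, client ID, and cluster name are placeholders, and the exact names and required values should be verified against the Using OAuth 2.0 token-based authorization guide for your release.

```
# Minimal sketch of server.properties settings for Keycloak-based authorization
# (property names assumed from strimzi-kafka-oauth; verify against the product guide)
authorizer.class.name=io.strimzi.kafka.oauth.server.authorizer.KeycloakRBACAuthorizer
strimzi.authorization.token.endpoint.uri=https://sso.example.com/auth/realms/kafka-authz/protocol/openid-connect/token
strimzi.authorization.client.id=kafka
strimzi.authorization.kafka.cluster.name=my-cluster
strimzi.authorization.delegate.to.kafka.acl=false
```

With settings like these in place, the security policies and permissions that grant or deny client operations are defined centrally in Red Hat Single Sign-On Authorization Services, not in the broker configuration itself.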
https://docs.redhat.com/en/documentation/red_hat_amq/2020.q4/html/release_notes_for_amq_streams_1.6_on_rhel/features-str
Chapter 12. Adjusting kernel parameters for database servers
Chapter 12. Adjusting kernel parameters for database servers To ensure efficient operation of database servers and databases, you must configure the required sets of kernel parameters. 12.1. Introduction to database servers A database server is a service that provides features of a database management system (DBMS). DBMS provides utilities for database administration and interacts with end users, applications, and databases. Red Hat Enterprise Linux 8 provides the following database management systems: MariaDB 10.3 MariaDB 10.5 - available since RHEL 8.4 MariaDB 10.11 - available since RHEL 8.10 MySQL 8.0 PostgreSQL 10 PostgreSQL 9.6 PostgreSQL 12 - available since RHEL 8.1.1 PostgreSQL 13 - available since RHEL 8.4 PostgreSQL 15 - available since RHEL 8.8 PostgreSQL 16 - available since RHEL 8.10 12.2. Parameters affecting performance of database applications The following kernel parameters affect performance of database applications. fs.aio-max-nr Defines the maximum number of asynchronous I/O operations the system can handle on the server. Note Raising the fs.aio-max-nr parameter produces no additional changes beyond increasing the aio limit. fs.file-max Defines the maximum number of file handles (temporary file names or IDs assigned to open files) the system supports at any instance. The kernel dynamically allocates file handles whenever a file handle is requested by an application. However, the kernel does not free these file handles when they are released by the application. It recycles these file handles instead. The total number of allocated file handles will increase over time even though the number of currently used file handles might be low. kernel.shmall Defines the total number of shared memory pages that can be used system-wide. To use the entire main memory, the value of the kernel.shmall parameter should be <= total main memory size. kernel.shmmax Defines the maximum size in bytes of a single shared memory segment that a Linux process can allocate in its virtual address space. kernel.shmmni Defines the maximum number of shared memory segments the database server is able to handle. net.ipv4.ip_local_port_range The system uses this port range for programs that connect to a database server without specifying a port number. net.core.rmem_default Defines the default receive socket memory through Transmission Control Protocol (TCP). net.core.rmem_max Defines the maximum receive socket memory through Transmission Control Protocol (TCP). net.core.wmem_default Defines the default send socket memory through Transmission Control Protocol (TCP). net.core.wmem_max Defines the maximum send socket memory through Transmission Control Protocol (TCP). vm.dirty_bytes / vm.dirty_ratio Defines a threshold in bytes / in percentage of dirty-able memory at which a process generating dirty data is started in the write() function. Note Either vm.dirty_bytes or vm.dirty_ratio can be specified at a time. vm.dirty_background_bytes / vm.dirty_background_ratio Defines a threshold in bytes / in percentage of dirty-able memory at which the kernel tries to actively write dirty data to hard-disk. Note Either vm.dirty_background_bytes or vm.dirty_background_ratio can be specified at a time. vm.dirty_writeback_centisecs Defines a time interval between periodic wake-ups of the kernel threads responsible for writing dirty data to hard-disk. This kernel parameters measures in 100th's of a second. vm.dirty_expire_centisecs Defines the time of dirty data that becomes old to be written to hard-disk. 
This kernel parameter is measured in 100ths of a second. Additional resources Dirty pagecache writeback and vm.dirty parameters
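The parameters listed in Section 12.2 are typically set persistently in a drop-in file under /etc/sysctl.d/ and loaded with the sysctl utility. The values in the following sketch are placeholders for illustration only; suitable values depend on the amount of RAM on the server and on your database vendor's tuning guidance, and they are not taken from this chapter.

```
# /etc/sysctl.d/90-database.conf - example values only, size them to your workload
fs.aio-max-nr = 1048576
fs.file-max = 6815744
kernel.shmmax = 4294967295
kernel.shmall = 2097152
kernel.shmmni = 4096
net.ipv4.ip_local_port_range = 9000 65500
net.core.rmem_default = 262144
net.core.rmem_max = 4194304
net.core.wmem_default = 262144
net.core.wmem_max = 1048576
vm.dirty_background_ratio = 10
vm.dirty_ratio = 20
```

Apply the file without a reboot with sysctl -p /etc/sysctl.d/90-database.conf, and confirm an individual setting with, for example, sysctl fs.aio-max-nr.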
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/8/html/managing_monitoring_and_updating_the_kernel/adjusting-kernel-parameters-for-database-servers_managing-monitoring-and-updating-the-kernel
Chapter 3. Project storage and build options with Red Hat Process Automation Manager
Chapter 3. Project storage and build options with Red Hat Process Automation Manager As you develop a Red Hat Process Automation Manager project, you need to be able to track the versions of your project with a version-controlled repository, manage your project assets in a stable environment, and build your project for testing and deployment. You can use Business Central for all of these tasks, or use a combination of Business Central and external tools and repositories. Red Hat Process Automation Manager supports Git repositories for project version control, Apache Maven for project management, and a variety of Maven-based, Java-based, or custom-tool-based build options. The following options are the main methods for Red Hat Process Automation Manager project versioning, storage, and building: Table 3.1. Project version control options (Git) Versioning option Description Documentation Business Central Git VFS Business Central contains a built-in Git Virtual File System (VFS) that stores all processes, rules, and other artifacts that you create in the authoring environment. Git is a distributed version control system that implements revisions as commit objects. When you commit your changes into a repository, a new commit object in the Git repository is created. When you create a project in Business Central, the project is added to the Git repository connected to Business Central. NA External Git repository If you have Red Hat Process Automation Manager projects in Git repositories outside of Business Central, you can import them into Red Hat Process Automation Manager spaces and use Git hooks to synchronize the internal and external Git repositories. Managing projects in Business Central Table 3.2. Project management options (Maven) Management option Description Documentation Business Central Maven repository Business Central contains a built-in Maven repository that organizes and builds project assets that you create in the authoring environment. Maven is a distributed build-automation tool that uses repositories to store Java libraries, plug-ins, and other build artifacts. When building projects and archetypes, Maven dynamically retrieves Java libraries and Maven plug-ins from local or remote repositories to promote shared dependencies across projects. Note For a production environment, consider using an external Maven repository configured with Business Central. NA External Maven repository If you have Red Hat Process Automation Manager projects in an external Maven repository, such as Nexus or Artifactory, you can create a settings.xml file with connection details and add that file path to the kie.maven.settings.custom property in your project standalone-full.xml file. Maven Settings Reference Packaging and deploying an Red Hat Process Automation Manager project Table 3.3. Project build options Build option Description Documentation Business Central (KJAR) Business Central builds Red Hat Process Automation Manager projects stored in either the built-in Maven repository or a configured external Maven repository. Projects in Business Central are packaged automatically as knowledge JAR (KJAR) files with all components needed for deployment when you build the projects. 
Packaging and deploying an Red Hat Process Automation Manager project Standalone Maven project (KJAR) If you have a standalone Red Hat Process Automation Manager Maven project outside of Business Central, you can edit the project pom.xml file to package your project as a KJAR file, and then add a kmodule.xml file with the KIE base and KIE session configurations needed to build the project. Packaging and deploying an Red Hat Process Automation Manager project Embedded Java application (KJAR) If you have an embedded Java application from which you want to build your Red Hat Process Automation Manager project, you can use a KieModuleModel instance to programmatically create a kmodule.xml file with the KIE base and KIE session configurations, and then add all resources in your project to the KIE virtual file system KieFileSystem to build the project. Packaging and deploying an Red Hat Process Automation Manager project CI/CD tool (KJAR) If you use a tool for continuous integration and continuous delivery (CI/CD), you can configure the tool set to integrate with your Red Hat Process Automation Manager Git repositories to build a specified project. Ensure that your projects are packaged and built as KJAR files to ensure optimal deployment. NA S2I in OpenShift (container image) If you use Red Hat Process Automation Manager on Red Hat OpenShift Container Platform, you can build your Red Hat Process Automation Manager projects as KJAR files in the typical way or use Source-to-Image (S2I) to build your projects as container images. S2I is a framework and a tool that allows you to write images that use the application source code as an input and produce a new image that runs the assembled application as an output. The main advantage of using the S2I tool for building reproducible container images is the ease of use for developers. The Red Hat Process Automation Manager images build the KJAR files as S2I automatically, using the source from a Git repository that you can specify. You do not need to create scripts or manage an S2I build. For the S2I concept: Images in the Red Hat OpenShift Container Platform product documentation. For the operator-based deployment process: Deploying an Red Hat Process Automation Manager environment on Red Hat OpenShift Container Platform 4 using Operators . In the KIE Server settings, add a KIE Server instance and then click Set Immutable server configuration to configure the source Git repository for an S2I deployment.
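As a sketch of the standalone Maven project option described in Table 3.3, packaging a project as a KJAR means declaring the kjar packaging type and enabling the kie-maven-plugin as a build extension in pom.xml. The project coordinates below are examples, and the ${kie.version} placeholder stands for the plugin version that matches your Red Hat Process Automation Manager release (for example, taken from the product BOM); none of these values are quoted from this document.

```xml
<!-- Example pom.xml fragment for building a project as a KJAR -->
<project>
  <groupId>com.example</groupId>
  <artifactId>my-kjar-project</artifactId>
  <version>1.0.0</version>
  <packaging>kjar</packaging>

  <build>
    <plugins>
      <plugin>
        <groupId>org.kie</groupId>
        <artifactId>kie-maven-plugin</artifactId>
        <version>${kie.version}</version> <!-- set to the version aligned with your product release -->
        <extensions>true</extensions>
      </plugin>
    </plugins>
  </build>
</project>
```

A kmodule.xml file under src/main/resources/META-INF declares the KIE base and KIE session configurations; an empty <kmodule/> element is enough to build with the default configuration.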
https://docs.redhat.com/en/documentation/red_hat_process_automation_manager/7.13/html/designing_your_decision_management_architecture_for_red_hat_process_automation_manager/project-storage-version-build-options-ref_decision-management-architecture
Chapter 3. Active-Passive Disaster Recovery
Chapter 3. Active-Passive Disaster Recovery This chapter provides instructions to configure Red Hat Virtualization for disaster recovery using the active-passive disaster recovery solution. 3.1. Active-Passive Overview Red Hat Virtualization supports an active-passive disaster recovery solution that can span two sites. If the primary site becomes unavailable, the Red Hat Virtualization environment can be forced to fail over to the secondary (backup) site. The failover is achieved by configuring a Red Hat Virtualization environment in the secondary site, which requires: An active Red Hat Virtualization Manager. A data center and clusters. Networks with the same general connectivity as the primary site. Active hosts capable of running critical virtual machines after failover. Important You must ensure that the secondary environment has enough resources to run the failed over virtual machines, and that both the primary and secondary environments have identical Manager versions, data center and cluster compatibility levels, and PostgreSQL versions. The minimum supported compatibility level is 4.2. Storage domains that contain virtual machine disks and templates in the primary site must be replicated. These replicated storage domains must not be attached to the secondary site. The failover and failback process must be executed manually. To do this you need to create Ansible playbooks to map entities between the sites, and to manage the failover and failback processes. The mapping file instructs the Red Hat Virtualization components where to fail over or fail back to on the target site. The following diagram describes an active-passive setup where the machine running Red Hat Ansible Engine is highly available, and has access to the oVirt.disaster-recovery Ansible role, configured playbooks, and mapping file. The storage domains that store the virtual machine disks in Site A is replicated. Site B has no virtual machines or attached storage domains. Figure 3.1. Active-Passive Configuration When the environment fails over to Site B, the storage domains are first attached and activated in Site B's data center, and then the virtual machines are registered. Highly available virtual machines will fail over first. Figure 3.2. Failover to Backup Site You will need to manually fail back to the primary site (Site A) when it is running again. 3.1.1. Network Considerations You must ensure that the same general connectivity exists in the primary and secondary sites. If you have multiple networks or multiple data centers then you must use an empty network mapping in the mapping file to ensure that all entities register on the target during failover. See Appendix A, Mapping File Attributes for more information. 3.1.2. Storage Considerations The storage domain for Red Hat Virtualization can be made of either block devices (SAN - iSCSI or FCP) or a file system (NAS - NFS, GlusterFS, or other POSIX compliant file systems). For more information about Red Hat Virtualization storage see Storage in the Administration Guide . Important Local storage domains are unsupported for disaster recovery. A primary and secondary storage replica is required. The primary storage domain's block devices or shares that contain virtual machine disks or templates must be replicated. The secondary storage must not be attached to any data center, and will be added to the backup site's data center during failover. 
If you are implementing disaster recovery using self-hosted engine, ensure that the storage domain used by the self-hosted engine Manager virtual machine does not contain virtual machine disks because the storage domain will not be failed over. All storage solutions that have replication options that are supported by Red Hat Enterprise Linux 7 and later can be used. 3.2. Create the Required Ansible Playbooks Ansible is used to initiate and manage the disaster recovery failover and failback. You therefore need to create Ansible playbooks to facilitate this. For more information about creating Ansible playbooks, see the Ansible documentation . Prerequisites : Fully functioning Red Hat Virtualization environment in the primary site. A backup environment in the secondary site with the same data center and cluster compatibility level as the primary environment. The backup environment must have: A Red Hat Virtualization Manager. Active hosts capable of running the virtual machines and connecting to the replicated storage domains. A data center with clusters. Networks with the same general connectivity as the primary site. Replicated storage. See Section 3.1.2, "Storage Considerations" for more information. Note The replicated storage that contains virtual machines and templates must not be attached to the secondary site. The oVirt.disaster-recovery package must be installed on the highly available Red Hat Ansible Engine machine that will automate the failover and failback. The machine running Red Hat Ansible Engine must be able to use SSH to connect to the Manager in the primary and secondary site. It is also recommended to create environment properties that exist in the primary site, such as affinity groups, affinity labels, users, on the secondary site. Note The default behaviour of the Ansible playbooks can be configured in the /usr/share/ansible/roles/oVirt.disaster-recovery/defaults/main.yml file. The following playbooks must be created: The playbook that creates the file to map entities on the primary and secondary site. The failover playbook. The failback playbook. You can also create an optional playbook to clean the primary site before failing back. Create the playbooks and associated files in /usr/share/ansible/roles/oVirt.disaster-recovery/ on the Ansible machine that is managing the failover and failback. If you have multiple Ansible machines that can manage it, ensure that you copy the files to all of them. You can test the configuration using one or more of the testing procedures in Appendix B, Testing the Active-Passive Configuration . 3.2.1. Using the ovirt-dr Script for Ansible Tasks The ovirt-dr script is located in /usr/share/ansible/roles/oVirt.disaster-recovery/files . This script simplifies the following Ansible tasks: Generating a var mapping file of the primary and secondary sites for failover and fallback Validating the var mapping file Executing failover on a target site Executing failback from a target site to a source site Usage You can set the parameters for the script's actions in the configuration file, /usr/share/ansible/roles/oVirt.disaster-recovery/files/dr.conf . You can change the location of the configuration file with the --conf-file option. You can set the location and level of logging detail with the --log-file and --log-level options. 3.2.2. Create the Playbook to Generate the Mapping File The Ansible playbook used to generate the mapping file will prepopulate the file with the target (primary) site's entities. 
You then need to manually add the backup site's entities, such as IP addresses, cluster, affinity groups, affinity label, external LUN disks, authorization domains, roles, and vNIC profiles, to the file. Important The mapping file generation will fail if you have any virtual machine disks on the self-hosted engine's storage domain. Also, the mapping file will not contain an attribute for this storage domain because it must not be failed over. In this example the Ansible playbook is named dr-rhv-setup.yml , and is executed on the Manager machine in the primary site. Creating the mapping file : Create an Ansible playbook to generate the mapping file. For example: --- - name: Generate mapping hosts: localhost connection: local vars: site: https://example.engine.redhat.com/ovirt-engine/api username: admin@internal password: my_password ca: /etc/pki/ovirt-engine/ca.pem var_file: disaster_recovery_vars.yml roles: - oVirt.disaster-recovery Note For extra security you can encrypt your Manager password in a .yml file. See Using Ansible Roles to Configure Red Hat Virtualization in the Administration Guide for more information. Run the Ansible command to generate the mapping file. The primary site's configuration will be prepopulated. Configure the mapping file ( disaster_recovery_vars.yml in this case) with the backup site's configuration. See Appendix A, Mapping File Attributes for more information about the mapping file's attributes. If you have multiple Ansible machines that can perform the failover and failback, then copy the mapping file to all relevant machines. 3.2.3. Create the Failover and Failback Playbooks Ensure that you have the mapping file that you created and configured, in this case disaster_recovery_vars.yml , because this must be added to the playbooks. You can define a password file (for example passwords.yml ) to store the Manager passwords of the primary and secondary site. For example: Note For extra security you can encrypt the password file. However, you will need to use the --ask-vault-pass parameter when running the playbook. See Using Ansible Roles to Configure Red Hat Virtualization in the Administration Guide for more information. In these examples the Ansible playbooks to fail over and fail back are named dr-rhv-failover.yml and dr-rhv-failback.yml . Create the following Ansible playbook to failover the environment: Create the following Ansible playbook to failback the environment: 3.2.4. Create the Playbook to Clean the Primary Site Before you failback to the primary site, you need to ensure that the primary site is cleaned of all storage domains to be imported. This can be performed manually on the Manager, or optionally you can create an Ansible playbook to perform it for you. The Ansible playbook to clean the primary site is named dr-cleanup.yml in this example, and it uses the mapping file created in Section 3.2.2, "Create the Playbook to Generate the Mapping File" : 3.3. Execute a Failover Prerequisites : The Manager and hosts in the secondary site are running. Replicated storage domains are in read/write mode. No replicated storage domains are attached to the secondary site. A machine running Red Hat Ansible Engine that can connect via SSH to the Manager in the primary and secondary site, with the required packages and files: The oVirt.disaster-recovery package. The mapping file and required failover playbook. Important Sanlock must release all storage locks from the replicated storage domains before the failover process starts. 
These locks should be released automatically approximately 80 seconds after the disaster occurs. This example uses the dr-rhv-failover.yml playbook created earlier. Executing a failover : Run the failover playbook with the following command: When the primary site becomes active, ensure that you clean the environment before failing back. See Section 3.4, "Clean the Primary Site" for more information. 3.4. Clean the Primary Site After you fail over, you must clean the environment in the primary site before failing back to it: Reboot all hosts in the primary site. Ensure the secondary site's storage domains are in read/write mode and the primary site's storage domains are in read only mode. Synchronize the replication from the secondary site's storage domains to the primary site's storage domains. Clean the primary site of all storage domains to be imported. This can be done manually in the Manager, or by creating and running an Ansible playbook. See Detaching a Storage Domain in the Administration Guide for manual instructions, or Section 3.2.4, "Create the Playbook to Clean the Primary Site" for information to create the Ansible playbook. This example uses the dr-cleanup.yml playbook created earlier to clean the environment. Cleaning the primary site: Clean up the primary site with the following command: You can now failback the environment to the primary site. See Section 3.5, "Execute a Failback" for more information. 3.5. Execute a Failback Once you fail over, you can fail back to the primary site when it is active and you have performed the necessary steps to clean the environment. Prerequisites : The environment in the primary site is running and has been cleaned, see Section 3.4, "Clean the Primary Site" for more information. The environment in the secondary site is running, and has active storage domains. A machine running Red Hat Ansible Engine that can connect via SSH to the Manager in the primary and secondary site, with the required packages and files: The oVirt.disaster-recovery package. The mapping file and required failback playbook. This example uses the dr-rhv-failback.yml playbook created earlier. Executing a failback : Run the failback playbook with the following command: Enable replication from the primary storage domains to the secondary storage domains.
[ "./ovirt-dr generate/validate/failover/failback [--conf-file=dr.conf] [--log-file=ovirt-dr- log_number .log] [--log-level= DEBUG/INFO/WARNING/ERROR ]", "--- - name: Generate mapping hosts: localhost connection: local vars: site: https://example.engine.redhat.com/ovirt-engine/api username: admin@internal password: my_password ca: /etc/pki/ovirt-engine/ca.pem var_file: disaster_recovery_vars.yml roles: - oVirt.disaster-recovery", "ansible-playbook dr-rhv-setup.yml --tags \"generate_mapping\"", "--- This file is in plain text, if you want to encrypt this file, please execute following command: # USD ansible-vault encrypt passwords.yml # It will ask you for a password, which you must then pass to ansible interactively when executing the playbook. # USD ansible-playbook myplaybook.yml --ask-vault-pass # dr_sites_primary_password: primary_password dr_sites_secondary_password: secondary_password", "--- - name: Failover RHV hosts: localhost connection: local vars: dr_target_host: secondary dr_source_map: primary vars_files: - disaster_recovery_vars.yml - passwords.yml roles: - oVirt.disaster-recovery", "--- - name: Failback RHV hosts: localhost connection: local vars: dr_target_host: primary dr_source_map: secondary vars_files: - disaster_recovery_vars.yml - passwords.yml roles: - oVirt.disaster-recovery", "--- - name: clean RHV hosts: localhost connection: local vars: dr_source_map: primary vars_files: - disaster_recovery_vars.yml roles: - oVirt.disaster-recovery", "ansible-playbook dr-rhv-failover.yml --tags \"fail_over\"", "ansible-playbook dr-cleanup.yml --tags \"clean_engine\"", "ansible-playbook dr-rhv-failback.yml --tags \"fail_back\"" ]
https://docs.redhat.com/en/documentation/red_hat_virtualization/4.3/html/disaster_recovery_guide/active_passive
3.5. Setting Cgroup Parameters
3.5. Setting Cgroup Parameters Modify the parameters of the control groups by editing the /etc/cgconfig.conf configuration file, or by using the cgset command. Changes made to /etc/cgconfig.conf are preserved after reboot, while cgset changes the cgroup parameters only for the current session. Modifying /etc/cgconfig.conf You can set the controller parameters in the Groups section of /etc/cgconfig.conf . Group entries are defined using the following syntax: Replace name with the name of your cgroup, controller stands for the name of the controller you wish to modify. You should modify only controllers you mounted yourself, not any of the default controllers mounted automatically by systemd . Replace param_name and param_value with the controller parameter you wish to change and its new value. Note that the permissions section is optional. To define permissions for a group entry, use the following syntax: Note Restart the cgconfig service for the changes in the /etc/cgconfig.conf to take effect. Restarting this service rebuilds hierarchies specified in the configuration file but does not affect all mounted hierarchies. You can restart a service by executing the systemctl restart command, however, it is recommended to first stop the cgconfig service: Then open and edit the configuration file. After saving your changes, you can start cgconfig again with the following command: Using the cgset Command Set controller parameters by running the cgset command from a user account with permission to modify the relevant cgroup. Use this only for controllers you mounted manually. The syntax for cgset is: where: parameter is the parameter to be set, which corresponds to the file in the directory of the given cgroup; value is the value for the parameter; path_to_cgroup is the path to the cgroup relative to the root of the hierarchy . The values that can be set with cgset might depend on values set higher in a particular hierarchy. For example, if group1 is limited to use only CPU 0 on a system, you cannot set group1/subgroup1 to use CPUs 0 and 1, or to use only CPU 1. It is also possible use cgset to copy the parameters of one cgroup into another, existing cgroup. The syntax to copy parameters with cgset is: where: path_to_source_cgroup is the path to the cgroup whose parameters are to be copied, relative to the root group of the hierarchy; path_to_target_cgroup is the path to the destination cgroup, relative to the root group of the hierarchy.
[ "group name { [ permissions ] controller { param_name = param_value ; ... } ... }", "perm { task { uid = task_user ; gid = task_group ; } admin { uid = admin_name ; gid = admin_group ; } }", "~]# systemctl stop cgconfig", "~]# systemctl start cgconfig", "cgset -r parameter = value path_to_cgroup", "cgset --copy-from path_to_source_cgroup path_to_target_cgroup" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/resource_management_guide/sec-Setting_Cgroup_Parameters
Chapter 2. Ceph Dashboard installation and access
Chapter 2. Ceph Dashboard installation and access As a system administrator, you can access the dashboard with the credentials provided on bootstrapping the cluster. Cephadm installs the dashboard by default. Following is an example of the dashboard URL: Note Update the browser and clear the cookies prior to accessing the dashboard URL. The following are the Cephadm bootstrap options that are available for the Ceph dashboard configurations: [--initial-dashboard-user INITIAL_DASHBOARD_USER ] - Use this option while bootstrapping to set initial-dashboard-user. [--initial-dashboard-password INITIAL_DASHBOARD_PASSWORD ] - Use this option while bootstrapping to set initial-dashboard-password. [--ssl-dashboard-port SSL_DASHBOARD_PORT ] - Use this option while bootstrapping to set a custom dashboard port other than the default 8443. [--dashboard-key DASHBOARD_KEY ] - Use this option while bootstrapping to set a custom key for SSL. [--dashboard-crt DASHBOARD_CRT ] - Use this option while bootstrapping to set a custom certificate for SSL. [--skip-dashboard] - Use this option while bootstrapping to deploy Ceph without the dashboard. [--dashboard-password-noupdate] - Use this option while bootstrapping if you used the above two options and do not want to reset the password at the first login. [--allow-fqdn-hostname] - Use this option while bootstrapping to allow a hostname that is fully qualified. [--skip-prepare-host] - Use this option while bootstrapping to skip preparing the host. Note To avoid connectivity issues with the dashboard-related external URL, use the fully qualified domain names (FQDN) for hostnames, for example, host01.ceph.redhat.com . Note Open the Grafana URL directly in the client internet browser and accept the security exception to see the graphs on the Ceph dashboard. Reload the browser to view the changes. Example Note While bootstrapping the storage cluster using cephadm , you can use the --image option for either custom container images or local container images. Note You have to change the password the first time you log into the dashboard with the credentials provided on bootstrapping only if the --dashboard-password-noupdate option is not used while bootstrapping. You can find the Ceph dashboard credentials in the /var/log/ceph/cephadm.log file. Search with the "Ceph Dashboard is now available at" string. This section covers the following tasks: Network port requirements for Ceph dashboard . Accessing the Ceph dashboard . Expanding the cluster on the Ceph dashboard . Toggling Ceph dashboard features . Understanding the landing page of the Ceph dashboard . Enabling Red Hat Ceph Storage Dashboard manually . Changing the dashboard password using the Ceph dashboard . Changing the Ceph dashboard password using the command line interface . Setting admin user password for Grafana . Creating an admin account for syncing users to the Ceph dashboard . Syncing users to the Ceph dashboard using the Red Hat Single Sign-On . Enabling single sign-on for the Ceph dashboard . Disabling single sign-on for the Ceph dashboard . 2.1. Network port requirements for Ceph Dashboard The Ceph dashboard components use certain TCP network ports which must be accessible. By default, the network ports are automatically opened in firewalld during installation of Red Hat Ceph Storage. Table 2.1.
TCP Port Requirements Port Use Originating Host Destination Host 8443 The dashboard web interface IP addresses that need access to Ceph Dashboard UI and the host under Grafana server, since the AlertManager service can also initiate connections to the Dashboard for reporting alerts. The Ceph Manager hosts. 3000 Grafana IP addresses that need access to Grafana Dashboard UI and all Ceph Manager hosts and Grafana server. The host or hosts running Grafana server. 2049 NFS-Ganesha IP addresses that need access to NFS. The IP addresses that provide NFS services. 9095 Default Prometheus server for basic Prometheus graphs IP addresses that need access to Prometheus UI and all Ceph Manager hosts and Grafana server or Hosts running Prometheus. The host or hosts running Prometheus. 9093 Prometheus Alertmanager IP addresses that need access to Alertmanager Web UI and all Ceph Manager hosts and Grafana server or Hosts running Prometheus. All Ceph Manager hosts and the host under Grafana server. 9094 Prometheus Alertmanager for configuring a highly available cluster made from multiple instances All Ceph Manager hosts and the host under Grafana server. Prometheus Alertmanager High Availability (peer daemon sync), so both src and dst should be hosts running Prometheus Alertmanager. 9100 The Prometheus node-exporter daemon Hosts running Prometheus that need to view Node Exporter metrics Web UI and All Ceph Manager hosts and Grafana server or Hosts running Prometheus. All storage cluster hosts, including MONs, OSDS, Grafana server host. 9283 Ceph Manager Prometheus exporter module Hosts running Prometheus that need access to Ceph Exporter metrics Web UI and Grafana server. All Ceph Manager hosts. Additional Resources For more information, see the Red Hat Ceph Storage Installation Guide . For more information, see Using and configuring firewalls in Configuring and managing networking . 2.2. Accessing the Ceph dashboard You can access the Ceph dashboard to administer and monitor your Red Hat Ceph Storage cluster. Prerequisites Successful installation of Red Hat Ceph Storage Dashboard. NTP is synchronizing clocks properly. Procedure Enter the following URL in a web browser: Syntax Replace: HOST_NAME with the fully qualified domain name (FQDN) of the active manager host. PORT with port 8443 Example You can also get the URL of the dashboard by running the following command in the Cephadm shell: Example This command will show you all endpoints that are currently configured. Look for the dashboard key to obtain the URL for accessing the dashboard. On the login page, enter the username admin and the default password provided during bootstrapping. You have to change the password the first time you log in to the Red Hat Ceph Storage dashboard. After logging in, the dashboard default landing page is displayed, which provides details, a high-level overview of status, performance, inventory, and capacity metrics of the Red Hat Ceph Storage cluster. Figure 2.1. Ceph dashboard landing page Click the menu icon ( ) on the dashboard landing page to collapse or display the options in the vertical menu. Additional Resources For more information, see Changing the dashboard password using the Ceph dashboard in the Red Hat Ceph Storage Dashboard guide . 2.3. 
Expanding the cluster on the Ceph dashboard You can use the dashboard to expand the Red Hat Ceph Storage cluster for adding hosts, adding OSDs, and creating services such as Alertmanager, Cephadm-exporter, CephFS-mirror, Grafana, ingress, MDS, NFS, node-exporter, Prometheus, RBD-mirror, and Ceph Object Gateway. Once you bootstrap a new storage cluster, the Ceph Monitor and Ceph Manager daemons are created and the cluster is in HEALTH_WARN state. After creating all the services for the cluster on the dashboard, the health of the cluster changes from HEALTH_WARN to HEALTH_OK status. Prerequisites Bootstrapped storage cluster. See Bootstrapping a new storage cluster section in the Red Hat Ceph Storage Installation Guide for more details. At least cluster-manager role for the user on the Red Hat Ceph Storage Dashboard. See the User roles and permissions on the Ceph dashboard section in the Red Hat Ceph Storage Dashboard Guide for more details. Procedure Copy the admin key from the bootstrapped host to other hosts: Syntax Example Log in to the dashboard with the default credentials provided during bootstrap. Change the password and log in to the dashboard with the new password . On the landing page, click Expand Cluster . Figure 2.2. Expand cluster Add hosts: In the Add Hosts window, click +Add . Provide the hostname. This is the same as the hostname that was provided while copying the key from the bootstrapped host. Note You can use the tool tip in the Add Hosts dialog box for more details. Optional: Provide the respective IP address of the host. Optional: Select the labels for the hosts on which the services are going to be created. Click Add Host . Follow the above steps for all the hosts in the storage cluster. In the Add Hosts window, click Next . Create OSDs: In the Create OSDs window, for Primary devices, click +Add . In the Primary Devices window, filter for the device and select the device. Click Add . Optional: In the Create OSDs window, if you have any shared devices such as WAL or DB devices, then add the devices. Optional: Select the Encryption check box to enable encryption. In the Create OSDs window, click Next . Create services: In the Create Services window, click +Create . In the Create Service dialog box, select the type of the service from the drop-down. Provide the service ID, a unique name for the service. Provide the placement by hosts or label. Select the hosts. Provide the number of daemons or services that need to be deployed. Click Create Service . In the Create Services window, click Next . Review the Cluster Resources , Hosts by Services , Host Details . If you want to edit any parameter, click Back and follow the above steps. Figure 2.3. Review cluster Click Expand Cluster . You get a notification that the cluster expansion was successful. The cluster health changes to HEALTH_OK status on the dashboard. Verification Log in to the cephadm shell: Example Run the ceph -s command. Example The health of the cluster is HEALTH_OK . Additional Resources See the User roles and permissions on the Ceph dashboard section in the Red Hat Ceph Storage Dashboard Guide for more details. See the Red Hat Ceph Storage Installation Guide for more details. 2.4. Toggling Ceph dashboard features You can customize the Red Hat Ceph Storage dashboard components by enabling or disabling features on demand. All features are enabled by default. When disabling a feature, the web-interface elements become hidden and the associated REST API end-points reject any further requests for that feature.
Enabling and disabling dashboard features can be done from the command-line interface or the web interface. Available features: Ceph Block Devices: Image management, rbd Mirroring, mirroring Ceph Filesystem, cephfs Ceph Object Gateway, rgw NFS Ganesha gateway, nfs Note By default, the Ceph Manager is collocated with the Ceph Monitor. Note You can disable multiple features at once. Important Once a feature is disabled, it can take up to 20 seconds to reflect the change in the web interface. Prerequisites Installation and configuration of the Red Hat Ceph Storage dashboard software. User access to the Ceph Manager host or the dashboard web interface. Root level access to the Ceph Manager host. Procedure To toggle the dashboard features from the dashboard web interface: On the dashboard landing page, navigate to Cluster drop-down menu. Select Manager Modules , and then select Dashboard . In the Edit Manager module page, you can enable or disable the dashboard features by checking or unchecking the selection box to the feature name. Once the selections have been made, scroll down and click Update . To toggle the dashboard features from the command-line interface: Log in to the Cephadm shell: Example List the feature status: Example Disable a feature: This example disables the Ceph Object Gateway feature. Enable a feature: This example enables the Ceph Filesystem feature. 2.5. Understanding the landing page of the Ceph dashboard The landing page displays an overview of the entire Ceph cluster using navigation bars and individual panels. The navigation bar provides the following options: Messages about tasks and notifications. Link to the documentation, Ceph Rest API, and details about the Red Hat Ceph Storage Dashboard. Link to user management and telemetry configuration. Link to change password and sign out of the dashboard. Figure 2.4. Navigation bar Apart from that, the individual panel displays specific information about the state of the cluster. Categories The landing page organizes panels into the following three categories: Status Capacity Performance Figure 2.5. Ceph dashboard landing page Status panel The status panels display the health of the cluster and host and daemon states. Cluster Status : Displays the current health status of the Ceph storage cluster. Hosts : Displays the total number of hosts in the Ceph storage cluster. Monitors : Displays the number of Ceph Monitors and the quorum status. OSDs : Displays the total number of OSDs in the Ceph Storage cluster and the number that are up , and in . Managers : Displays the number and status of the Manager Daemons. Object Gateways : Displays the number of Object Gateways in the Ceph storage cluster. Metadata Servers : Displays the number and status of metadata servers for Ceph Filesystems (CephFS). Capacity panel The capacity panel displays storage usage metrics. Raw Capacity : Displays the utilization and availability of the raw storage capacity of the cluster. Objects : Displays the total number of objects in the pools and a graph dividing objects into states of Healthy , Misplaced , Degraded , or Unfound . PG Status : Displays the total number of Placement Groups and a graph dividing PGs into states of Clean , Working , Warning , or Unknown . To simplify display of PG states Working and Warning actually each encompass multiple states. 
The Working state includes PGs with any of these states: activating backfill_wait backfilling creating deep degraded forced_backfill forced_recovery peering peered recovering recovery_wait repair scrubbing snaptrim snaptrim_wait The Warning state includes PGs with any of these states: backfill_toofull backfill_unfound down incomplete inconsistent recovery_toofull recovery_unfound remapped snaptrim_error stale undersized Pools : Displays the number of storage pools in the Ceph cluster. PGs per OSD : Displays the number of placement groups per OSD. Performance panel The performance panel displays information related to data transfer speeds. Client Read/Write : Displays total input/output operations per second, reads per second, and writes per second. Client Throughput : Displays total client throughput, read throughput, and write throughput. Recovery Throughput : Displays the rate of cluster healing and balancing operations. For example, the status of any background data that may be moving due to a loss of disk is displayed. Scrubbing : Displays whether Ceph is scrubbing data to verify its integrity. Additional Resources See the Monitoring the cluster on the Ceph dashboard section in the Red Hat Ceph Storage Dashboard guide for more information. 2.6. Changing the dashboard password using the Ceph dashboard By default, the password for accessing the dashboard is randomly generated by the system while bootstrapping the cluster. You have to change the password the first time you log in to the Red Hat Ceph Storage dashboard. You can change the password for the admin user using the dashboard. Prerequisites A running Red Hat Ceph Storage cluster. Procedure Log in to the dashboard: Click the Dashboard Settings icon and then click User management . Figure 2.6. User management To change the password of admin , click its row. From the Edit drop-down menu, select Edit . In the Edit User window, enter the new password, and change the other parameters, and then click Edit User . Figure 2.7. Edit user management You will be logged out and redirected to the log-in screen. A notification appears confirming the password change. 2.7. Changing the Ceph dashboard password using the command line interface If you have forgotten your Ceph dashboard password, you can change the password using the command line interface. Prerequisites A running Red Hat Ceph Storage cluster. Root-level access to the host on which the dashboard is installed. Procedure Log into the Cephadm shell: Example Create the dashboard_password.yml file: Example Edit the file and add the new dashboard password: Example Reset the dashboard password: Syntax Example Verification Log in to the dashboard with your new password. 2.8. Setting admin user password for Grafana By default, cephadm does not create an admin user for Grafana. With the Ceph Orchestrator, you can create an admin user and set the password. With these credentials, you can log in to the storage cluster's Grafana URL with the given password for the admin user. Prerequisites A running Red Hat Ceph Storage cluster with the monitoring stack installed. Root-level access to the cephadm host. The dashboard module enabled. Procedure As a root user, create a grafana.yml file and provide the following details: Syntax Example Mount the grafana.yml file under a directory in the container: Example Note Every time you exit the shell, you have to mount the file in the container before deploying the daemon.
Optional: Check if the dashboard Ceph Manager module is enabled: Example Optional: Enable the dashboard Ceph Manager module: Example Apply the specification using the orch command: Syntax Example Redeploy the grafana service: Example This creates an admin user called admin with the given password and the user can log in to the Grafana URL with these credentials. Verification: Log in to Grafana with the credentials: Syntax Example 2.9. Enabling Red Hat Ceph Storage Dashboard manually If you have installed a Red Hat Ceph Storage cluster by using the --skip-dashboard option during bootstrap, you can see that the dashboard URL and credentials are not available in the bootstrap output. You can enable the dashboard manually using the command-line interface. Although the monitoring stack components such as Prometheus, Grafana, Alertmanager, and node-exporter are deployed, they are disabled and you have to enable them manually. Prerequisite A running Red Hat Ceph Storage cluster installed with the --skip-dashboard option during bootstrap. Root-level access to the host on which the dashboard needs to be enabled. Procedure Log into the Cephadm shell: Example Check the Ceph Manager services: Example You can see that the Dashboard URL is not configured. Enable the dashboard module: Example Create the self-signed certificate for the dashboard access: Example Note You can disable the certificate verification to avoid certification errors. Check the Ceph Manager services: Example Create the admin user and password to access the Red Hat Ceph Storage dashboard: Syntax Example Enable the monitoring stack. See the Enabling monitoring stack section in the Red Hat Ceph Storage Dashboard Guide for details. Additional Resources See the Deploying the monitoring stack using the Ceph Orchestrator section in the Red Hat Ceph Storage Operations Guide . 2.10. Creating an admin account for syncing users to the Ceph dashboard You have to create an admin account to synchronize users to the Ceph dashboard. After creating the account, use Red Hat Single Sign-On (SSO) to synchronize users to the Ceph dashboard. See the Syncing users to the Ceph dashboard using Red Hat Single Sign-On section in the Red Hat Ceph Storage Dashboard Guide . Prerequisites A running Red Hat Ceph Storage cluster. Dashboard is installed. Admin level access to the dashboard. Users are added to the dashboard. Root-level access on all the hosts. Red Hat Single Sign-On installed from a ZIP file. See the Installing Red Hat Single Sign-On from a zip file for additional information. Procedure Download the Red Hat Single Sign-On 7.4.0 Server on the system where Red Hat Ceph Storage is installed. Unzip the folder: Navigate to the standalone/configuration directory and open the standalone.xml file for editing: Replace all instances of localhost and two instances of 127.0.0.1 with the IP address of the machine where Red Hat SSO is installed. Optional: For Red Hat Enterprise Linux 8, users might get Certificate Authority (CA) issues. Import the custom certificates from the CA and move them into the keystore of the exact Java version. Example To start the server from the bin directory of the rh-sso-7.4 folder, run the standalone boot script: Create the admin account at https:// IP_ADDRESS :8080/auth with a username and password: Note You have to create an admin account only the first time that you log into the console. Log into the admin console with the credentials created.
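The server preparation steps in this procedure can also be scripted. The following is a rough sketch, assuming the archive name and extracted directory from the example above; it substitutes a sed command for the manual standalone.xml edit, and RHSSO_IP is a placeholder for the IP address of the Red Hat SSO machine.

```bash
# Sketch of preparing and starting the Red Hat Single Sign-On server.
RHSSO_IP=192.0.2.10   # placeholder IP address of the RH-SSO host

unzip rhsso-7.4.0.zip
cd rh-sso-7.4/standalone/configuration   # extracted directory name may differ

# Replace localhost and 127.0.0.1 with the host IP address in standalone.xml.
sed -i "s/localhost/${RHSSO_IP}/g; s/127\.0\.0\.1/${RHSSO_IP}/g" standalone.xml

# Start the server, then create the admin account in the web console
# at the console URL given in the procedure above (port 8080).
cd ../../bin
./standalone.sh
```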
Additional Resources For adding roles for users on the dashboard, see the Creating roles on the Ceph dashboard section in the Red Hat Ceph Storage Dashboard Guide for more information. For creating users on the dashboard, see the Creating users on the Ceph dashboard section in the Red Hat Ceph Storage Dashboard Guide . 2.11. Syncing users from Red Hat Sign-On to the Ceph Dashboard You can use Red Hat Single Sign-on (SSO) with Lightweight Directory Access Protocol (LDAP) integration to synchronize users to the Red Hat Ceph Storage Dashboard. The users are added to specific realms in which they can access the dashboard through SSO without any additional requirements of a password. Prerequisites A running Red Hat Ceph Storage cluster. Ceph Dashboard is installed. Admin level access to the dashboard. Users added to the dashboard. See the Creating users on Ceph dashboard section in the Red Hat Ceph Storage Dashboard Guide . Root-level access on all the hosts. Admin account created for syncing users. See the Creating an admin account for syncing users to the Ceph dashboard section in the Red Hat Ceph Storage Dashboard Guide . Procedure To create a realm, click the Master drop-down menu. In this realm, you can provide access to users and applications. In the Add Realm window, enter a case-sensitive realm name and set the parameter Enabled to ON and click Create : In the Realm Settings tab, set the following parameters and click Save : Enabled - ON User-Managed Access - ON Make a note of the link address of SAML 2.0 Identity Provider Metadata to paste in Client Settings . In the Clients tab, click Create : In the Add Client window, set the following parameters and click Save : Client ID - BASE_URL:8443/auth/saml2/metadata Example https://example.ceph.redhat.com:8443/auth/saml2/metadata Client Protocol - saml In the Client window, under Settings tab, set the following parameters: Table 2.2. Client Settings tab Name of the parameter Syntax Example Client ID BASE_URL:8443/auth/saml2/metadata https://example.ceph.redhat.com:8443/auth/saml2/metadata Enabled ON ON Client Protocol saml saml Include AuthnStatement ON ON Sign Documents ON ON Signature Algorithm RSA_SHA1 RSA_SHA1 SAML Signature Key Name KEY_ID KEY_ID Valid Redirect URLs BASE_URL:8443/* https://example.ceph.redhat.com:8443/* Base URL BASE_URL:8443 https://example.ceph.redhat.com:8443/ Master SAML Processing URL https://localhost:8080/auth/realms/ REALM_NAME /protocol/saml/descriptor https://localhost:8080/auth/realms/Ceph_LDAP/protocol/saml/descriptor Note Paste the link of SAML 2.0 Identity Provider Metadata from Realm Settings tab. Under Fine Grain SAML Endpoint Configuration, set the following parameters and click Save : Table 2.3. Fine Grain SAML configuration Name of the parameter Syntax Example Assertion Consumer Service POST Binding URL BASE_URL:8443/#/dashboard https://example.ceph.redhat.com:8443/#/dashboard Assertion Consumer Service Redirect Binding URL BASE_URL:8443/#/dashboard https://example.ceph.redhat.com:8443/#/dashboard Logout Service Redirect Binding URL BASE_URL:8443/ https://example.ceph.redhat.com:8443/ In the Clients window, Mappers tab, set the following parameters and click Save : Table 2.4. Client Mappers tab Name of the parameter Value Protocol saml Name username Mapper Property User Property Property username SAML Attribute name username In the Clients Scope tab, select role_list : In Mappers tab, select role list , set the Single Role Attribute to ON. 
Select User_Federation tab: In User Federation window, select ldap from the drop-down menu: In User_Federation window, Settings tab, set the following parameters and click Save : Table 2.5. User Federation Settings tab Name of the parameter Value Console Display Name rh-ldap Import Users ON Edit_Mode READ_ONLY Username LDAP attribute username RDN LDAP attribute username UUID LDAP attribute nsuniqueid User Object Classes inetOrgPerson, organizationalPerson, rhatPerson Connection URL Example: ldap://ldap.corp.redhat.com Click Test Connection . You will get a notification that the LDAP connection is successful. Users DN ou=users, dc=example, dc=com Bind Type simple Click Test authentication . You will get a notification that the LDAP authentication is successful. In Mappers tab, select first name row and edit the following parameter and Click Save : LDAP Attribute - givenName In User_Federation tab, Settings tab, Click Synchronize all users : You will get a notification that the sync of users is finished successfully. In the Users tab, search for the user added to the dashboard and click the Search icon: To view the user , click the specific row. You should see the federation link as the name provided for the User Federation . Important Do not add users manually as the users will not be synchronized by LDAP. If added manually, delete the user by clicking Delete . Verification Users added to the realm and the dashboard can access the Ceph dashboard with their mail address and password. Example https://example.ceph.redhat.com:8443 Additional Resources For adding roles for users on the dashboard, see the Creating roles on the Ceph dashboard section in the Red Hat Ceph Storage Dashboard Guide for more information. 2.12. Enabling Single Sign-On for the Ceph Dashboard The Ceph Dashboard supports external authentication of users with the Security Assertion Markup Language (SAML) 2.0 protocol. Before using single sign-On (SSO) with the Ceph dashboard, create the dashboard user accounts and assign the desired roles. The Ceph Dashboard performs authorization of the users and the authentication process is performed by an existing Identity Provider (IdP). You can enable single sign-on using the SAML protocol. Prerequisites A running Red Hat Ceph Storage cluster. Installation of the Ceph Dashboard. Root-level access to The Ceph Manager hosts. Procedure To configure SSO on Ceph Dashboard, run the following command: Syntax Example Replace CEPH_MGR_HOST with Ceph mgr host. For example, host01 CEPH_DASHBOARD_BASE_URL with the base URL where Ceph Dashboard is accessible. IDP_METADATA with the URL to remote or local path or content of the IdP metadata XML. The supported URL types are http, https, and file. Optional : IDP_USERNAME_ATTRIBUTE with the attribute used to get the username from the authentication response. Defaults to uid . Optional : IDP_ENTITY_ID with the IdP entity ID when more than one entity ID exists on the IdP metadata. Optional : SP_X_509_CERT with the file path of the certificate used by Ceph Dashboard for signing and encryption. Optional : SP_PRIVATE_KEY with the file path of the private key used by Ceph Dashboard for signing and encryption. Verify the current SAML 2.0 configuration: Syntax Example To enable SSO, run the following command: Syntax Example Open your dashboard URL. Example On the SSO page, enter the login credentials. SSO redirects to the dashboard web interface. 
Additional Resources To disable single sign-on, see Disabling Single Sign-on for the Ceph Dashboard in the Red Hat Ceph Storage Dashboard Guide . 2.13. Disabling Single Sign-On for the Ceph Dashboard You can disable single sign-on for the Ceph Dashboard using the SAML 2.0 protocol. Prerequisites A running Red Hat Ceph Storage cluster. Installation of the Ceph Dashboard. Root-level access to the Ceph Manager hosts. Single sign-on enabled for the Ceph Dashboard. Procedure To view the status of SSO, run the following command: Syntax Example To disable SSO, run the following command: Syntax Example Additional Resources To enable single sign-on, see Enabling Single Sign-on for the Ceph Dashboard in the Red Hat Ceph Storage Dashboard Guide .
[ "URL: https://host01:8443/ User: admin Password: zbiql951ar", "cephadm bootstrap --mon-ip 127.0.0.1 --registry-json cephadm.txt --initial-dashboard-user admin --initial-dashboard-password zbiql951ar --dashboard-password-noupdate --allow-fqdn-hostname", "https:// HOST_NAME : PORT", "https://host01:8443", "ceph mgr services", "ssh-copy-id -f -i /etc/ceph/ceph.pub root@ HOST_NAME", "ssh-copy-id -f -i /etc/ceph/ceph.pub root@host02 ssh-copy-id -f -i /etc/ceph/ceph.pub root@host03", "cephadm shell", "ceph -s", "cephadm shell", "ceph dashboard feature status", "ceph dashboard feature disable rgw", "ceph dashboard feature enable cephfs", "https:// HOST_NAME :8443", "cephadm shell", "touch dashboard_password.yml", "vi dashboard_password.yml", "ceph dashboard ac-user-set-password DASHBOARD_USERNAME -i PASSWORD_FILE", "ceph dashboard ac-user-set-password admin -i dashboard_password.yml {\"username\": \"admin\", \"password\": \"USD2bUSD12USDi5RmvN1PolR61Fay0mPgt.GDpcga1QpYsaHUbJfoqaHd1rfFFx7XS\", \"roles\": [\"administrator\"], \"name\": null, \"email\": null, \"lastUpdate\": , \"enabled\": true, \"pwdExpirationDate\": null, \"pwdUpdateRequired\": false}", "service_type: grafana spec: initial_admin_password: PASSWORD", "service_type: grafana spec: initial_admin_password: mypassword", "cephadm shell --mount grafana.yml:/var/lib/ceph/grafana.yml", "ceph mgr module ls", "ceph mgr module enable dashboard", "ceph orch apply -i FILE_NAME .yml", "ceph orch apply -i /var/lib/ceph/grafana.yml", "ceph orch redeploy grafana", "https:// HOST_NAME : PORT", "https://host01:3000/", "cephadm shell", "ceph mgr services { \"prometheus\": \"http://10.8.0.101:9283/\" }", "ceph mgr module enable dashboard", "ceph dashboard create-self-signed-cert", "ceph mgr services { \"dashboard\": \"https://10.8.0.101:8443/\", \"prometheus\": \"http://10.8.0.101:9283/\" }", "echo -n \" PASSWORD \" > PASSWORD_FILE ceph dashboard ac-user-create admin -i PASSWORD_FILE administrator", "echo -n \"p@ssw0rd\" > password.txt ceph dashboard ac-user-create admin -i password.txt administrator", "unzip rhsso-7.4.0.zip", "cd standalone/configuration vi standalone.xml", "keytool -import -noprompt -trustcacerts -alias ca -file ../ca.cer -keystore /etc/java/java-1.8.0-openjdk/java-1.8.0-openjdk-1.8.0.272.b10-3.el8_3.x86_64/lib/security/cacert", "./standalone.sh", "cephadm shell CEPH_MGR_HOST ceph dashboard sso setup saml2 CEPH_DASHBOARD_BASE_URL IDP_METADATA IDP_USERNAME_ATTRIBUTE IDP_ENTITY_ID SP_X_509_CERT SP_PRIVATE_KEY", "cephadm shell host01 ceph dashboard sso setup saml2 https://dashboard_hostname.ceph.redhat.com:8443 idp-metadata.xml username https://10.70.59.125:8080/auth/realms/realm_name /home/certificate.txt /home/private-key.txt", "cephadm shell CEPH_MGR_HOST ceph dashboard sso show saml2", "cephadm shell host01 ceph dashboard sso show saml2", "cephadm shell CEPH_MGR_HOST ceph dashboard sso enable saml2 SSO is \"enabled\" with \"SAML2\" protocol.", "cephadm shell host01 ceph dashboard sso enable saml2", "https://dashboard_hostname.ceph.redhat.com:8443", "cephadm shell CEPH_MGR_HOST ceph dashboard sso status", "cephadm shell host01 ceph dashboard sso status SSO is \"enabled\" with \"SAML2\" protocol.", "cephadm shell CEPH_MGR_HOST ceph dashboard sso disable SSO is \"disabled\".", "cephadm shell host01 ceph dashboard sso disable" ]
https://docs.redhat.com/en/documentation/red_hat_ceph_storage/6/html/dashboard_guide/ceph-dashboard-installation-and-access
5.10. Configuring Fencing for Redundant Power Supplies
5.10. Configuring Fencing for Redundant Power Supplies When configuring fencing for redundant power supplies, the cluster must ensure that when attempting to reboot a host, both power supplies are turned off before either power supply is turned back on. If the node never completely loses power, the node may not release its resources. This opens up the possibility of nodes accessing these resources simultaneously and corrupting them. Prior to Red Hat Enterprise Linux 7.2, you needed to explicitly configure different versions of the devices which used either the 'on' or 'off' actions. Since Red Hat Enterprise Linux 7.2, it is now only required to define each device once and to specify that both are required to fence the node, as in the following example.
[ "pcs stonith create apc1 fence_apc_snmp ipaddr=apc1.example.com login=user passwd='7a4D#1j!pz864' pcmk_host_map=\"node1.example.com:1;node2.example.com:2\" pcs stonith create apc2 fence_apc_snmp ipaddr=apc2.example.com login=user passwd='7a4D#1j!pz864' pcmk_host_map=\"node1.example.com:1;node2.example.com:2\" pcs stonith level add 1 node1.example.com apc1,apc2 pcs stonith level add 1 node2.example.com apc1,apc2" ]
https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/high_availability_add-on_reference/s1-redundantfence-HAAR
function::json_add_array_numeric_metric
function::json_add_array_numeric_metric Name function::json_add_array_numeric_metric - Add a numeric metric to an array Synopsis Arguments array_name The name of the array the numeric metric should be added to. metric_name The name of the numeric metric. metric_description Metric description. An empty string can be used. metric_units Metric units. An empty string can be used. Description This function adds a numeric metric to an array, setting up everything needed.
[ "json_add_array_numeric_metric:long(array_name:string,metric_name:string,metric_description:string,metric_units:string)" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/systemtap_tapset_reference/api-json-add-array-numeric-metric
Chapter 5. Upgrading the Ansible plug-ins on a Helm installation on OpenShift Container Platform
Chapter 5. Upgrading the Ansible plug-ins on a Helm installation on OpenShift Container Platform To upgrade the Ansible plug-ins, you must update the plugin-registry application with the latest Ansible plug-ins files. 5.1. Downloading the Ansible plug-ins files Download the latest .tar file for the plug-ins from the Red Hat Ansible Automation Platform Product Software downloads page . The format of the filename is ansible-backstage-rhaap-bundle-x.y.z.tar.gz . Substitute the Ansible plug-ins release version, for example 1.0.0 , for x.y.z . Create a directory on your local machine to store the .tar files. $ mkdir /path/to/<ansible-backstage-plugins-local-dir-changeme> Set an environment variable ( $DYNAMIC_PLUGIN_ROOT_DIR ) to represent the directory path. $ export DYNAMIC_PLUGIN_ROOT_DIR=/path/to/<ansible-backstage-plugins-local-dir-changeme> Extract the ansible-backstage-rhaap-bundle-<version-number>.tar.gz contents to $DYNAMIC_PLUGIN_ROOT_DIR . $ tar --exclude='*code*' -xzf ansible-backstage-rhaap-bundle-x.y.z.tar.gz -C $DYNAMIC_PLUGIN_ROOT_DIR Substitute the Ansible plug-ins release version, for example 1.0.0 , for x.y.z . Verification Run ls to verify that the extracted files are in the $DYNAMIC_PLUGIN_ROOT_DIR directory: $ ls $DYNAMIC_PLUGIN_ROOT_DIR ansible-plugin-backstage-rhaap-x.y.z.tgz ansible-plugin-backstage-rhaap-x.y.z.tgz.integrity ansible-plugin-backstage-rhaap-backend-x.y.z.tgz ansible-plugin-backstage-rhaap-backend-x.y.z.tgz.integrity ansible-plugin-scaffolder-backend-module-backstage-rhaap-x.y.z.tgz ansible-plugin-scaffolder-backend-module-backstage-rhaap-x.y.z.tgz.integrity The files with the .integrity file type contain the plugin SHA value. The SHA value is used during the plug-in configuration. 5.2. Updating the plug-in registry Rebuild your plug-in registry application in your OpenShift cluster with the latest Ansible plug-ins files. Prerequisites You have downloaded the Ansible plug-ins files. You have set an environment variable, for example ( $DYNAMIC_PLUGIN_ROOT_DIR ), to represent the path to the local directory where you have stored the .tar files. Procedure Log in to your OpenShift Container Platform instance with credentials to create a new application. Open your Red Hat Developer Hub OpenShift project. $ oc project <YOUR_DEVELOPER_HUB_PROJECT> Run the following command to update your plug-in registry build in the OpenShift cluster. The command assumes that $DYNAMIC_PLUGIN_ROOT_DIR represents the directory for your .tar files. Replace this in the command if you have chosen a different environment variable name. $ oc start-build plugin-registry --from-dir=$DYNAMIC_PLUGIN_ROOT_DIR --wait When the registry has started, the output displays the following message: Uploading directory "/path/to/dynamic_plugin_root" as binary input for the build ... Uploading finished build.build.openshift.io/plugin-registry-1 started Verification Verify that the plugin-registry has been updated. In the OpenShift UI, click Topology . Click the redhat-developer-hub icon to view the pods for the plug-in registry. Click View logs for the plug-in registry pod. Open the Terminal tab and run ls to view the .tar files in the plug-in registry . Verify that the new .tar file has been uploaded. 5.3. Updating the Ansible plug-ins version numbers for a Helm installation Procedure Log in to your OpenShift Container Platform instance.
In the OpenShift Developer UI, navigate to Helm developer-hub Actions Upgrade Yaml view . Update the Ansible plug-ins version numbers and associated .integrity file values. ... global: ... plugins: - disabled: false integrity: <SHA512 value> package: 'http://plugin-registry:8080/ansible-plugin-backstage-rhaap-x.y.z.tgz' pluginConfig: dynamicPlugins: frontend: ansible.plugin-backstage-rhaap: appIcons: - importName: AnsibleLogo name: AnsibleLogo dynamicRoutes: - importName: AnsiblePage menuItem: icon: AnsibleLogo text: Ansible path: /ansible - disabled: false integrity: <SHA512 value> package: >- http://plugin-registry:8080/ansible-plugin-scaffolder-backend-module-backstage-rhaap-x.y.z.tgz pluginConfig: dynamicPlugins: backend: ansible.plugin-scaffolder-backend-module-backstage-rhaap: null - disabled: false integrity: <SHA512 value> package: >- http://plugin-registry:8080/ansible-plugin-backstage-rhaap-backend-x.y.z.tgz pluginConfig: dynamicPlugins: backend: ansible.plugin-backstage-rhaap-backend: null Click Upgrade . The developer hub pods restart and the plug-ins are installed. Verification In the OpenShift UI, click Topology . Make sure that the Red Hat Developer Hub instance is available.
[ "mkdir /path/to/<ansible-backstage-plugins-local-dir-changeme>", "export DYNAMIC_PLUGIN_ROOT_DIR=/path/to/<ansible-backstage-plugins-local-dir-changeme>", "tar --exclude='*code*' -xzf ansible-backstage-rhaap-bundle-x.y.z.tar.gz -C USDDYNAMIC_PLUGIN_ROOT_DIR", "ls USDDYNAMIC_PLUGIN_ROOT_DIR ansible-plugin-backstage-rhaap-x.y.z.tgz ansible-plugin-backstage-rhaap-x.y.z.tgz.integrity ansible-plugin-backstage-rhaap-backend-x.y.z.tgz ansible-plugin-backstage-rhaap-backend-x.y.z.tgz.integrity ansible-plugin-scaffolder-backend-module-backstage-rhaap-x.y.z.tgz ansible-plugin-scaffolder-backend-module-backstage-rhaap-x.y.z.tgz.integrity", "oc project <YOUR_DEVELOPER_HUB_PROJECT>", "oc start-build plugin-registry --from-dir=USDDYNAMIC_PLUGIN_ROOT_DIR --wait", "oc start-build plugin-registry --from-dir=USDDYNAMIC_PLUGIN_ROOT_DIR --wait", "Uploading directory \"/path/to/dynamic_plugin_root\" as binary input for the build ... Uploading finished build.build.openshift.io/plugin-registry-1 started", "global: plugins: - disabled: false integrity: <SHA512 value> package: 'http://plugin-registry:8080/ansible-plugin-backstage-rhaap-x.y.z.tgz' pluginConfig: dynamicPlugins: frontend: ansible.plugin-backstage-rhaap: appIcons: - importName: AnsibleLogo name: AnsibleLogo dynamicRoutes: - importName: AnsiblePage menuItem: icon: AnsibleLogo text: Ansible path: /ansible - disabled: false integrity: <SHA512 value> package: >- http://plugin-registry:8080/ansible-plugin-scaffolder-backend-module-backstage-rhaap-x.y.z.tgz pluginConfig: dynamicPlugins: backend: ansible.plugin-scaffolder-backend-module-backstage-rhaap: null - disabled: false integrity: <SHA512 value> package: >- http://plugin-registry:8080/ansible-plugin-backstage-rhaap-backend-x.y.z.tgz pluginConfig: dynamicPlugins: backend: ansible.plugin-backstage-rhaap-backend: null" ]
https://docs.redhat.com/en/documentation/red_hat_ansible_automation_platform/2.4/html/installing_ansible_plug-ins_for_red_hat_developer_hub/rhdh-upgrade-ocp-helm_aap-plugin-rhdh-installing
Red Hat Data Grid
Red Hat Data Grid Data Grid is a high-performance, distributed in-memory data store. Schemaless data structure Flexibility to store different objects as key-value pairs. Grid-based data storage Designed to distribute and replicate data across clusters. Elastic scaling Dynamically adjust the number of nodes to meet demand without service disruption. Data interoperability Store, retrieve, and query data in the grid from different endpoints.
null
https://docs.redhat.com/en/documentation/red_hat_data_grid/8.5/html/data_grid_performance_and_sizing_guide/red-hat-data-grid
2.2. Data Roles
2.2. Data Roles All authenticated users have access to a VDB. To restrict access, configure data roles. Set these in the Teiid Designer or the dynamic VDB's META-INF/vdb.xml file. As part of the data role definition, you can map them to JAAS roles specified in <mapped-role-name> tags. You can set up JAAS roles using the web console. How these JAAS roles are associated with users depends on which particular JAAS login module you use. For instance, the default UsersRolesLoginModule associates users with JAAS roles in plain text files. For more information about data roles, see Red Hat JBoss Data Virtualization Development Guide: Reference Material . Important Do not use "admin" or "user" as JAAS role names as these are reserved specifically for Dashboard Builder permissions.
null
https://docs.redhat.com/en/documentation/red_hat_jboss_data_virtualization/6.4/html/administration_and_configuration_guide/data_roles3
Chapter 6. HostFirmwareSettings [metal3.io/v1alpha1]
Chapter 6. HostFirmwareSettings [metal3.io/v1alpha1] Description HostFirmwareSettings is the Schema for the hostfirmwaresettings API Type object 6.1. Specification Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ObjectMeta Standard object's metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata spec object HostFirmwareSettingsSpec defines the desired state of HostFirmwareSettings status object HostFirmwareSettingsStatus defines the observed state of HostFirmwareSettings 6.1.1. .spec Description HostFirmwareSettingsSpec defines the desired state of HostFirmwareSettings Type object Required settings Property Type Description settings integer-or-string Settings are the desired firmware settings stored as name/value pairs. 6.1.2. .status Description HostFirmwareSettingsStatus defines the observed state of HostFirmwareSettings Type object Required settings Property Type Description conditions array Track whether settings stored in the spec are valid based on the schema conditions[] object Condition contains details for one aspect of the current state of this API Resource. --- This struct is intended for direct use as an array at the field path .status.conditions. For example, type FooStatus struct{ // Represents the observations of a foo's current state. // Known .status.conditions.type are: "Available", "Progressing", and "Degraded" // +patchMergeKey=type // +patchStrategy=merge // +listType=map // +listMapKey=type Conditions []metav1.Condition json:"conditions,omitempty" patchStrategy:"merge" patchMergeKey:"type" protobuf:"bytes,1,rep,name=conditions" // other fields } lastUpdated string Time that the status was last updated schema object FirmwareSchema is a reference to the Schema used to describe each FirmwareSetting. By default, this will be a Schema in the same Namespace as the settings but it can be overwritten in the Spec settings object (string) Settings are the firmware settings stored as name/value pairs 6.1.3. .status.conditions Description Track whether settings stored in the spec are valid based on the schema Type array 6.1.4. .status.conditions[] Description Condition contains details for one aspect of the current state of this API Resource. --- This struct is intended for direct use as an array at the field path .status.conditions. For example, type FooStatus struct{ // Represents the observations of a foo's current state. 
// Known .status.conditions.type are: "Available", "Progressing", and "Degraded" // +patchMergeKey=type // +patchStrategy=merge // +listType=map // +listMapKey=type Conditions []metav1.Condition json:"conditions,omitempty" patchStrategy:"merge" patchMergeKey:"type" protobuf:"bytes,1,rep,name=conditions" // other fields } Type object Required lastTransitionTime message reason status type Property Type Description lastTransitionTime string lastTransitionTime is the last time the condition transitioned from one status to another. This should be when the underlying condition changed. If that is not known, then using the time when the API field changed is acceptable. message string message is a human readable message indicating details about the transition. This may be an empty string. observedGeneration integer observedGeneration represents the .metadata.generation that the condition was set based upon. For instance, if .metadata.generation is currently 12, but the .status.conditions[x].observedGeneration is 9, the condition is out of date with respect to the current state of the instance. reason string reason contains a programmatic identifier indicating the reason for the condition's last transition. Producers of specific condition types may define expected values and meanings for this field, and whether the values are considered a guaranteed API. The value should be a CamelCase string. This field may not be empty. status string status of the condition, one of True, False, Unknown. type string type of condition in CamelCase or in foo.example.com/CamelCase. --- Many .condition.type values are consistent across resources like Available, but because arbitrary conditions can be useful (see .node.status.conditions), the ability to deconflict is important. The regex it matches is (dns1123SubdomainFmt/)?(qualifiedNameFmt) 6.1.5. .status.schema Description FirmwareSchema is a reference to the Schema used to describe each FirmwareSetting. By default, this will be a Schema in the same Namespace as the settings but it can be overwritten in the Spec Type object Required name namespace Property Type Description name string name is the reference to the schema. namespace string namespace is the namespace of the where the schema is stored. 6.2. API endpoints The following API endpoints are available: /apis/metal3.io/v1alpha1/hostfirmwaresettings GET : list objects of kind HostFirmwareSettings /apis/metal3.io/v1alpha1/namespaces/{namespace}/hostfirmwaresettings DELETE : delete collection of HostFirmwareSettings GET : list objects of kind HostFirmwareSettings POST : create HostFirmwareSettings /apis/metal3.io/v1alpha1/namespaces/{namespace}/hostfirmwaresettings/{name} DELETE : delete HostFirmwareSettings GET : read the specified HostFirmwareSettings PATCH : partially update the specified HostFirmwareSettings PUT : replace the specified HostFirmwareSettings /apis/metal3.io/v1alpha1/namespaces/{namespace}/hostfirmwaresettings/{name}/status GET : read status of the specified HostFirmwareSettings PATCH : partially update status of the specified HostFirmwareSettings PUT : replace status of the specified HostFirmwareSettings 6.2.1. /apis/metal3.io/v1alpha1/hostfirmwaresettings HTTP method GET Description list objects of kind HostFirmwareSettings Table 6.1. HTTP responses HTTP code Reponse body 200 - OK HostFirmwareSettingsList schema 401 - Unauthorized Empty 6.2.2. /apis/metal3.io/v1alpha1/namespaces/{namespace}/hostfirmwaresettings HTTP method DELETE Description delete collection of HostFirmwareSettings Table 6.2. 
HTTP responses HTTP code Reponse body 200 - OK Status schema 401 - Unauthorized Empty HTTP method GET Description list objects of kind HostFirmwareSettings Table 6.3. HTTP responses HTTP code Reponse body 200 - OK HostFirmwareSettingsList schema 401 - Unauthorized Empty HTTP method POST Description create HostFirmwareSettings Table 6.4. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 6.5. Body parameters Parameter Type Description body HostFirmwareSettings schema Table 6.6. HTTP responses HTTP code Reponse body 200 - OK HostFirmwareSettings schema 201 - Created HostFirmwareSettings schema 202 - Accepted HostFirmwareSettings schema 401 - Unauthorized Empty 6.2.3. /apis/metal3.io/v1alpha1/namespaces/{namespace}/hostfirmwaresettings/{name} Table 6.7. Global path parameters Parameter Type Description name string name of the HostFirmwareSettings HTTP method DELETE Description delete HostFirmwareSettings Table 6.8. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed Table 6.9. HTTP responses HTTP code Reponse body 200 - OK Status schema 202 - Accepted Status schema 401 - Unauthorized Empty HTTP method GET Description read the specified HostFirmwareSettings Table 6.10. HTTP responses HTTP code Reponse body 200 - OK HostFirmwareSettings schema 401 - Unauthorized Empty HTTP method PATCH Description partially update the specified HostFirmwareSettings Table 6.11. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. 
- Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 6.12. HTTP responses HTTP code Reponse body 200 - OK HostFirmwareSettings schema 401 - Unauthorized Empty HTTP method PUT Description replace the specified HostFirmwareSettings Table 6.13. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 6.14. Body parameters Parameter Type Description body HostFirmwareSettings schema Table 6.15. HTTP responses HTTP code Reponse body 200 - OK HostFirmwareSettings schema 201 - Created HostFirmwareSettings schema 401 - Unauthorized Empty 6.2.4. /apis/metal3.io/v1alpha1/namespaces/{namespace}/hostfirmwaresettings/{name}/status Table 6.16. Global path parameters Parameter Type Description name string name of the HostFirmwareSettings HTTP method GET Description read status of the specified HostFirmwareSettings Table 6.17. HTTP responses HTTP code Reponse body 200 - OK HostFirmwareSettings schema 401 - Unauthorized Empty HTTP method PATCH Description partially update status of the specified HostFirmwareSettings Table 6.18. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. 
- Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 6.19. HTTP responses HTTP code Reponse body 200 - OK HostFirmwareSettings schema 401 - Unauthorized Empty HTTP method PUT Description replace status of the specified HostFirmwareSettings Table 6.20. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 6.21. Body parameters Parameter Type Description body HostFirmwareSettings schema Table 6.22. HTTP responses HTTP code Reponse body 200 - OK HostFirmwareSettings schema 201 - Created HostFirmwareSettings schema 401 - Unauthorized Empty
https://docs.redhat.com/en/documentation/openshift_container_platform/4.15/html/provisioning_apis/hostfirmwaresettings-metal3-io-v1alpha1
Chapter 9. Introduction to Red Hat build of OptaPlanner
Chapter 9. Introduction to Red Hat build of OptaPlanner OptaPlanner is a lightweight, embeddable planning engine that optimizes planning problems. It helps normal Java programmers solve planning problems efficiently, and it combines optimization heuristics and metaheuristics with very efficient score calculations. For example, OptaPlanner helps solve various use cases: Employee/Patient Rosters: It helps create timetables for nurses and keeps track of patient bed management. Educational Timetables: It helps schedule lessons, courses, exams, and conference presentations. Shop Schedules: It tracks car assembly lines, machine queue planning, and workforce task planning. Cutting Stock: It minimizes waste by reducing the consumption of resources such as paper and steel. Every organization faces planning problems; that is, each provides products and services with a limited set of constrained resources (employees, assets, time, and money). OptaPlanner is open source software under the Apache Software License 2.0. It is 100% pure Java and runs on most Java virtual machines (JVMs). 9.1. Planning problems A planning problem has an optimal goal, based on limited resources and under specific constraints. Optimal goals can be any number of things, such as: Maximized profits - the optimal goal results in the highest possible profit. Minimized ecological footprint - the optimal goal has the least amount of environmental impact. Maximized satisfaction for employees or customers - the optimal goal prioritizes the needs of employees or customers. The ability to achieve these goals relies on the number of resources available. For example, the following resources might be limited: Number of people Amount of time Budget Physical assets, for example, machinery, vehicles, computers, buildings You must also take into account the specific constraints related to these resources, such as the number of hours a person works, their ability to use certain machines, or compatibility between pieces of equipment. Red Hat build of OptaPlanner helps Java programmers solve constraint satisfaction problems efficiently. It combines optimization heuristics and metaheuristics with efficient score calculation. 9.2. NP-completeness in planning problems The provided use cases are probably NP-complete or NP-hard, which means the following statements apply: It is easy to verify a specific solution to a problem in reasonable time. There is no simple way to find the optimal solution of a problem in reasonable time. The implication is that solving your problem is probably harder than you anticipated, because the two common techniques do not suffice: A brute force algorithm (even a more advanced variant) takes too long. A quick algorithm, for example placing the largest items first in the bin packing problem, returns a solution that is far from optimal. By using advanced optimization algorithms, OptaPlanner finds a good solution in reasonable time for such planning problems. 9.3. Solutions to planning problems A planning problem has a number of solutions. Several categories of solutions are: Possible solution A possible solution is any solution, whether or not it breaks any number of constraints. Planning problems often have an incredibly large number of possible solutions. Many of those solutions are not useful. Feasible solution A feasible solution is a solution that does not break any (negative) hard constraints. The number of feasible solutions is relative to the number of possible solutions.
Sometimes there are no feasible solutions. Every feasible solution is a possible solution. Optimal solution Optimal solutions are the solutions with the highest scores. Planning problems usually have a few optimal solutions. They always have at least one optimal solution, even in the case that there are no feasible solutions and the optimal solution is not feasible. Best solution found The best solution is the solution with the highest score found by an implementation in a specified amount of time. The best solution found is likely to be feasible and, given enough time, it is an optimal solution. Counterintuitively, the number of possible solutions is huge (if calculated correctly), even with a small data set. In the examples provided in the planner-engine distribution folder, most instances have a large number of possible solutions. As there is no guaranteed way to find the optimal solution, any implementation is forced to evaluate at least a subset of all those possible solutions. OptaPlanner supports several optimization algorithms to efficiently wade through that incredibly large number of possible solutions. Depending on the use case, some optimization algorithms perform better than others, but it is impossible to know in advance which will perform best. Using OptaPlanner, you can switch the optimization algorithm by changing the solver configuration in a few lines of XML or code (a minimal usage sketch appears at the end of this chapter). 9.4. Constraints on planning problems Usually, a planning problem has at least two levels of constraints: A (negative) hard constraint must not be broken. For example, one teacher cannot teach two different lessons at the same time. A (negative) soft constraint should not be broken if it can be avoided. For example, Teacher A does not like to teach on Friday afternoons. Some problems also have positive constraints: A positive soft constraint (or reward) should be fulfilled if possible. For example, Teacher B likes to teach on Monday mornings. Some basic problems only have hard constraints. Some problems have three or more levels of constraints, for example, hard, medium, and soft constraints. These constraints define the score calculation (otherwise known as the fitness function) of a planning problem. Each solution of a planning problem is graded with a score. With OptaPlanner, score constraints are written in an object-oriented language such as Java, or in Drools rules. This type of code is flexible and scalable. A short sketch of how these constraint levels appear as score types follows just before Table 9.1. 9.5. Examples provided with Red Hat build of OptaPlanner Several Red Hat build of OptaPlanner examples are shipped with Red Hat Decision Manager. You can review the code for examples and modify it as necessary to suit your needs. Note Red Hat does not provide support for the example code included in the Red Hat Decision Manager distribution. Some of the OptaPlanner examples solve problems that are presented in academic contests. The Contest column in the following table lists the contests. It also identifies an example as being either realistic or unrealistic for the purpose of a contest. A realistic contest is an official, independent contest that meets the following standards: Clearly defined real-world use cases Real-world constraints Multiple real-world datasets Reproducible results within a specific time limit on specific hardware Serious participation from the academic and/or enterprise Operations Research community. Realistic contests provide an objective comparison of OptaPlanner with competitive software and academic research.
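Before moving on to the examples, the following minimal sketch shows how the constraint levels described in Section 9.4 surface as score types. Using the HardSoftScore class directly like this is purely illustrative (in a real project the solver produces the scores for you), and the exact factory method can differ between OptaPlanner versions, so treat the details as assumptions rather than code from the examples.

```java
import org.optaplanner.core.api.score.buildin.hardsoft.HardSoftScore;

public class ScoreLevelSketch {

    public static void main(String[] args) {
        // A broken hard constraint (-1 hard) always outranks any number of broken
        // soft constraints (-20 soft), because hard levels are compared first.
        HardSoftScore infeasible = HardSoftScore.of(-1, 0);
        HardSoftScore feasibleButImperfect = HardSoftScore.of(0, -20);

        System.out.println(infeasible.isFeasible());             // false: a hard constraint is broken
        System.out.println(feasibleButImperfect.isFeasible());   // true: only soft constraints are broken
        System.out.println(infeasible.compareTo(feasibleButImperfect) < 0); // true: the infeasible solution ranks lower
    }
}
```

Problems with three or more constraint levels typically use a hard-medium-soft or bendable score type in the same way.
Table 9.1.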
Examples overview Example Domain Size Contest Directory name N queens 1 entity class (1 variable) Entity ⇐ 256 Value ⇐ 256 Search space ⇐ 10^616 Pointless (cheatable) nqueens Cloud balancing 1 entity class (1 variable) Entity ⇐ 2400 Value ⇐ 800 Search space ⇐ 10^6967 No (Defined by us) cloudbalancing Traveling salesman 1 entity class (1 chained variable) Entity ⇐ 980 Value ⇐ 980 Search space ⇐ 10^2504 Unrealistic TSP web tsp Tennis club scheduling 1 entity class (1 variable) Entity ⇐ 72 Value ⇐ 7 Search space ⇐ 10^60 No (Defined by us) tennis Meeting scheduling 1 entity class (2 variables) Entity ⇐ 10 Value ⇐ 320 and ⇐ 5 Search space ⇐ 10^320 No (Defined by us) meetingscheduling Course timetabling 1 entity class (2 variables) Entity ⇐ 434 Value ⇐ 25 and ⇐ 20 Search space ⇐ 10^1171 Realistic ITC 2007 track 3 curriculumCourse Machine reassignment 1 entity class (1 variable) Entity ⇐ 50000 Value ⇐ 5000 Search space ⇐ 10^184948 Nearly realistic ROADEF 2012 machineReassignment Vehicle routing 1 entity class (1 chained variable) 1 shadow entity class (1 automatic shadow variable) Entity ⇐ 2740 Value ⇐ 2795 Search space ⇐ 10^8380 Unrealistic VRP web vehiclerouting Vehicle routing with time windows All of Vehicle routing (1 shadow variable) Entity ⇐ 2740 Value ⇐ 2795 Search space ⇐ 10^8380 Unrealistic VRP web vehiclerouting Project job scheduling 1 entity class (2 variables) (1 shadow variable) Entity ⇐ 640 Value ⇐ ? and ⇐ ? Search space ⇐ ? Nearly realistic MISTA 2013 projectjobscheduling Task assigning 1 entity class (1 chained variable) (1 shadow variable) 1 shadow entity class (1 automatic shadow variable) Entity ⇐ 500 Value ⇐ 520 Search space ⇐ 10^1168 No Defined by us taskassigning Exam timetabling 2 entity classes (same hierarchy) (2 variables) Entity ⇐ 1096 Value ⇐ 80 and ⇐ 49 Search space ⇐ 10^3374 Realistic ITC 2007 track 1 examination Nurse rostering 1 entity class (1 variable) Entity ⇐ 752 Value ⇐ 50 Search space ⇐ 10^1277 Realistic INRC 2010 nurserostering Traveling tournament 1 entity class (1 variable) Entity ⇐ 1560 Value ⇐ 78 Search space ⇐ 10^2301 Unrealistic TTP travelingtournament Cheap time scheduling 1 entity class (2 variables) Entity ⇐ 500 Value ⇐ 100 and ⇐ 288 Search space ⇐ 10^20078 Nearly realistic ICON Energy cheaptimescheduling Investment 1 entity class (1 variable) Entity ⇐ 11 Value = 1000 Search space ⇐ 10^4 No Defined by us investment Conference scheduling 1 entity class (2 variables) Entity ⇐ 216 Value ⇐ 18 and ⇐ 20 Search space ⇐ 10^552 No Defined by us conferencescheduling Rock tour 1 entity class (1 chained variable) (4 shadow variables) 1 shadow entity class (1 automatic shadow variable) Entity ⇐ 47 Value ⇐ 48 Search space ⇐ 10^59 No Defined by us rocktour Flight crew scheduling 1 entity class (1 variable) 1 shadow entity class (1 automatic shadow variable) Entity ⇐ 4375 Value ⇐ 750 Search space ⇐ 10^12578 No Defined by us flightcrewscheduling 9.6. N queens Place n number of queens on an n sized chessboard so that no two queens can attack each other. The most common n queens puzzle is the eight queens puzzle, with n = 8 : Constraints: Use a chessboard of n columns and n rows. Place n queens on the chessboard. No two queens can attack each other. A queen can attack any other queen on the same horizontal, vertical, or diagonal line. This documentation heavily uses the four queens puzzle as the primary example. A proposed solution could be: Figure 9.1. 
A wrong solution for the four queens puzzle The above solution is wrong because queens A1 and B0 can attack each other (so can queens B0 and D0). Removing queen B0 would respect the "no two queens can attack each other" constraint, but would break the "place n queens" constraint. Below is a correct solution: Figure 9.2. A correct solution for the Four queens puzzle All the constraints have been met, so the solution is correct. Note that most n queens puzzles have multiple correct solutions. We will focus on finding a single correct solution for a specific n, not on finding the number of possible correct solutions for a specific n. Problem size The implementation of the n queens example has not been optimized because it functions as a beginner example. Nevertheless, it can easily handle 64 queens. With a few changes it has been shown to easily handle 5000 queens and more. 9.6.1. Domain model for N queens This example uses the domain model to solve the four queens problem. Creating a Domain Model A good domain model will make it easier to understand and solve your planning problem. This is the domain model for the n queens example: public class Column { private int index; // ... getters and setters } public class Row { private int index; // ... getters and setters } public class Queen { private Column column; private Row row; public int getAscendingDiagonalIndex() {...} public int getDescendingDiagonalIndex() {...} // ... getters and setters } Calculating the Search Space. A Queen instance has a Column (for example: 0 is column A, 1 is column B, ... ) and a Row (its row, for example: 0 is row 0, 1 is row 1, ... ). The ascending diagonal line and the descending diagonal line can be calculated based on the column and the row. The column and row indexes start from the upper left corner of the chessboard. public class NQueens { private int n; private List<Column> columnList; private List<Row> rowList; private List<Queen> queenList; private SimpleScore score; // ... getters and setters } Finding the Solution A single NQueens instance contains a list of all Queen instances. It is the Solution implementation which will be supplied to, solved by, and retrieved from the Solver. Notice that in the four queens example, the NQueens getN() method will always return four. Figure 9.3. A solution for Four Queens Table 9.2. Details of the solution in the domain model (columnIndex, rowIndex, ascendingDiagonalIndex = columnIndex + rowIndex, descendingDiagonalIndex = columnIndex - rowIndex):
A1: 0, 1, 1 (**), -1
B0: 1, 0 (*), 1 (**), 1
C2: 2, 2, 4, 0
D0: 3, 0 (*), 3, 3
When two queens share the same column, row or diagonal line, such as (*) and (**), they can attack each other. A minimal score calculator sketch based on this model follows at the end of Section 9.8. 9.7. Cloud balancing For information about this example, see Red Hat build of OptaPlanner quick start guides. 9.8. Traveling salesman (TSP - Traveling Salesman Problem) Given a list of cities, find the shortest tour for a salesman that visits each city exactly once. The problem is defined by Wikipedia. It is one of the most intensively studied problems in computational mathematics. Yet, in the real world, it is often only part of a planning problem, along with other constraints, such as employee shift rostering constraints. Problem size Problem difficulty Despite TSP's simple definition, the problem is surprisingly hard to solve. Because it is an NP-hard problem (like most planning problems), the optimal solution for a specific problem dataset can change a lot when that problem dataset is slightly altered.
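To connect the domain model in Section 9.6.1 with the score calculation described in Section 9.4, the following is a minimal, non-incremental score calculator sketch that counts attacking queen pairs. It assumes the NQueens and Queen classes shown above; the EasyScoreCalculator package and generics vary between OptaPlanner versions, and the shipped example uses a faster calculation, so treat this as an illustrative assumption rather than the distribution's code.

```java
import java.util.List;

import org.optaplanner.core.api.score.buildin.simple.SimpleScore;
import org.optaplanner.core.api.score.calculator.EasyScoreCalculator; // package differs in older versions

public class NQueensEasyScoreCalculator implements EasyScoreCalculator<NQueens, SimpleScore> {

    @Override
    public SimpleScore calculateScore(NQueens nQueens) {
        List<Queen> queenList = nQueens.getQueenList();
        int attackingPairCount = 0;
        for (int i = 0; i < queenList.size(); i++) {
            Queen left = queenList.get(i);
            for (int j = i + 1; j < queenList.size(); j++) {
                Queen right = queenList.get(j);
                // Queens always sit in different columns in this model, so only shared
                // rows and shared diagonals count as attacks.
                if (left.getRow() != null && right.getRow() != null
                        && (left.getRow().equals(right.getRow())
                            || left.getAscendingDiagonalIndex() == right.getAscendingDiagonalIndex()
                            || left.getDescendingDiagonalIndex() == right.getDescendingDiagonalIndex())) {
                    attackingPairCount++;
                }
            }
        }
        // Each attacking pair costs one point; a score of 0 means no queen can attack another.
        // Older versions use SimpleScore.valueOf(...) instead of SimpleScore.of(...).
        return SimpleScore.of(-attackingPairCount);
    }
}
```

An easy calculator like this recalculates the whole score after every move, which is the shortest way to see how the "no two queens can attack each other" constraint becomes a SimpleScore; the examples in the distribution typically rely on faster incremental or rule-based scoring for larger data sets.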
9.9. Tennis club scheduling Every week the tennis club has four teams playing round robin against each other. Assign those four spots to the teams fairly. Hard constraints: Conflict: A team can only play once per day. Unavailability: Some teams are unavailable on some dates. Medium constraints: Fair assignment: All teams should play an (almost) equal number of times. Soft constraints: Even confrontation: Each team should play against every other team an equal number of times. Problem size Figure 9.4. Domain model 9.10. Meeting scheduling Assign each meeting to a starting time and a room. Meetings have different durations. Hard constraints: Room conflict: Two meetings must not use the same room at the same time. Required attendance: A person cannot have two required meetings at the same time. Required room capacity: A meeting must not be in a room that does not fit all of the meeting's attendees. Start and end on same day: A meeting should not be scheduled over multiple days. Medium constraints: Preferred attendance: A person cannot have two preferred meetings at the same time, nor a preferred and a required meeting at the same time. Soft constraints: Sooner rather than later: Schedule all meetings as soon as possible. A break between meetings: Any two meetings should have at least one time grain break between them. Overlapping meetings: Minimize the number of meetings that run in parallel so that people do not have to choose one meeting over the other. Assign larger rooms first: If a larger room is available, any meeting should be assigned to that room in order to accommodate as many people as possible, even if they have not signed up to that meeting. Room stability: If a person has two consecutive meetings with a break of two time grains or fewer between them, the meetings should be in the same room. Problem size 9.11. Course timetabling (ITC 2007 Track 3 - Curriculum Course Scheduling) Schedule each lecture into a timeslot and into a room. Hard constraints: Teacher conflict: A teacher must not have two lectures in the same period. Curriculum conflict: A curriculum must not have two lectures in the same period. Room occupancy: Two lectures must not be in the same room in the same period. Unavailable period (specified per dataset): A specific lecture must not be assigned to a specific period. Soft constraints: Room capacity: A room's capacity should not be less than the number of students in its lecture. Minimum working days: Lectures of the same course should be spread out into a minimum number of days. Curriculum compactness: Lectures belonging to the same curriculum should be adjacent to each other (so in consecutive periods). Room stability: Lectures of the same course should be assigned to the same room. The problem is defined by the International Timetabling Competition 2007 track 3. Problem size Figure 9.5. Domain model 9.12. Machine reassignment (Google ROADEF 2012) Assign each process to a machine. All processes already have an original (unoptimized) assignment. Each process requires an amount of each resource (such as CPU or RAM). This is a more complex version of the Cloud Balancing example. Hard constraints: Maximum capacity: The maximum capacity for each resource for each machine must not be exceeded. Conflict: Processes of the same service must run on distinct machines. Spread: Processes of the same service must be spread out across locations. Dependency: The processes of a service depending on another service must run in the neighborhood of a process of the other service.
Transient usage: Some resources are transient and count towards the maximum capacity of both the original machine and the newly assigned machine. Soft constraints: Load: The safety capacity for each resource for each machine should not be exceeded. Balance: Leave room for future assignments by balancing the available resources on each machine. Process move cost: A process has a move cost. Service move cost: A service has a move cost. Machine move cost: Moving a process from machine A to machine B has another A-B specific move cost. The problem is defined by the Google ROADEF/EURO Challenge 2012. Figure 9.6. Value proposition Problem size Figure 9.7. Domain model 9.13. Vehicle routing Using a fleet of vehicles, pick up the objects of each customer and bring them to the depot. Each vehicle can service multiple customers, but it has a limited capacity. Besides the basic case (CVRP), there is also a variant with time windows (CVRPTW). Hard constraints: Vehicle capacity: A vehicle cannot carry more items than its capacity. Time windows (only in CVRPTW): Travel time: Traveling from one location to another takes time. Customer service duration: A vehicle must stay at the customer for the length of the service duration. Customer ready time: A vehicle may arrive before the customer's ready time, but it must wait until the ready time before servicing. Customer due time: A vehicle must arrive on time, before the customer's due time. Soft constraints: Total distance: Minimize the total distance driven (fuel consumption) of all vehicles. The capacitated vehicle routing problem (CVRP) and its time-windowed variant (CVRPTW) are defined by the VRP web. Figure 9.8. Value proposition Problem size CVRP instances (without time windows): CVRPTW instances (with time windows): 9.13.1. Domain model for Vehicle routing The vehicle routing with time windows domain model makes heavy use of the shadow variable feature. This allows it to express its constraints more naturally, because properties such as arrivalTime and departureTime are directly available on the domain model. Road Distances Instead of Air Distances In the real world, vehicles cannot follow a straight line from location to location: they have to use roads and highways. From a business point of view, this matters a lot. For the optimization algorithm, this does not matter much, as long as the distance between two points can be looked up (and is preferably precalculated). The road cost does not even need to be a distance. It can also be travel time, fuel cost, or a weighted function of those. There are several technologies available to precalculate road costs, such as GraphHopper (embeddable, offline Java engine), Open MapQuest (web service) and Google Maps Client API (web service). There are also several technologies to render the routes, such as Leaflet and Google Maps for developers. It is even possible to render the actual road routes with GraphHopper or Google Map Directions, but because of route overlaps on highways, it can become harder to see the standstill order. Take special care that the road costs between two points use the same optimization criteria as the one used in OptaPlanner. For example, GraphHopper will by default return the fastest route, not the shortest route. Do not use the km (or miles) distances of the fastest GPS routes to optimize the shortest trip in OptaPlanner: this leads to a suboptimal solution. Contrary to popular belief, most users do not want the shortest route: they want the fastest route instead.
They prefer highways over normal roads. They prefer normal roads over dirt roads. In the real world, the fastest and shortest route are rarely the same. 9.14. Project job scheduling Schedule all jobs in time and execution mode to minimize project delays. Each job is part of a project. A job can be executed in different ways: each way is an execution mode that implies a different duration but also different resource usages. This is a form of flexible job shop scheduling . Hard constraints: Job precedence: a job can only start when all its predecessor jobs are finished. Resource capacity: do not use more resources than available. Resources are local (shared between jobs of the same project) or global (shared between all jobs) Resources are renewable (capacity available per day) or nonrenewable (capacity available for all days) Medium constraints: Total project delay: minimize the duration (makespan) of each project. Soft constraints: Total makespan: minimize the duration of the whole multi-project schedule. The problem is defined by the MISTA 2013 challenge . Problem size 9.15. Task assigning Assign each task to a spot in an employee's queue. Each task has a duration which is affected by the employee's affinity level with the task's customer. Hard constraints: Skill: Each task requires one or more skills. The employee must possess all these skills. Soft level 0 constraints: Critical tasks: Complete critical tasks first, sooner than major and minor tasks. Soft level 1 constraints: Minimize makespan: Reduce the time to complete all tasks. Start with the longest working employee first, then the second longest working employee and so forth, to create fairness and load balancing. Soft level 2 constraints: Major tasks: Complete major tasks as soon as possible, sooner than minor tasks. Soft level 3 constraints: Minor tasks: Complete minor tasks as soon as possible. Figure 9.9. Value proposition Problem size Figure 9.10. Domain model 9.16. Exam timetabling (ITC 2007 track 1 - Examination) Schedule each exam into a period and into a room. Multiple exams can share the same room during the same period. Hard constraints: Exam conflict: Two exams that share students must not occur in the same period. Room capacity: A room's seating capacity must suffice at all times. Period duration: A period's duration must suffice for all of its exams. Period related hard constraints (specified per dataset): Coincidence: Two specified exams must use the same period (but possibly another room). Exclusion: Two specified exams must not use the same period. After: A specified exam must occur in a period after another specified exam's period. Room related hard constraints (specified per dataset): Exclusive: One specified exam should not have to share its room with any other exam. Soft constraints (each of which has a parametrized penalty): The same student should not have two exams in a row. The same student should not have two exams on the same day. Period spread: Two exams that share students should be a number of periods apart. Mixed durations: Two exams that share a room should not have different durations. Front load: Large exams should be scheduled earlier in the schedule. Period penalty (specified per dataset): Some periods have a penalty when used. Room penalty (specified per dataset): Some rooms have a penalty when used. It uses large test data sets of real-life universities. The problem is defined by the International Timetabling Competition 2007 track 1 . 
Geoffrey De Smet finished 4th in that competition with a very early version of OptaPlanner. Many improvements have been made since then. Problem size 9.16.1. Domain model for exam timetabling The following diagram shows the main examination domain classes: Figure 9.11. Examination domain class diagram Notice that we've split up the exam concept into an Exam class and a Topic class. The Exam instances change during solving (this is the planning entity class), when their period or room property changes. The Topic, Period and Room instances never change during solving (these are problem facts, just like some other classes). 9.17. Nurse rostering (INRC 2010) For each shift, assign a nurse to work that shift. Hard constraints: No unassigned shifts (built-in): Every shift needs to be assigned to an employee. Shift conflict: An employee can have only one shift per day. Soft constraints: Contract obligations. The business frequently violates these, so they are defined as soft constraints instead of hard constraints. Minimum and maximum assignments: Each employee needs to work more than x shifts and less than y shifts (depending on their contract). Minimum and maximum consecutive working days: Each employee needs to work between x and y days in a row (depending on their contract). Minimum and maximum consecutive free days: Each employee needs to be free between x and y days in a row (depending on their contract). Minimum and maximum consecutive working weekends: Each employee needs to work between x and y weekends in a row (depending on their contract). Complete weekends: Each employee needs to work every day in a weekend or not at all. Identical shift types during weekend: Each weekend shift for the same weekend of the same employee must be the same shift type. Unwanted patterns: A combination of unwanted shift types in a row, for example a late shift followed by an early shift followed by a late shift. Employee wishes: Day on request: An employee wants to work on a specific day. Day off request: An employee does not want to work on a specific day. Shift on request: An employee wants to be assigned to a specific shift. Shift off request: An employee does not want to be assigned to a specific shift. Alternative skill: An employee assigned to a shift should have a proficiency in every skill required by that shift. The problem is defined by the International Nurse Rostering Competition 2010. Figure 9.12. Value proposition Problem size There are three dataset types: Sprint: must be solved in seconds. Medium: must be solved in minutes. Long: must be solved in hours. Figure 9.13. Domain model 9.18. Traveling tournament problem (TTP) Schedule matches between n teams. Hard constraints: Each team plays twice against every other team: once home and once away. Each team has exactly one match in each timeslot. No team must have more than three consecutive home or three consecutive away matches. No repeaters: no two consecutive matches of the same two opposing teams. Soft constraints: Minimize the total distance traveled by all teams. The problem is defined on Michael Trick's website (which contains the world records too). Problem size 9.19. Cheap time scheduling Schedule all tasks in time and on a machine to minimize power cost. Power prices differ over time. This is a form of job shop scheduling. Hard constraints: Start time limits: Each task must start between its earliest start and latest start limit.
Maximum capacity: The maximum capacity for each resource for each machine must not be exceeded. Startup and shutdown: Each machine must be active in the periods during which it has assigned tasks. Between tasks it is allowed to be idle to avoid startup and shutdown costs. Medium constraints: Power cost: Minimize the total power cost of the whole schedule. Machine power cost: Each active or idle machine consumes power, which infers a power cost (depending on the power price during that time). Task power cost: Each task consumes power too, which infers a power cost (depending on the power price during its time). Machine startup and shutdown cost: Every time a machine starts up or shuts down, an extra cost is incurred. Soft constraints (addendum to the original problem definition): Start early: Prefer starting a task sooner rather than later. The problem is defined by the ICON challenge . Problem size 9.20. Investment asset class allocation (Portfolio Optimization) Decide the relative quantity to invest in each asset class. Hard constraints: Risk maximum: the total standard deviation must not be higher than the standard deviation maximum. Total standard deviation calculation takes asset class correlations into account by applying Markowitz Portfolio Theory . Region maximum: Each region has a quantity maximum. Sector maximum: Each sector has a quantity maximum. Soft constraints: Maximize expected return. Problem size Larger datasets have not been created or tested yet, but should not pose a problem. A good source of data is this Asset Correlation website . 9.21. Conference scheduling Assign each conference talk to a timeslot and a room. Timeslots can overlap. Read and write to and from an *.xlsx file that can be edited with LibreOffice or Excel. Hard constraints: Talk type of timeslot: The type of a talk must match the timeslot's talk type. Room unavailable timeslots: A talk's room must be available during the talk's timeslot. Room conflict: Two talks can't use the same room during overlapping timeslots. Speaker unavailable timeslots: Every talk's speaker must be available during the talk's timeslot. Speaker conflict: Two talks can't share a speaker during overlapping timeslots. Generic purpose timeslot and room tags: Speaker required timeslot tag: If a speaker has a required timeslot tag, then all of his or her talks must be assigned to a timeslot with that tag. Speaker prohibited timeslot tag: If a speaker has a prohibited timeslot tag, then all of his or her talks cannot be assigned to a timeslot with that tag. Talk required timeslot tag: If a talk has a required timeslot tag, then it must be assigned to a timeslot with that tag. Talk prohibited timeslot tag: If a talk has a prohibited timeslot tag, then it cannot be assigned to a timeslot with that tag. Speaker required room tag: If a speaker has a required room tag, then all of his or her talks must be assigned to a room with that tag. Speaker prohibited room tag: If a speaker has a prohibited room tag, then all of his or her talks cannot be assigned to a room with that tag. Talk required room tag: If a talk has a required room tag, then it must be assigned to a room with that tag. Talk prohibited room tag: If a talk has a prohibited room tag, then it cannot be assigned to a room with that tag. Talk mutually-exclusive-talks tag: Talks that share such a tag must not be scheduled in overlapping timeslots. Talk prerequisite talks: A talk must be scheduled after all its prerequisite talks. 
Soft constraints: Theme track conflict: Minimize the number of talks that share a theme tag during overlapping timeslots. Sector conflict: Minimize the number of talks that share the same sector tag during overlapping timeslots. Content audience level flow violation: For every content tag, schedule the introductory talks before the advanced talks. Audience level diversity: For every timeslot, maximize the number of talks with a different audience level. Language diversity: For every timeslot, maximize the number of talks with a different language. Generic purpose timeslot and room tags: Speaker preferred timeslot tag: If a speaker has a preferred timeslot tag, then all of his or her talks should be assigned to a timeslot with that tag. Speaker undesired timeslot tag: If a speaker has an undesired timeslot tag, then none of his or her talks should be assigned to a timeslot with that tag. Talk preferred timeslot tag: If a talk has a preferred timeslot tag, then it should be assigned to a timeslot with that tag. Talk undesired timeslot tag: If a talk has an undesired timeslot tag, then it should not be assigned to a timeslot with that tag. Speaker preferred room tag: If a speaker has a preferred room tag, then all of his or her talks should be assigned to a room with that tag. Speaker undesired room tag: If a speaker has an undesired room tag, then none of his or her talks should be assigned to a room with that tag. Talk preferred room tag: If a talk has a preferred room tag, then it should be assigned to a room with that tag. Talk undesired room tag: If a talk has an undesired room tag, then it should not be assigned to a room with that tag. Same day talks: All talks that share a theme tag or content tag should be scheduled in the minimum number of days (ideally on the same day). Figure 9.14. Value proposition Problem size 9.22. Rock tour Drive the rock band bus from show to show, but schedule shows only on available days. Hard constraints: Schedule every required show. Schedule as many shows as possible. Medium constraints: Maximize revenue opportunity. Minimize driving time. Visit sooner rather than later. Soft constraints: Avoid long driving times. Problem size 9.23. Flight crew scheduling Assign flights to pilots and flight attendants. Hard constraints: Required skill: Each flight assignment has a required skill. For example, flight AB0001 requires 2 pilots and 3 flight attendants. Flight conflict: Each employee can only attend one flight at the same time. Transfer between two flights: Between two flights, an employee must be able to transfer from the arrival airport to the departure airport. For example, Ann arrives in Brussels at 10:00 and departs from Amsterdam at 15:00. Employee unavailability: The employee must be available on the day of the flight. For example, Ann is on PTO on 1-Feb. Soft constraints: First assignment departing from home. Last assignment arriving at home. Load balance the total flight duration per employee. Problem size
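As noted in Section 9.3, switching optimization algorithms is a matter of editing the solver configuration rather than the domain code. The following minimal sketch shows the surrounding Java boilerplate for building and running a Solver against the n queens domain model from Section 9.6.1; the configuration resource path and the readProblem() helper are hypothetical placeholders, not code shipped with the examples.

```java
import org.optaplanner.core.api.solver.Solver;
import org.optaplanner.core.api.solver.SolverFactory;

public class NQueensSolverRunner {

    public static void main(String[] args) {
        // The XML resource selects the score calculator, termination, and optimization
        // algorithms; swapping algorithms means editing that file, not this code.
        SolverFactory<NQueens> solverFactory = SolverFactory.createFromXmlResource(
                "org/example/nqueens/nqueensSolverConfig.xml"); // hypothetical path
        Solver<NQueens> solver = solverFactory.buildSolver();

        NQueens problem = readProblem(); // load or build an unsolved data set (not shown)
        NQueens bestSolutionFound = solver.solve(problem);

        System.out.println("Best score found: " + bestSolutionFound.getScore());
    }

    private static NQueens readProblem() {
        // Placeholder: the real examples read their data sets from the distribution's data directory.
        throw new UnsupportedOperationException("Problem loading is not shown in this sketch");
    }
}
```

Because solving can otherwise run for a very long time, real applications configure a termination (for example, a spent-time limit) in the same solver configuration file.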
[ "4queens has 4 queens with a search space of 256. 8queens has 8 queens with a search space of 10^7. 16queens has 16 queens with a search space of 10^19. 32queens has 32 queens with a search space of 10^48. 64queens has 64 queens with a search space of 10^115. 256queens has 256 queens with a search space of 10^616.", "public class Column { private int index; // ... getters and setters }", "public class Row { private int index; // ... getters and setters }", "public class Queen { private Column column; private Row row; public int getAscendingDiagonalIndex() {...} public int getDescendingDiagonalIndex() {...} // ... getters and setters }", "public class NQueens { private int n; private List<Column> columnList; private List<Row> rowList; private List<Queen> queenList; private SimpleScore score; // ... getters and setters }", "dj38 has 38 cities with a search space of 10^43. europe40 has 40 cities with a search space of 10^46. st70 has 70 cities with a search space of 10^98. pcb442 has 442 cities with a search space of 10^976. lu980 has 980 cities with a search space of 10^2504.", "munich-7teams has 7 teams, 18 days, 12 unavailabilityPenalties and 72 teamAssignments with a search space of 10^60.", "50meetings-160timegrains-5rooms has 50 meetings, 160 timeGrains and 5 rooms with a search space of 10^145. 100meetings-320timegrains-5rooms has 100 meetings, 320 timeGrains and 5 rooms with a search space of 10^320. 200meetings-640timegrains-5rooms has 200 meetings, 640 timeGrains and 5 rooms with a search space of 10^701. 400meetings-1280timegrains-5rooms has 400 meetings, 1280 timeGrains and 5 rooms with a search space of 10^1522. 800meetings-2560timegrains-5rooms has 800 meetings, 2560 timeGrains and 5 rooms with a search space of 10^3285.", "comp01 has 24 teachers, 14 curricula, 30 courses, 160 lectures, 30 periods, 6 rooms and 53 unavailable period constraints with a search space of 10^360. comp02 has 71 teachers, 70 curricula, 82 courses, 283 lectures, 25 periods, 16 rooms and 513 unavailable period constraints with a search space of 10^736. comp03 has 61 teachers, 68 curricula, 72 courses, 251 lectures, 25 periods, 16 rooms and 382 unavailable period constraints with a search space of 10^653. comp04 has 70 teachers, 57 curricula, 79 courses, 286 lectures, 25 periods, 18 rooms and 396 unavailable period constraints with a search space of 10^758. comp05 has 47 teachers, 139 curricula, 54 courses, 152 lectures, 36 periods, 9 rooms and 771 unavailable period constraints with a search space of 10^381. comp06 has 87 teachers, 70 curricula, 108 courses, 361 lectures, 25 periods, 18 rooms and 632 unavailable period constraints with a search space of 10^957. comp07 has 99 teachers, 77 curricula, 131 courses, 434 lectures, 25 periods, 20 rooms and 667 unavailable period constraints with a search space of 10^1171. comp08 has 76 teachers, 61 curricula, 86 courses, 324 lectures, 25 periods, 18 rooms and 478 unavailable period constraints with a search space of 10^859. comp09 has 68 teachers, 75 curricula, 76 courses, 279 lectures, 25 periods, 18 rooms and 405 unavailable period constraints with a search space of 10^740. comp10 has 88 teachers, 67 curricula, 115 courses, 370 lectures, 25 periods, 18 rooms and 694 unavailable period constraints with a search space of 10^981. comp11 has 24 teachers, 13 curricula, 30 courses, 162 lectures, 45 periods, 5 rooms and 94 unavailable period constraints with a search space of 10^381. 
comp12 has 74 teachers, 150 curricula, 88 courses, 218 lectures, 36 periods, 11 rooms and 1368 unavailable period constraints with a search space of 10^566. comp13 has 77 teachers, 66 curricula, 82 courses, 308 lectures, 25 periods, 19 rooms and 468 unavailable period constraints with a search space of 10^824. comp14 has 68 teachers, 60 curricula, 85 courses, 275 lectures, 25 periods, 17 rooms and 486 unavailable period constraints with a search space of 10^722.", "model_a1_1 has 2 resources, 1 neighborhoods, 4 locations, 4 machines, 79 services, 100 processes and 1 balancePenalties with a search space of 10^60. model_a1_2 has 4 resources, 2 neighborhoods, 4 locations, 100 machines, 980 services, 1000 processes and 0 balancePenalties with a search space of 10^2000. model_a1_3 has 3 resources, 5 neighborhoods, 25 locations, 100 machines, 216 services, 1000 processes and 0 balancePenalties with a search space of 10^2000. model_a1_4 has 3 resources, 50 neighborhoods, 50 locations, 50 machines, 142 services, 1000 processes and 1 balancePenalties with a search space of 10^1698. model_a1_5 has 4 resources, 2 neighborhoods, 4 locations, 12 machines, 981 services, 1000 processes and 1 balancePenalties with a search space of 10^1079. model_a2_1 has 3 resources, 1 neighborhoods, 1 locations, 100 machines, 1000 services, 1000 processes and 0 balancePenalties with a search space of 10^2000. model_a2_2 has 12 resources, 5 neighborhoods, 25 locations, 100 machines, 170 services, 1000 processes and 0 balancePenalties with a search space of 10^2000. model_a2_3 has 12 resources, 5 neighborhoods, 25 locations, 100 machines, 129 services, 1000 processes and 0 balancePenalties with a search space of 10^2000. model_a2_4 has 12 resources, 5 neighborhoods, 25 locations, 50 machines, 180 services, 1000 processes and 1 balancePenalties with a search space of 10^1698. model_a2_5 has 12 resources, 5 neighborhoods, 25 locations, 50 machines, 153 services, 1000 processes and 0 balancePenalties with a search space of 10^1698. model_b_1 has 12 resources, 5 neighborhoods, 10 locations, 100 machines, 2512 services, 5000 processes and 0 balancePenalties with a search space of 10^10000. model_b_2 has 12 resources, 5 neighborhoods, 10 locations, 100 machines, 2462 services, 5000 processes and 1 balancePenalties with a search space of 10^10000. model_b_3 has 6 resources, 5 neighborhoods, 10 locations, 100 machines, 15025 services, 20000 processes and 0 balancePenalties with a search space of 10^40000. model_b_4 has 6 resources, 5 neighborhoods, 50 locations, 500 machines, 1732 services, 20000 processes and 1 balancePenalties with a search space of 10^53979. model_b_5 has 6 resources, 5 neighborhoods, 10 locations, 100 machines, 35082 services, 40000 processes and 0 balancePenalties with a search space of 10^80000. model_b_6 has 6 resources, 5 neighborhoods, 50 locations, 200 machines, 14680 services, 40000 processes and 1 balancePenalties with a search space of 10^92041. model_b_7 has 6 resources, 5 neighborhoods, 50 locations, 4000 machines, 15050 services, 40000 processes and 1 balancePenalties with a search space of 10^144082. model_b_8 has 3 resources, 5 neighborhoods, 10 locations, 100 machines, 45030 services, 50000 processes and 0 balancePenalties with a search space of 10^100000. model_b_9 has 3 resources, 5 neighborhoods, 100 locations, 1000 machines, 4609 services, 50000 processes and 1 balancePenalties with a search space of 10^150000. 
model_b_10 has 3 resources, 5 neighborhoods, 100 locations, 5000 machines, 4896 services, 50000 processes and 1 balancePenalties with a search space of 10^184948.", "belgium-n50-k10 has 1 depots, 10 vehicles and 49 customers with a search space of 10^74. belgium-n100-k10 has 1 depots, 10 vehicles and 99 customers with a search space of 10^170. belgium-n500-k20 has 1 depots, 20 vehicles and 499 customers with a search space of 10^1168. belgium-n1000-k20 has 1 depots, 20 vehicles and 999 customers with a search space of 10^2607. belgium-n2750-k55 has 1 depots, 55 vehicles and 2749 customers with a search space of 10^8380. belgium-road-km-n50-k10 has 1 depots, 10 vehicles and 49 customers with a search space of 10^74. belgium-road-km-n100-k10 has 1 depots, 10 vehicles and 99 customers with a search space of 10^170. belgium-road-km-n500-k20 has 1 depots, 20 vehicles and 499 customers with a search space of 10^1168. belgium-road-km-n1000-k20 has 1 depots, 20 vehicles and 999 customers with a search space of 10^2607. belgium-road-km-n2750-k55 has 1 depots, 55 vehicles and 2749 customers with a search space of 10^8380. belgium-road-time-n50-k10 has 1 depots, 10 vehicles and 49 customers with a search space of 10^74. belgium-road-time-n100-k10 has 1 depots, 10 vehicles and 99 customers with a search space of 10^170. belgium-road-time-n500-k20 has 1 depots, 20 vehicles and 499 customers with a search space of 10^1168. belgium-road-time-n1000-k20 has 1 depots, 20 vehicles and 999 customers with a search space of 10^2607. belgium-road-time-n2750-k55 has 1 depots, 55 vehicles and 2749 customers with a search space of 10^8380. belgium-d2-n50-k10 has 2 depots, 10 vehicles and 48 customers with a search space of 10^74. belgium-d3-n100-k10 has 3 depots, 10 vehicles and 97 customers with a search space of 10^170. belgium-d5-n500-k20 has 5 depots, 20 vehicles and 495 customers with a search space of 10^1168. belgium-d8-n1000-k20 has 8 depots, 20 vehicles and 992 customers with a search space of 10^2607. belgium-d10-n2750-k55 has 10 depots, 55 vehicles and 2740 customers with a search space of 10^8380. A-n32-k5 has 1 depots, 5 vehicles and 31 customers with a search space of 10^40. A-n33-k5 has 1 depots, 5 vehicles and 32 customers with a search space of 10^41. A-n33-k6 has 1 depots, 6 vehicles and 32 customers with a search space of 10^42. A-n34-k5 has 1 depots, 5 vehicles and 33 customers with a search space of 10^43. A-n36-k5 has 1 depots, 5 vehicles and 35 customers with a search space of 10^46. A-n37-k5 has 1 depots, 5 vehicles and 36 customers with a search space of 10^48. A-n37-k6 has 1 depots, 6 vehicles and 36 customers with a search space of 10^49. A-n38-k5 has 1 depots, 5 vehicles and 37 customers with a search space of 10^49. A-n39-k5 has 1 depots, 5 vehicles and 38 customers with a search space of 10^51. A-n39-k6 has 1 depots, 6 vehicles and 38 customers with a search space of 10^52. A-n44-k7 has 1 depots, 7 vehicles and 43 customers with a search space of 10^61. A-n45-k6 has 1 depots, 6 vehicles and 44 customers with a search space of 10^62. A-n45-k7 has 1 depots, 7 vehicles and 44 customers with a search space of 10^63. A-n46-k7 has 1 depots, 7 vehicles and 45 customers with a search space of 10^65. A-n48-k7 has 1 depots, 7 vehicles and 47 customers with a search space of 10^68. A-n53-k7 has 1 depots, 7 vehicles and 52 customers with a search space of 10^77. A-n54-k7 has 1 depots, 7 vehicles and 53 customers with a search space of 10^79. 
A-n55-k9 has 1 depots, 9 vehicles and 54 customers with a search space of 10^82. A-n60-k9 has 1 depots, 9 vehicles and 59 customers with a search space of 10^91. A-n61-k9 has 1 depots, 9 vehicles and 60 customers with a search space of 10^93. A-n62-k8 has 1 depots, 8 vehicles and 61 customers with a search space of 10^94. A-n63-k9 has 1 depots, 9 vehicles and 62 customers with a search space of 10^97. A-n63-k10 has 1 depots, 10 vehicles and 62 customers with a search space of 10^98. A-n64-k9 has 1 depots, 9 vehicles and 63 customers with a search space of 10^99. A-n65-k9 has 1 depots, 9 vehicles and 64 customers with a search space of 10^101. A-n69-k9 has 1 depots, 9 vehicles and 68 customers with a search space of 10^108. A-n80-k10 has 1 depots, 10 vehicles and 79 customers with a search space of 10^130. F-n45-k4 has 1 depots, 4 vehicles and 44 customers with a search space of 10^60. F-n72-k4 has 1 depots, 4 vehicles and 71 customers with a search space of 10^108. F-n135-k7 has 1 depots, 7 vehicles and 134 customers with a search space of 10^240.", "belgium-tw-d2-n50-k10 has 2 depots, 10 vehicles and 48 customers with a search space of 10^74. belgium-tw-d3-n100-k10 has 3 depots, 10 vehicles and 97 customers with a search space of 10^170. belgium-tw-d5-n500-k20 has 5 depots, 20 vehicles and 495 customers with a search space of 10^1168. belgium-tw-d8-n1000-k20 has 8 depots, 20 vehicles and 992 customers with a search space of 10^2607. belgium-tw-d10-n2750-k55 has 10 depots, 55 vehicles and 2740 customers with a search space of 10^8380. belgium-tw-n50-k10 has 1 depots, 10 vehicles and 49 customers with a search space of 10^74. belgium-tw-n100-k10 has 1 depots, 10 vehicles and 99 customers with a search space of 10^170. belgium-tw-n500-k20 has 1 depots, 20 vehicles and 499 customers with a search space of 10^1168. belgium-tw-n1000-k20 has 1 depots, 20 vehicles and 999 customers with a search space of 10^2607. belgium-tw-n2750-k55 has 1 depots, 55 vehicles and 2749 customers with a search space of 10^8380. Solomon_025_C101 has 1 depots, 25 vehicles and 25 customers with a search space of 10^40. Solomon_025_C201 has 1 depots, 25 vehicles and 25 customers with a search space of 10^40. Solomon_025_R101 has 1 depots, 25 vehicles and 25 customers with a search space of 10^40. Solomon_025_R201 has 1 depots, 25 vehicles and 25 customers with a search space of 10^40. Solomon_025_RC101 has 1 depots, 25 vehicles and 25 customers with a search space of 10^40. Solomon_025_RC201 has 1 depots, 25 vehicles and 25 customers with a search space of 10^40. Solomon_100_C101 has 1 depots, 25 vehicles and 100 customers with a search space of 10^185. Solomon_100_C201 has 1 depots, 25 vehicles and 100 customers with a search space of 10^185. Solomon_100_R101 has 1 depots, 25 vehicles and 100 customers with a search space of 10^185. Solomon_100_R201 has 1 depots, 25 vehicles and 100 customers with a search space of 10^185. Solomon_100_RC101 has 1 depots, 25 vehicles and 100 customers with a search space of 10^185. Solomon_100_RC201 has 1 depots, 25 vehicles and 100 customers with a search space of 10^185. Homberger_0200_C1_2_1 has 1 depots, 50 vehicles and 200 customers with a search space of 10^429. Homberger_0200_C2_2_1 has 1 depots, 50 vehicles and 200 customers with a search space of 10^429. Homberger_0200_R1_2_1 has 1 depots, 50 vehicles and 200 customers with a search space of 10^429. Homberger_0200_R2_2_1 has 1 depots, 50 vehicles and 200 customers with a search space of 10^429. 
Homberger_0200_RC1_2_1 has 1 depots, 50 vehicles and 200 customers with a search space of 10^429. Homberger_0200_RC2_2_1 has 1 depots, 50 vehicles and 200 customers with a search space of 10^429. Homberger_0400_C1_4_1 has 1 depots, 100 vehicles and 400 customers with a search space of 10^978. Homberger_0400_C2_4_1 has 1 depots, 100 vehicles and 400 customers with a search space of 10^978. Homberger_0400_R1_4_1 has 1 depots, 100 vehicles and 400 customers with a search space of 10^978. Homberger_0400_R2_4_1 has 1 depots, 100 vehicles and 400 customers with a search space of 10^978. Homberger_0400_RC1_4_1 has 1 depots, 100 vehicles and 400 customers with a search space of 10^978. Homberger_0400_RC2_4_1 has 1 depots, 100 vehicles and 400 customers with a search space of 10^978. Homberger_0600_C1_6_1 has 1 depots, 150 vehicles and 600 customers with a search space of 10^1571. Homberger_0600_C2_6_1 has 1 depots, 150 vehicles and 600 customers with a search space of 10^1571. Homberger_0600_R1_6_1 has 1 depots, 150 vehicles and 600 customers with a search space of 10^1571. Homberger_0600_R2_6_1 has 1 depots, 150 vehicles and 600 customers with a search space of 10^1571. Homberger_0600_RC1_6_1 has 1 depots, 150 vehicles and 600 customers with a search space of 10^1571. Homberger_0600_RC2_6_1 has 1 depots, 150 vehicles and 600 customers with a search space of 10^1571. Homberger_0800_C1_8_1 has 1 depots, 200 vehicles and 800 customers with a search space of 10^2195. Homberger_0800_C2_8_1 has 1 depots, 200 vehicles and 800 customers with a search space of 10^2195. Homberger_0800_R1_8_1 has 1 depots, 200 vehicles and 800 customers with a search space of 10^2195. Homberger_0800_R2_8_1 has 1 depots, 200 vehicles and 800 customers with a search space of 10^2195. Homberger_0800_RC1_8_1 has 1 depots, 200 vehicles and 800 customers with a search space of 10^2195. Homberger_0800_RC2_8_1 has 1 depots, 200 vehicles and 800 customers with a search space of 10^2195. Homberger_1000_C110_1 has 1 depots, 250 vehicles and 1000 customers with a search space of 10^2840. Homberger_1000_C210_1 has 1 depots, 250 vehicles and 1000 customers with a search space of 10^2840. Homberger_1000_R110_1 has 1 depots, 250 vehicles and 1000 customers with a search space of 10^2840. Homberger_1000_R210_1 has 1 depots, 250 vehicles and 1000 customers with a search space of 10^2840. Homberger_1000_RC110_1 has 1 depots, 250 vehicles and 1000 customers with a search space of 10^2840. Homberger_1000_RC210_1 has 1 depots, 250 vehicles and 1000 customers with a search space of 10^2840.", "Schedule A-1 has 2 projects, 24 jobs, 64 execution modes, 7 resources and 150 resource requirements. Schedule A-2 has 2 projects, 44 jobs, 124 execution modes, 7 resources and 420 resource requirements. Schedule A-3 has 2 projects, 64 jobs, 184 execution modes, 7 resources and 630 resource requirements. Schedule A-4 has 5 projects, 60 jobs, 160 execution modes, 16 resources and 390 resource requirements. Schedule A-5 has 5 projects, 110 jobs, 310 execution modes, 16 resources and 900 resource requirements. Schedule A-6 has 5 projects, 160 jobs, 460 execution modes, 16 resources and 1440 resource requirements. Schedule A-7 has 10 projects, 120 jobs, 320 execution modes, 22 resources and 900 resource requirements. Schedule A-8 has 10 projects, 220 jobs, 620 execution modes, 22 resources and 1860 resource requirements. Schedule A-9 has 10 projects, 320 jobs, 920 execution modes, 31 resources and 2880 resource requirements. 
Schedule A-10 has 10 projects, 320 jobs, 920 execution modes, 31 resources and 2970 resource requirements. Schedule B-1 has 10 projects, 120 jobs, 320 execution modes, 31 resources and 900 resource requirements. Schedule B-2 has 10 projects, 220 jobs, 620 execution modes, 22 resources and 1740 resource requirements. Schedule B-3 has 10 projects, 320 jobs, 920 execution modes, 31 resources and 3060 resource requirements. Schedule B-4 has 15 projects, 180 jobs, 480 execution modes, 46 resources and 1530 resource requirements. Schedule B-5 has 15 projects, 330 jobs, 930 execution modes, 46 resources and 2760 resource requirements. Schedule B-6 has 15 projects, 480 jobs, 1380 execution modes, 46 resources and 4500 resource requirements. Schedule B-7 has 20 projects, 240 jobs, 640 execution modes, 61 resources and 1710 resource requirements. Schedule B-8 has 20 projects, 440 jobs, 1240 execution modes, 42 resources and 3180 resource requirements. Schedule B-9 has 20 projects, 640 jobs, 1840 execution modes, 61 resources and 5940 resource requirements. Schedule B-10 has 20 projects, 460 jobs, 1300 execution modes, 42 resources and 4260 resource requirements.", "24tasks-8employees has 24 tasks, 6 skills, 8 employees, 4 task types and 4 customers with a search space of 10^30. 50tasks-5employees has 50 tasks, 5 skills, 5 employees, 10 task types and 10 customers with a search space of 10^69. 100tasks-5employees has 100 tasks, 5 skills, 5 employees, 20 task types and 15 customers with a search space of 10^164. 500tasks-20employees has 500 tasks, 6 skills, 20 employees, 100 task types and 60 customers with a search space of 10^1168.", "exam_comp_set1 has 7883 students, 607 exams, 54 periods, 7 rooms, 12 period constraints and 0 room constraints with a search space of 10^1564. exam_comp_set2 has 12484 students, 870 exams, 40 periods, 49 rooms, 12 period constraints and 2 room constraints with a search space of 10^2864. exam_comp_set3 has 16365 students, 934 exams, 36 periods, 48 rooms, 168 period constraints and 15 room constraints with a search space of 10^3023. exam_comp_set4 has 4421 students, 273 exams, 21 periods, 1 rooms, 40 period constraints and 0 room constraints with a search space of 10^360. exam_comp_set5 has 8719 students, 1018 exams, 42 periods, 3 rooms, 27 period constraints and 0 room constraints with a search space of 10^2138. exam_comp_set6 has 7909 students, 242 exams, 16 periods, 8 rooms, 22 period constraints and 0 room constraints with a search space of 10^509. exam_comp_set7 has 13795 students, 1096 exams, 80 periods, 15 rooms, 28 period constraints and 0 room constraints with a search space of 10^3374. exam_comp_set8 has 7718 students, 598 exams, 80 periods, 8 rooms, 20 period constraints and 1 room constraints with a search space of 10^1678.", "toy1 has 1 skills, 3 shiftTypes, 2 patterns, 1 contracts, 6 employees, 7 shiftDates, 35 shiftAssignments and 0 requests with a search space of 10^27. toy2 has 1 skills, 3 shiftTypes, 3 patterns, 2 contracts, 20 employees, 28 shiftDates, 180 shiftAssignments and 140 requests with a search space of 10^234. sprint01 has 1 skills, 4 shiftTypes, 3 patterns, 4 contracts, 10 employees, 28 shiftDates, 152 shiftAssignments and 150 requests with a search space of 10^152. sprint02 has 1 skills, 4 shiftTypes, 3 patterns, 4 contracts, 10 employees, 28 shiftDates, 152 shiftAssignments and 150 requests with a search space of 10^152. 
sprint03 has 1 skills, 4 shiftTypes, 3 patterns, 4 contracts, 10 employees, 28 shiftDates, 152 shiftAssignments and 150 requests with a search space of 10^152. sprint04 has 1 skills, 4 shiftTypes, 3 patterns, 4 contracts, 10 employees, 28 shiftDates, 152 shiftAssignments and 150 requests with a search space of 10^152. sprint05 has 1 skills, 4 shiftTypes, 3 patterns, 4 contracts, 10 employees, 28 shiftDates, 152 shiftAssignments and 150 requests with a search space of 10^152. sprint06 has 1 skills, 4 shiftTypes, 3 patterns, 4 contracts, 10 employees, 28 shiftDates, 152 shiftAssignments and 150 requests with a search space of 10^152. sprint07 has 1 skills, 4 shiftTypes, 3 patterns, 4 contracts, 10 employees, 28 shiftDates, 152 shiftAssignments and 150 requests with a search space of 10^152. sprint08 has 1 skills, 4 shiftTypes, 3 patterns, 4 contracts, 10 employees, 28 shiftDates, 152 shiftAssignments and 150 requests with a search space of 10^152. sprint09 has 1 skills, 4 shiftTypes, 3 patterns, 4 contracts, 10 employees, 28 shiftDates, 152 shiftAssignments and 150 requests with a search space of 10^152. sprint10 has 1 skills, 4 shiftTypes, 3 patterns, 4 contracts, 10 employees, 28 shiftDates, 152 shiftAssignments and 150 requests with a search space of 10^152. sprint_hint01 has 1 skills, 4 shiftTypes, 8 patterns, 3 contracts, 10 employees, 28 shiftDates, 152 shiftAssignments and 150 requests with a search space of 10^152. sprint_hint02 has 1 skills, 4 shiftTypes, 0 patterns, 3 contracts, 10 employees, 28 shiftDates, 152 shiftAssignments and 150 requests with a search space of 10^152. sprint_hint03 has 1 skills, 4 shiftTypes, 8 patterns, 3 contracts, 10 employees, 28 shiftDates, 152 shiftAssignments and 150 requests with a search space of 10^152. sprint_late01 has 1 skills, 4 shiftTypes, 8 patterns, 3 contracts, 10 employees, 28 shiftDates, 152 shiftAssignments and 150 requests with a search space of 10^152. sprint_late02 has 1 skills, 3 shiftTypes, 4 patterns, 3 contracts, 10 employees, 28 shiftDates, 144 shiftAssignments and 139 requests with a search space of 10^144. sprint_late03 has 1 skills, 4 shiftTypes, 8 patterns, 3 contracts, 10 employees, 28 shiftDates, 160 shiftAssignments and 150 requests with a search space of 10^160. sprint_late04 has 1 skills, 4 shiftTypes, 8 patterns, 3 contracts, 10 employees, 28 shiftDates, 160 shiftAssignments and 150 requests with a search space of 10^160. sprint_late05 has 1 skills, 4 shiftTypes, 8 patterns, 3 contracts, 10 employees, 28 shiftDates, 152 shiftAssignments and 150 requests with a search space of 10^152. sprint_late06 has 1 skills, 4 shiftTypes, 0 patterns, 3 contracts, 10 employees, 28 shiftDates, 152 shiftAssignments and 150 requests with a search space of 10^152. sprint_late07 has 1 skills, 4 shiftTypes, 0 patterns, 3 contracts, 10 employees, 28 shiftDates, 152 shiftAssignments and 150 requests with a search space of 10^152. sprint_late08 has 1 skills, 4 shiftTypes, 0 patterns, 3 contracts, 10 employees, 28 shiftDates, 152 shiftAssignments and 0 requests with a search space of 10^152. sprint_late09 has 1 skills, 4 shiftTypes, 0 patterns, 3 contracts, 10 employees, 28 shiftDates, 152 shiftAssignments and 0 requests with a search space of 10^152. sprint_late10 has 1 skills, 4 shiftTypes, 0 patterns, 3 contracts, 10 employees, 28 shiftDates, 152 shiftAssignments and 150 requests with a search space of 10^152. 
medium01 has 1 skills, 4 shiftTypes, 0 patterns, 4 contracts, 31 employees, 28 shiftDates, 608 shiftAssignments and 403 requests with a search space of 10^906. medium02 has 1 skills, 4 shiftTypes, 0 patterns, 4 contracts, 31 employees, 28 shiftDates, 608 shiftAssignments and 403 requests with a search space of 10^906. medium03 has 1 skills, 4 shiftTypes, 0 patterns, 4 contracts, 31 employees, 28 shiftDates, 608 shiftAssignments and 403 requests with a search space of 10^906. medium04 has 1 skills, 4 shiftTypes, 0 patterns, 4 contracts, 31 employees, 28 shiftDates, 608 shiftAssignments and 403 requests with a search space of 10^906. medium05 has 1 skills, 4 shiftTypes, 0 patterns, 4 contracts, 31 employees, 28 shiftDates, 608 shiftAssignments and 403 requests with a search space of 10^906. medium_hint01 has 1 skills, 4 shiftTypes, 7 patterns, 4 contracts, 30 employees, 28 shiftDates, 428 shiftAssignments and 390 requests with a search space of 10^632. medium_hint02 has 1 skills, 4 shiftTypes, 7 patterns, 3 contracts, 30 employees, 28 shiftDates, 428 shiftAssignments and 390 requests with a search space of 10^632. medium_hint03 has 1 skills, 4 shiftTypes, 7 patterns, 4 contracts, 30 employees, 28 shiftDates, 428 shiftAssignments and 390 requests with a search space of 10^632. medium_late01 has 1 skills, 4 shiftTypes, 7 patterns, 4 contracts, 30 employees, 28 shiftDates, 424 shiftAssignments and 390 requests with a search space of 10^626. medium_late02 has 1 skills, 4 shiftTypes, 7 patterns, 3 contracts, 30 employees, 28 shiftDates, 428 shiftAssignments and 390 requests with a search space of 10^632. medium_late03 has 1 skills, 4 shiftTypes, 0 patterns, 4 contracts, 30 employees, 28 shiftDates, 428 shiftAssignments and 390 requests with a search space of 10^632. medium_late04 has 1 skills, 4 shiftTypes, 7 patterns, 3 contracts, 30 employees, 28 shiftDates, 416 shiftAssignments and 390 requests with a search space of 10^614. medium_late05 has 2 skills, 5 shiftTypes, 7 patterns, 4 contracts, 30 employees, 28 shiftDates, 452 shiftAssignments and 390 requests with a search space of 10^667. long01 has 2 skills, 5 shiftTypes, 3 patterns, 3 contracts, 49 employees, 28 shiftDates, 740 shiftAssignments and 735 requests with a search space of 10^1250. long02 has 2 skills, 5 shiftTypes, 3 patterns, 3 contracts, 49 employees, 28 shiftDates, 740 shiftAssignments and 735 requests with a search space of 10^1250. long03 has 2 skills, 5 shiftTypes, 3 patterns, 3 contracts, 49 employees, 28 shiftDates, 740 shiftAssignments and 735 requests with a search space of 10^1250. long04 has 2 skills, 5 shiftTypes, 3 patterns, 3 contracts, 49 employees, 28 shiftDates, 740 shiftAssignments and 735 requests with a search space of 10^1250. long05 has 2 skills, 5 shiftTypes, 3 patterns, 3 contracts, 49 employees, 28 shiftDates, 740 shiftAssignments and 735 requests with a search space of 10^1250. long_hint01 has 2 skills, 5 shiftTypes, 9 patterns, 3 contracts, 50 employees, 28 shiftDates, 740 shiftAssignments and 0 requests with a search space of 10^1257. long_hint02 has 2 skills, 5 shiftTypes, 7 patterns, 3 contracts, 50 employees, 28 shiftDates, 740 shiftAssignments and 0 requests with a search space of 10^1257. long_hint03 has 2 skills, 5 shiftTypes, 7 patterns, 3 contracts, 50 employees, 28 shiftDates, 740 shiftAssignments and 0 requests with a search space of 10^1257. 
long_late01 has 2 skills, 5 shiftTypes, 9 patterns, 3 contracts, 50 employees, 28 shiftDates, 752 shiftAssignments and 0 requests with a search space of 10^1277. long_late02 has 2 skills, 5 shiftTypes, 9 patterns, 4 contracts, 50 employees, 28 shiftDates, 752 shiftAssignments and 0 requests with a search space of 10^1277. long_late03 has 2 skills, 5 shiftTypes, 9 patterns, 3 contracts, 50 employees, 28 shiftDates, 752 shiftAssignments and 0 requests with a search space of 10^1277. long_late04 has 2 skills, 5 shiftTypes, 9 patterns, 4 contracts, 50 employees, 28 shiftDates, 752 shiftAssignments and 0 requests with a search space of 10^1277. long_late05 has 2 skills, 5 shiftTypes, 9 patterns, 3 contracts, 50 employees, 28 shiftDates, 740 shiftAssignments and 0 requests with a search space of 10^1257.", "1-nl04 has 6 days, 4 teams and 12 matches with a search space of 10^5. 1-nl06 has 10 days, 6 teams and 30 matches with a search space of 10^19. 1-nl08 has 14 days, 8 teams and 56 matches with a search space of 10^43. 1-nl10 has 18 days, 10 teams and 90 matches with a search space of 10^79. 1-nl12 has 22 days, 12 teams and 132 matches with a search space of 10^126. 1-nl14 has 26 days, 14 teams and 182 matches with a search space of 10^186. 1-nl16 has 30 days, 16 teams and 240 matches with a search space of 10^259. 2-bra24 has 46 days, 24 teams and 552 matches with a search space of 10^692. 3-nfl16 has 30 days, 16 teams and 240 matches with a search space of 10^259. 3-nfl18 has 34 days, 18 teams and 306 matches with a search space of 10^346. 3-nfl20 has 38 days, 20 teams and 380 matches with a search space of 10^447. 3-nfl22 has 42 days, 22 teams and 462 matches with a search space of 10^562. 3-nfl24 has 46 days, 24 teams and 552 matches with a search space of 10^692. 3-nfl26 has 50 days, 26 teams and 650 matches with a search space of 10^838. 3-nfl28 has 54 days, 28 teams and 756 matches with a search space of 10^999. 3-nfl30 has 58 days, 30 teams and 870 matches with a search space of 10^1175. 3-nfl32 has 62 days, 32 teams and 992 matches with a search space of 10^1367. 4-super04 has 6 days, 4 teams and 12 matches with a search space of 10^5. 4-super06 has 10 days, 6 teams and 30 matches with a search space of 10^19. 4-super08 has 14 days, 8 teams and 56 matches with a search space of 10^43. 4-super10 has 18 days, 10 teams and 90 matches with a search space of 10^79. 4-super12 has 22 days, 12 teams and 132 matches with a search space of 10^126. 4-super14 has 26 days, 14 teams and 182 matches with a search space of 10^186. 5-galaxy04 has 6 days, 4 teams and 12 matches with a search space of 10^5. 5-galaxy06 has 10 days, 6 teams and 30 matches with a search space of 10^19. 5-galaxy08 has 14 days, 8 teams and 56 matches with a search space of 10^43. 5-galaxy10 has 18 days, 10 teams and 90 matches with a search space of 10^79. 5-galaxy12 has 22 days, 12 teams and 132 matches with a search space of 10^126. 5-galaxy14 has 26 days, 14 teams and 182 matches with a search space of 10^186. 5-galaxy16 has 30 days, 16 teams and 240 matches with a search space of 10^259. 5-galaxy18 has 34 days, 18 teams and 306 matches with a search space of 10^346. 5-galaxy20 has 38 days, 20 teams and 380 matches with a search space of 10^447. 5-galaxy22 has 42 days, 22 teams and 462 matches with a search space of 10^562. 5-galaxy24 has 46 days, 24 teams and 552 matches with a search space of 10^692. 5-galaxy26 has 50 days, 26 teams and 650 matches with a search space of 10^838. 
5-galaxy28 has 54 days, 28 teams and 756 matches with a search space of 10^999. 5-galaxy30 has 58 days, 30 teams and 870 matches with a search space of 10^1175. 5-galaxy32 has 62 days, 32 teams and 992 matches with a search space of 10^1367. 5-galaxy34 has 66 days, 34 teams and 1122 matches with a search space of 10^1576. 5-galaxy36 has 70 days, 36 teams and 1260 matches with a search space of 10^1801. 5-galaxy38 has 74 days, 38 teams and 1406 matches with a search space of 10^2042. 5-galaxy40 has 78 days, 40 teams and 1560 matches with a search space of 10^2301.", "sample01 has 3 resources, 2 machines, 288 periods and 25 tasks with a search space of 10^53. sample02 has 3 resources, 2 machines, 288 periods and 50 tasks with a search space of 10^114. sample03 has 3 resources, 2 machines, 288 periods and 100 tasks with a search space of 10^226. sample04 has 3 resources, 5 machines, 288 periods and 100 tasks with a search space of 10^266. sample05 has 3 resources, 2 machines, 288 periods and 250 tasks with a search space of 10^584. sample06 has 3 resources, 5 machines, 288 periods and 250 tasks with a search space of 10^673. sample07 has 3 resources, 2 machines, 288 periods and 1000 tasks with a search space of 10^2388. sample08 has 3 resources, 5 machines, 288 periods and 1000 tasks with a search space of 10^2748. sample09 has 4 resources, 20 machines, 288 periods and 2000 tasks with a search space of 10^6668. instance00 has 1 resources, 10 machines, 288 periods and 200 tasks with a search space of 10^595. instance01 has 1 resources, 10 machines, 288 periods and 200 tasks with a search space of 10^599. instance02 has 1 resources, 10 machines, 288 periods and 200 tasks with a search space of 10^599. instance03 has 1 resources, 10 machines, 288 periods and 200 tasks with a search space of 10^591. instance04 has 1 resources, 10 machines, 288 periods and 200 tasks with a search space of 10^590. instance05 has 2 resources, 25 machines, 288 periods and 200 tasks with a search space of 10^667. instance06 has 2 resources, 25 machines, 288 periods and 200 tasks with a search space of 10^660. instance07 has 2 resources, 25 machines, 288 periods and 200 tasks with a search space of 10^662. instance08 has 2 resources, 25 machines, 288 periods and 200 tasks with a search space of 10^651. instance09 has 2 resources, 25 machines, 288 periods and 200 tasks with a search space of 10^659. instance10 has 2 resources, 20 machines, 288 periods and 500 tasks with a search space of 10^1657. instance11 has 2 resources, 20 machines, 288 periods and 500 tasks with a search space of 10^1644. instance12 has 2 resources, 20 machines, 288 periods and 500 tasks with a search space of 10^1637. instance13 has 2 resources, 20 machines, 288 periods and 500 tasks with a search space of 10^1659. instance14 has 2 resources, 20 machines, 288 periods and 500 tasks with a search space of 10^1643. instance15 has 3 resources, 40 machines, 288 periods and 500 tasks with a search space of 10^1782. instance16 has 3 resources, 40 machines, 288 periods and 500 tasks with a search space of 10^1778. instance17 has 3 resources, 40 machines, 288 periods and 500 tasks with a search space of 10^1764. instance18 has 3 resources, 40 machines, 288 periods and 500 tasks with a search space of 10^1769. instance19 has 3 resources, 40 machines, 288 periods and 500 tasks with a search space of 10^1778. instance20 has 3 resources, 50 machines, 288 periods and 1000 tasks with a search space of 10^3689. 
instance21 has 3 resources, 50 machines, 288 periods and 1000 tasks with a search space of 10^3678. instance22 has 3 resources, 50 machines, 288 periods and 1000 tasks with a search space of 10^3706. instance23 has 3 resources, 50 machines, 288 periods and 1000 tasks with a search space of 10^3676. instance24 has 3 resources, 50 machines, 288 periods and 1000 tasks with a search space of 10^3681. instance25 has 3 resources, 60 machines, 288 periods and 1000 tasks with a search space of 10^3774. instance26 has 3 resources, 60 machines, 288 periods and 1000 tasks with a search space of 10^3737. instance27 has 3 resources, 60 machines, 288 periods and 1000 tasks with a search space of 10^3744. instance28 has 3 resources, 60 machines, 288 periods and 1000 tasks with a search space of 10^3731. instance29 has 3 resources, 60 machines, 288 periods and 1000 tasks with a search space of 10^3746. instance30 has 4 resources, 70 machines, 288 periods and 2000 tasks with a search space of 10^7718. instance31 has 4 resources, 70 machines, 288 periods and 2000 tasks with a search space of 10^7740. instance32 has 4 resources, 70 machines, 288 periods and 2000 tasks with a search space of 10^7686. instance33 has 4 resources, 70 machines, 288 periods and 2000 tasks with a search space of 10^7672. instance34 has 4 resources, 70 machines, 288 periods and 2000 tasks with a search space of 10^7695. instance35 has 4 resources, 80 machines, 288 periods and 2000 tasks with a search space of 10^7807. instance36 has 4 resources, 80 machines, 288 periods and 2000 tasks with a search space of 10^7814. instance37 has 4 resources, 80 machines, 288 periods and 2000 tasks with a search space of 10^7764. instance38 has 4 resources, 80 machines, 288 periods and 2000 tasks with a search space of 10^7736. instance39 has 4 resources, 80 machines, 288 periods and 2000 tasks with a search space of 10^7783. instance40 has 4 resources, 90 machines, 288 periods and 4000 tasks with a search space of 10^15976. instance41 has 4 resources, 90 machines, 288 periods and 4000 tasks with a search space of 10^15935. instance42 has 4 resources, 90 machines, 288 periods and 4000 tasks with a search space of 10^15887. instance43 has 4 resources, 90 machines, 288 periods and 4000 tasks with a search space of 10^15896. instance44 has 4 resources, 90 machines, 288 periods and 4000 tasks with a search space of 10^15885. instance45 has 4 resources, 100 machines, 288 periods and 5000 tasks with a search space of 10^20173. instance46 has 4 resources, 100 machines, 288 periods and 5000 tasks with a search space of 10^20132. instance47 has 4 resources, 100 machines, 288 periods and 5000 tasks with a search space of 10^20126. instance48 has 4 resources, 100 machines, 288 periods and 5000 tasks with a search space of 10^20110. instance49 has 4 resources, 100 machines, 288 periods and 5000 tasks with a search space of 10^20078.", "de_smet_1 has 1 regions, 3 sectors and 11 asset classes with a search space of 10^4. irrinki_1 has 2 regions, 3 sectors and 6 asset classes with a search space of 10^3.", "18talks-6timeslots-5rooms has 18 talks, 6 timeslots and 5 rooms with a search space of 10^26. 36talks-12timeslots-5rooms has 36 talks, 12 timeslots and 5 rooms with a search space of 10^64. 72talks-12timeslots-10rooms has 72 talks, 12 timeslots and 10 rooms with a search space of 10^149. 108talks-18timeslots-10rooms has 108 talks, 18 timeslots and 10 rooms with a search space of 10^243. 
216talks-18timeslots-20rooms has 216 talks, 18 timeslots and 20 rooms with a search space of 10^552.", "47shows has 47 shows with a search space of 10^59.", "175flights-7days-Europe has 2 skills, 50 airports, 150 employees, 175 flights and 875 flight assignments with a search space of 10^1904. 700flights-28days-Europe has 2 skills, 50 airports, 150 employees, 700 flights and 3500 flight assignments with a search space of 10^7616. 875flights-7days-Europe has 2 skills, 50 airports, 750 employees, 875 flights and 4375 flight assignments with a search space of 10^12578. 175flights-7days-US has 2 skills, 48 airports, 150 employees, 175 flights and 875 flight assignments with a search space of 10^1904." ]
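The search-space figures quoted in the listings above appear to follow the usual estimate of (possible values per planning variable) raised to (number of planning entities). The short Python sketch below re-derives a few of the listed exponents under that assumption; the instance names and counts come from the listings, while the helper function and the choice of examples are illustrative only. Problems with several planning variables per entity (for example the vehicle routing and project job scheduling data sets) do not reduce to this single product and are left as quoted.

from math import log10

def search_space_exponent(values_per_entity, entity_count):
    # Exponent of 10 in values_per_entity ** entity_count,
    # truncated to a whole number as in the listings above.
    return int(entity_count * log10(values_per_entity))

# Machine reassignment: each process can be placed on any machine.
print(search_space_exponent(4, 100))        # model_a1_1 -> 60
print(search_space_exponent(50, 1000))      # model_a1_4 -> 1698
print(search_space_exponent(4000, 40000))   # model_b_7  -> 144082

# Nurse rostering: each shift assignment can go to any employee.
print(search_space_exponent(10, 152))       # sprint01   -> 152
print(search_space_exponent(31, 608))       # medium01   -> 906

# Exam timetabling: each exam gets one (period, room) pair.
print(search_space_exponent(54 * 7, 607))   # exam_comp_set1 -> 1564

# Conference scheduling: each talk gets one (timeslot, room) pair.
print(search_space_exponent(6 * 5, 18))     # 18talks-6timeslots-5rooms    -> 26
print(search_space_exponent(18 * 20, 216))  # 216talks-18timeslots-20rooms -> 552

Each printed exponent matches the corresponding value quoted above, which is why the truncating estimate is a reasonable reading of how those figures were produced.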
https://docs.redhat.com/en/documentation/red_hat_decision_manager/7.13/html/getting_started_with_red_hat_decision_manager/optimizer-about-optimizer-con_getting-started-optaplanner
Introduction to ROSA
Introduction to ROSA. Red Hat OpenShift Service on AWS 4. An overview of Red Hat OpenShift Service on AWS architecture. Red Hat OpenShift Documentation Team
[ "Resources: ConfigMap: - namespace: openshift-config name: rosa-brand-logo - namespace: openshift-console name: custom-logo - namespace: openshift-deployment-validation-operator name: deployment-validation-operator-config - namespace: openshift-file-integrity name: fr-aide-conf - namespace: openshift-managed-upgrade-operator name: managed-upgrade-operator-config - namespace: openshift-monitoring name: cluster-monitoring-config - namespace: openshift-monitoring name: managed-namespaces - namespace: openshift-monitoring name: ocp-namespaces - namespace: openshift-monitoring name: osd-rebalance-infra-nodes - namespace: openshift-monitoring name: sre-dns-latency-exporter-code - namespace: openshift-monitoring name: sre-dns-latency-exporter-trusted-ca-bundle - namespace: openshift-monitoring name: sre-ebs-iops-reporter-code - namespace: openshift-monitoring name: sre-ebs-iops-reporter-trusted-ca-bundle - namespace: openshift-monitoring name: sre-stuck-ebs-vols-code - namespace: openshift-monitoring name: sre-stuck-ebs-vols-trusted-ca-bundle - namespace: openshift-security name: osd-audit-policy - namespace: openshift-validation-webhook name: webhook-cert - namespace: openshift name: motd Endpoints: - namespace: openshift-deployment-validation-operator name: deployment-validation-operator-metrics - namespace: openshift-monitoring name: sre-dns-latency-exporter - namespace: openshift-monitoring name: sre-ebs-iops-reporter - namespace: openshift-monitoring name: sre-stuck-ebs-vols - namespace: openshift-scanning name: loggerservice - namespace: openshift-security name: audit-exporter - namespace: openshift-validation-webhook name: validation-webhook Namespace: - name: dedicated-admin - name: openshift-addon-operator - name: openshift-aqua - name: openshift-aws-vpce-operator - name: openshift-backplane - name: openshift-backplane-cee - name: openshift-backplane-csa - name: openshift-backplane-cse - name: openshift-backplane-csm - name: openshift-backplane-managed-scripts - name: openshift-backplane-mobb - name: openshift-backplane-srep - name: openshift-backplane-tam - name: openshift-cloud-ingress-operator - name: openshift-codeready-workspaces - name: openshift-compliance - name: openshift-compliance-monkey - name: openshift-container-security - name: openshift-custom-domains-operator - name: openshift-customer-monitoring - name: openshift-deployment-validation-operator - name: openshift-managed-node-metadata-operator - name: openshift-file-integrity - name: openshift-logging - name: openshift-managed-upgrade-operator - name: openshift-must-gather-operator - name: openshift-observability-operator - name: openshift-ocm-agent-operator - name: openshift-operators-redhat - name: openshift-osd-metrics - name: openshift-rbac-permissions - name: openshift-route-monitor-operator - name: openshift-scanning - name: openshift-security - name: openshift-splunk-forwarder-operator - name: openshift-sre-pruning - name: openshift-suricata - name: openshift-validation-webhook - name: openshift-velero - name: openshift-monitoring - name: openshift - name: openshift-cluster-version - name: keycloak - name: goalert - name: configure-goalert-operator ReplicationController: - namespace: openshift-monitoring name: sre-ebs-iops-reporter-1 - namespace: openshift-monitoring name: sre-stuck-ebs-vols-1 Secret: - namespace: openshift-authentication name: v4-0-config-user-idp-0-file-data - namespace: openshift-authentication name: v4-0-config-user-template-error - namespace: openshift-authentication name: 
v4-0-config-user-template-login - namespace: openshift-authentication name: v4-0-config-user-template-provider-selection - namespace: openshift-config name: htpasswd-secret - namespace: openshift-config name: osd-oauth-templates-errors - namespace: openshift-config name: osd-oauth-templates-login - namespace: openshift-config name: osd-oauth-templates-providers - namespace: openshift-config name: rosa-oauth-templates-errors - namespace: openshift-config name: rosa-oauth-templates-login - namespace: openshift-config name: rosa-oauth-templates-providers - namespace: openshift-config name: support - namespace: openshift-config name: tony-devlab-primary-cert-bundle-secret - namespace: openshift-ingress name: tony-devlab-primary-cert-bundle-secret - namespace: openshift-kube-apiserver name: user-serving-cert-000 - namespace: openshift-kube-apiserver name: user-serving-cert-001 - namespace: openshift-monitoring name: dms-secret - namespace: openshift-monitoring name: observatorium-credentials - namespace: openshift-monitoring name: pd-secret - namespace: openshift-scanning name: clam-secrets - namespace: openshift-scanning name: logger-secrets - namespace: openshift-security name: splunk-auth ServiceAccount: - namespace: openshift-backplane-managed-scripts name: osd-backplane - namespace: openshift-backplane-srep name: 6804d07fb268b8285b023bcf65392f0e - namespace: openshift-backplane-srep name: osd-delete-ownerrefs-serviceaccounts - namespace: openshift-backplane name: osd-delete-backplane-serviceaccounts - namespace: openshift-cloud-ingress-operator name: cloud-ingress-operator - namespace: openshift-custom-domains-operator name: custom-domains-operator - namespace: openshift-managed-upgrade-operator name: managed-upgrade-operator - namespace: openshift-machine-api name: osd-disable-cpms - namespace: openshift-marketplace name: osd-patch-subscription-source - namespace: openshift-monitoring name: configure-alertmanager-operator - namespace: openshift-monitoring name: osd-cluster-ready - namespace: openshift-monitoring name: osd-rebalance-infra-nodes - namespace: openshift-monitoring name: sre-dns-latency-exporter - namespace: openshift-monitoring name: sre-ebs-iops-reporter - namespace: openshift-monitoring name: sre-stuck-ebs-vols - namespace: openshift-network-diagnostics name: sre-pod-network-connectivity-check-pruner - namespace: openshift-ocm-agent-operator name: ocm-agent-operator - namespace: openshift-rbac-permissions name: rbac-permissions-operator - namespace: openshift-splunk-forwarder-operator name: splunk-forwarder-operator - namespace: openshift-sre-pruning name: bz1980755 - namespace: openshift-scanning name: logger-sa - namespace: openshift-scanning name: scanner-sa - namespace: openshift-sre-pruning name: sre-pruner-sa - namespace: openshift-suricata name: suricata-sa - namespace: openshift-validation-webhook name: validation-webhook - namespace: openshift-velero name: managed-velero-operator - namespace: openshift-velero name: velero - namespace: openshift-backplane-srep name: UNIQUE_BACKPLANE_SERVICEACCOUNT_ID Service: - namespace: openshift-deployment-validation-operator name: deployment-validation-operator-metrics - namespace: openshift-monitoring name: sre-dns-latency-exporter - namespace: openshift-monitoring name: sre-ebs-iops-reporter - namespace: openshift-monitoring name: sre-stuck-ebs-vols - namespace: openshift-scanning name: loggerservice - namespace: openshift-security name: audit-exporter - namespace: openshift-validation-webhook name: validation-webhook 
AddonOperator: - name: addon-operator ValidatingWebhookConfiguration: - name: sre-hiveownership-validation - name: sre-namespace-validation - name: sre-pod-validation - name: sre-prometheusrule-validation - name: sre-regular-user-validation - name: sre-scc-validation - name: sre-techpreviewnoupgrade-validation DaemonSet: - namespace: openshift-monitoring name: sre-dns-latency-exporter - namespace: openshift-scanning name: logger - namespace: openshift-scanning name: scanner - namespace: openshift-security name: audit-exporter - namespace: openshift-suricata name: suricata - namespace: openshift-validation-webhook name: validation-webhook DeploymentConfig: - namespace: openshift-monitoring name: sre-ebs-iops-reporter - namespace: openshift-monitoring name: sre-stuck-ebs-vols ClusterRoleBinding: - name: aqua-scanner-binding - name: backplane-cluster-admin - name: backplane-impersonate-cluster-admin - name: bz1980755 - name: configure-alertmanager-operator-prom - name: dedicated-admins-cluster - name: dedicated-admins-registry-cas-cluster - name: logger-clusterrolebinding - name: openshift-backplane-managed-scripts-reader - name: osd-cluster-admin - name: osd-cluster-ready - name: osd-delete-backplane-script-resources - name: osd-delete-ownerrefs-serviceaccounts - name: osd-patch-subscription-source - name: osd-rebalance-infra-nodes - name: pcap-dedicated-admins - name: splunk-forwarder-operator - name: splunk-forwarder-operator-clusterrolebinding - name: sre-pod-network-connectivity-check-pruner - name: sre-pruner-buildsdeploys-pruning - name: velero - name: webhook-validation ClusterRole: - name: backplane-cee-readers-cluster - name: backplane-impersonate-cluster-admin - name: backplane-readers-cluster - name: backplane-srep-admins-cluster - name: backplane-srep-admins-project - name: bz1980755 - name: dedicated-admins-aggregate-cluster - name: dedicated-admins-aggregate-project - name: dedicated-admins-cluster - name: dedicated-admins-manage-operators - name: dedicated-admins-project - name: dedicated-admins-registry-cas-cluster - name: dedicated-readers - name: image-scanner - name: logger-clusterrole - name: openshift-backplane-managed-scripts-reader - name: openshift-splunk-forwarder-operator - name: osd-cluster-ready - name: osd-custom-domains-dedicated-admin-cluster - name: osd-delete-backplane-script-resources - name: osd-delete-backplane-serviceaccounts - name: osd-delete-ownerrefs-serviceaccounts - name: osd-get-namespace - name: osd-netnamespaces-dedicated-admin-cluster - name: osd-patch-subscription-source - name: osd-readers-aggregate - name: osd-rebalance-infra-nodes - name: osd-rebalance-infra-nodes-openshift-pod-rebalance - name: pcap-dedicated-admins - name: splunk-forwarder-operator - name: sre-allow-read-machine-info - name: sre-pruner-buildsdeploys-cr - name: webhook-validation-cr RoleBinding: - namespace: kube-system name: cloud-ingress-operator-cluster-config-v1-reader - namespace: kube-system name: managed-velero-operator-cluster-config-v1-reader - namespace: openshift-aqua name: dedicated-admins-openshift-aqua - namespace: openshift-backplane-managed-scripts name: backplane-cee-mustgather - namespace: openshift-backplane-managed-scripts name: backplane-srep-mustgather - namespace: openshift-backplane-managed-scripts name: osd-delete-backplane-script-resources - namespace: openshift-cloud-ingress-operator name: osd-rebalance-infra-nodes-openshift-pod-rebalance - namespace: openshift-codeready-workspaces name: dedicated-admins-openshift-codeready-workspaces - namespace: 
openshift-config name: dedicated-admins-project-request - namespace: openshift-config name: dedicated-admins-registry-cas-project - namespace: openshift-config name: muo-pullsecret-reader - namespace: openshift-config name: oao-openshiftconfig-reader - namespace: openshift-config name: osd-cluster-ready - namespace: openshift-custom-domains-operator name: osd-rebalance-infra-nodes-openshift-pod-rebalance - namespace: openshift-customer-monitoring name: dedicated-admins-openshift-customer-monitoring - namespace: openshift-customer-monitoring name: prometheus-k8s-openshift-customer-monitoring - namespace: openshift-dns name: dedicated-admins-openshift-dns - namespace: openshift-dns name: osd-rebalance-infra-nodes-openshift-dns - namespace: openshift-image-registry name: osd-rebalance-infra-nodes-openshift-pod-rebalance - namespace: openshift-ingress-operator name: cloud-ingress-operator - namespace: openshift-ingress name: cloud-ingress-operator - namespace: openshift-kube-apiserver name: cloud-ingress-operator - namespace: openshift-machine-api name: cloud-ingress-operator - namespace: openshift-logging name: admin-dedicated-admins - namespace: openshift-logging name: admin-system:serviceaccounts:dedicated-admin - namespace: openshift-logging name: openshift-logging-dedicated-admins - namespace: openshift-logging name: openshift-logging:serviceaccounts:dedicated-admin - namespace: openshift-machine-api name: osd-cluster-ready - namespace: openshift-machine-api name: sre-ebs-iops-reporter-read-machine-info - namespace: openshift-machine-api name: sre-stuck-ebs-vols-read-machine-info - namespace: openshift-managed-node-metadata-operator name: osd-rebalance-infra-nodes-openshift-pod-rebalance - namespace: openshift-machine-api name: osd-disable-cpms - namespace: openshift-marketplace name: dedicated-admins-openshift-marketplace - namespace: openshift-monitoring name: backplane-cee - namespace: openshift-monitoring name: muo-monitoring-reader - namespace: openshift-monitoring name: oao-monitoring-manager - namespace: openshift-monitoring name: osd-cluster-ready - namespace: openshift-monitoring name: osd-rebalance-infra-nodes-openshift-monitoring - namespace: openshift-monitoring name: osd-rebalance-infra-nodes-openshift-pod-rebalance - namespace: openshift-monitoring name: sre-dns-latency-exporter - namespace: openshift-monitoring name: sre-ebs-iops-reporter - namespace: openshift-monitoring name: sre-stuck-ebs-vols - namespace: openshift-must-gather-operator name: backplane-cee-mustgather - namespace: openshift-must-gather-operator name: backplane-srep-mustgather - namespace: openshift-must-gather-operator name: osd-rebalance-infra-nodes-openshift-pod-rebalance - namespace: openshift-network-diagnostics name: sre-pod-network-connectivity-check-pruner - namespace: openshift-network-operator name: osd-rebalance-infra-nodes-openshift-pod-rebalance - namespace: openshift-ocm-agent-operator name: osd-rebalance-infra-nodes-openshift-pod-rebalance - namespace: openshift-operators-redhat name: admin-dedicated-admins - namespace: openshift-operators-redhat name: admin-system:serviceaccounts:dedicated-admin - namespace: openshift-operators-redhat name: openshift-operators-redhat-dedicated-admins - namespace: openshift-operators-redhat name: openshift-operators-redhat:serviceaccounts:dedicated-admin - namespace: openshift-operators name: dedicated-admins-openshift-operators - namespace: openshift-osd-metrics name: osd-rebalance-infra-nodes-openshift-pod-rebalance - namespace: openshift-osd-metrics name: 
prometheus-k8s - namespace: openshift-rbac-permissions name: osd-rebalance-infra-nodes-openshift-pod-rebalance - namespace: openshift-rbac-permissions name: prometheus-k8s - namespace: openshift-route-monitor-operator name: osd-rebalance-infra-nodes-openshift-pod-rebalance - namespace: openshift-scanning name: scanner-rolebinding - namespace: openshift-security name: osd-rebalance-infra-nodes-openshift-security - namespace: openshift-security name: prometheus-k8s - namespace: openshift-splunk-forwarder-operator name: osd-rebalance-infra-nodes-openshift-pod-rebalance - namespace: openshift-suricata name: suricata-rolebinding - namespace: openshift-user-workload-monitoring name: dedicated-admins-uwm-config-create - namespace: openshift-user-workload-monitoring name: dedicated-admins-uwm-config-edit - namespace: openshift-user-workload-monitoring name: dedicated-admins-uwm-managed-am-secret - namespace: openshift-user-workload-monitoring name: osd-rebalance-infra-nodes-openshift-user-workload-monitoring - namespace: openshift-velero name: osd-rebalance-infra-nodes-openshift-pod-rebalance - namespace: openshift-velero name: prometheus-k8s Role: - namespace: kube-system name: cluster-config-v1-reader - namespace: kube-system name: cluster-config-v1-reader-cio - namespace: openshift-aqua name: dedicated-admins-openshift-aqua - namespace: openshift-backplane-managed-scripts name: backplane-cee-pcap-collector - namespace: openshift-backplane-managed-scripts name: backplane-srep-pcap-collector - namespace: openshift-backplane-managed-scripts name: osd-delete-backplane-script-resources - namespace: openshift-codeready-workspaces name: dedicated-admins-openshift-codeready-workspaces - namespace: openshift-config name: dedicated-admins-project-request - namespace: openshift-config name: dedicated-admins-registry-cas-project - namespace: openshift-config name: muo-pullsecret-reader - namespace: openshift-config name: oao-openshiftconfig-reader - namespace: openshift-config name: osd-cluster-ready - namespace: openshift-customer-monitoring name: dedicated-admins-openshift-customer-monitoring - namespace: openshift-customer-monitoring name: prometheus-k8s-openshift-customer-monitoring - namespace: openshift-dns name: dedicated-admins-openshift-dns - namespace: openshift-dns name: osd-rebalance-infra-nodes-openshift-dns - namespace: openshift-ingress-operator name: cloud-ingress-operator - namespace: openshift-ingress name: cloud-ingress-operator - namespace: openshift-kube-apiserver name: cloud-ingress-operator - namespace: openshift-machine-api name: cloud-ingress-operator - namespace: openshift-logging name: dedicated-admins-openshift-logging - namespace: openshift-machine-api name: osd-cluster-ready - namespace: openshift-machine-api name: osd-disable-cpms - namespace: openshift-marketplace name: dedicated-admins-openshift-marketplace - namespace: openshift-monitoring name: backplane-cee - namespace: openshift-monitoring name: muo-monitoring-reader - namespace: openshift-monitoring name: oao-monitoring-manager - namespace: openshift-monitoring name: osd-cluster-ready - namespace: openshift-monitoring name: osd-rebalance-infra-nodes-openshift-monitoring - namespace: openshift-must-gather-operator name: backplane-cee-mustgather - namespace: openshift-must-gather-operator name: backplane-srep-mustgather - namespace: openshift-network-diagnostics name: sre-pod-network-connectivity-check-pruner - namespace: openshift-operators name: dedicated-admins-openshift-operators - namespace: openshift-osd-metrics 
name: prometheus-k8s - namespace: openshift-rbac-permissions name: prometheus-k8s - namespace: openshift-scanning name: scanner-role - namespace: openshift-security name: osd-rebalance-infra-nodes-openshift-security - namespace: openshift-security name: prometheus-k8s - namespace: openshift-suricata name: suricata-role - namespace: openshift-user-workload-monitoring name: dedicated-admins-user-workload-monitoring-create-cm - namespace: openshift-user-workload-monitoring name: dedicated-admins-user-workload-monitoring-manage-am-secret - namespace: openshift-user-workload-monitoring name: osd-rebalance-infra-nodes-openshift-user-workload-monitoring - namespace: openshift-velero name: prometheus-k8s CronJob: - namespace: openshift-backplane-managed-scripts name: osd-delete-backplane-script-resources - namespace: openshift-backplane-srep name: osd-delete-ownerrefs-serviceaccounts - namespace: openshift-backplane name: osd-delete-backplane-serviceaccounts - namespace: openshift-machine-api name: osd-disable-cpms - namespace: openshift-marketplace name: osd-patch-subscription-source - namespace: openshift-monitoring name: osd-rebalance-infra-nodes - namespace: openshift-network-diagnostics name: sre-pod-network-connectivity-check-pruner - namespace: openshift-sre-pruning name: builds-pruner - namespace: openshift-sre-pruning name: bz1980755 - namespace: openshift-sre-pruning name: deployments-pruner Job: - namespace: openshift-monitoring name: osd-cluster-ready CredentialsRequest: - namespace: openshift-cloud-ingress-operator name: cloud-ingress-operator-credentials-aws - namespace: openshift-cloud-ingress-operator name: cloud-ingress-operator-credentials-gcp - namespace: openshift-monitoring name: sre-ebs-iops-reporter-aws-credentials - namespace: openshift-monitoring name: sre-stuck-ebs-vols-aws-credentials - namespace: openshift-velero name: managed-velero-operator-iam-credentials-aws - namespace: openshift-velero name: managed-velero-operator-iam-credentials-gcp APIScheme: - namespace: openshift-cloud-ingress-operator name: rh-api PublishingStrategy: - namespace: openshift-cloud-ingress-operator name: publishingstrategy ScanSettingBinding: - namespace: openshift-compliance name: fedramp-high-ocp - namespace: openshift-compliance name: fedramp-high-rhcos ScanSetting: - namespace: openshift-compliance name: osd TailoredProfile: - namespace: openshift-compliance name: rhcos4-high-rosa OAuth: - name: cluster EndpointSlice: - namespace: openshift-deployment-validation-operator name: deployment-validation-operator-metrics-rhtwg - namespace: openshift-monitoring name: sre-dns-latency-exporter-4cw9r - namespace: openshift-monitoring name: sre-ebs-iops-reporter-6tx5g - namespace: openshift-monitoring name: sre-stuck-ebs-vols-gmdhs - namespace: openshift-scanning name: loggerservice-zprbq - namespace: openshift-security name: audit-exporter-nqfdk - namespace: openshift-validation-webhook name: validation-webhook-97b8t FileIntegrity: - namespace: openshift-file-integrity name: osd-fileintegrity MachineHealthCheck: - namespace: openshift-machine-api name: srep-infra-healthcheck - namespace: openshift-machine-api name: srep-metal-worker-healthcheck - namespace: openshift-machine-api name: srep-worker-healthcheck MachineSet: - namespace: openshift-machine-api name: sbasabat-mc-qhqkn-infra-us-east-1a - namespace: openshift-machine-api name: sbasabat-mc-qhqkn-worker-us-east-1a ContainerRuntimeConfig: - name: custom-crio KubeletConfig: - name: custom-kubelet MachineConfig: - name: 00-master-chrony - name: 
00-worker-chrony SubjectPermission: - namespace: openshift-rbac-permissions name: backplane-cee - namespace: openshift-rbac-permissions name: backplane-csa - namespace: openshift-rbac-permissions name: backplane-cse - namespace: openshift-rbac-permissions name: backplane-csm - namespace: openshift-rbac-permissions name: backplane-mobb - namespace: openshift-rbac-permissions name: backplane-srep - namespace: openshift-rbac-permissions name: backplane-tam - namespace: openshift-rbac-permissions name: dedicated-admin-serviceaccounts - namespace: openshift-rbac-permissions name: dedicated-admin-serviceaccounts-core-ns - namespace: openshift-rbac-permissions name: dedicated-admins - namespace: openshift-rbac-permissions name: dedicated-admins-alert-routing-edit - namespace: openshift-rbac-permissions name: dedicated-admins-core-ns - namespace: openshift-rbac-permissions name: dedicated-admins-customer-monitoring - namespace: openshift-rbac-permissions name: osd-delete-backplane-serviceaccounts VeleroInstall: - namespace: openshift-velero name: cluster PrometheusRule: - namespace: openshift-monitoring name: rhmi-sre-cluster-admins - namespace: openshift-monitoring name: rhoam-sre-cluster-admins - namespace: openshift-monitoring name: sre-alertmanager-silences-active - namespace: openshift-monitoring name: sre-alerts-stuck-builds - namespace: openshift-monitoring name: sre-alerts-stuck-volumes - namespace: openshift-monitoring name: sre-cloud-ingress-operator-offline-alerts - namespace: openshift-monitoring name: sre-avo-pendingacceptance - namespace: openshift-monitoring name: sre-configure-alertmanager-operator-offline-alerts - namespace: openshift-monitoring name: sre-control-plane-resizing-alerts - namespace: openshift-monitoring name: sre-dns-alerts - namespace: openshift-monitoring name: sre-ebs-iops-burstbalance - namespace: openshift-monitoring name: sre-elasticsearch-jobs - namespace: openshift-monitoring name: sre-elasticsearch-managed-notification-alerts - namespace: openshift-monitoring name: sre-excessive-memory - namespace: openshift-monitoring name: sre-fr-alerts-low-disk-space - namespace: openshift-monitoring name: sre-haproxy-reload-fail - namespace: openshift-monitoring name: sre-internal-slo-recording-rules - namespace: openshift-monitoring name: sre-kubequotaexceeded - namespace: openshift-monitoring name: sre-leader-election-master-status-alerts - namespace: openshift-monitoring name: sre-managed-kube-apiserver-missing-on-node - namespace: openshift-monitoring name: sre-managed-kube-controller-manager-missing-on-node - namespace: openshift-monitoring name: sre-managed-kube-scheduler-missing-on-node - namespace: openshift-monitoring name: sre-managed-node-metadata-operator-alerts - namespace: openshift-monitoring name: sre-managed-notification-alerts - namespace: openshift-monitoring name: sre-managed-upgrade-operator-alerts - namespace: openshift-monitoring name: sre-managed-velero-operator-alerts - namespace: openshift-monitoring name: sre-node-unschedulable - namespace: openshift-monitoring name: sre-oauth-server - namespace: openshift-monitoring name: sre-pending-csr-alert - namespace: openshift-monitoring name: sre-proxy-managed-notification-alerts - namespace: openshift-monitoring name: sre-pruning - namespace: openshift-monitoring name: sre-pv - namespace: openshift-monitoring name: sre-router-health - namespace: openshift-monitoring name: sre-runaway-sdn-preventing-container-creation - namespace: openshift-monitoring name: sre-slo-recording-rules - namespace: 
openshift-monitoring name: sre-telemeter-client - namespace: openshift-monitoring name: sre-telemetry-managed-labels-recording-rules - namespace: openshift-monitoring name: sre-upgrade-send-managed-notification-alerts - namespace: openshift-monitoring name: sre-uptime-sla ServiceMonitor: - namespace: openshift-monitoring name: sre-dns-latency-exporter - namespace: openshift-monitoring name: sre-ebs-iops-reporter - namespace: openshift-monitoring name: sre-stuck-ebs-vols ClusterUrlMonitor: - namespace: openshift-route-monitor-operator name: api RouteMonitor: - namespace: openshift-route-monitor-operator name: console NetworkPolicy: - namespace: openshift-deployment-validation-operator name: allow-from-openshift-insights - namespace: openshift-deployment-validation-operator name: allow-from-openshift-olm ManagedNotification: - namespace: openshift-ocm-agent-operator name: sre-elasticsearch-managed-notifications - namespace: openshift-ocm-agent-operator name: sre-managed-notifications - namespace: openshift-ocm-agent-operator name: sre-proxy-managed-notifications - namespace: openshift-ocm-agent-operator name: sre-upgrade-managed-notifications OcmAgent: - namespace: openshift-ocm-agent-operator name: ocmagent - namespace: openshift-security name: audit-exporter Console: - name: cluster CatalogSource: - namespace: openshift-addon-operator name: addon-operator-catalog - namespace: openshift-cloud-ingress-operator name: cloud-ingress-operator-registry - namespace: openshift-compliance name: compliance-operator-registry - namespace: openshift-container-security name: container-security-operator-registry - namespace: openshift-custom-domains-operator name: custom-domains-operator-registry - namespace: openshift-deployment-validation-operator name: deployment-validation-operator-catalog - namespace: openshift-managed-node-metadata-operator name: managed-node-metadata-operator-registry - namespace: openshift-file-integrity name: file-integrity-operator-registry - namespace: openshift-managed-upgrade-operator name: managed-upgrade-operator-catalog - namespace: openshift-monitoring name: configure-alertmanager-operator-registry - namespace: openshift-must-gather-operator name: must-gather-operator-registry - namespace: openshift-observability-operator name: observability-operator-catalog - namespace: openshift-ocm-agent-operator name: ocm-agent-operator-registry - namespace: openshift-osd-metrics name: osd-metrics-exporter-registry - namespace: openshift-rbac-permissions name: rbac-permissions-operator-registry - namespace: openshift-route-monitor-operator name: route-monitor-operator-registry - namespace: openshift-splunk-forwarder-operator name: splunk-forwarder-operator-catalog - namespace: openshift-velero name: managed-velero-operator-registry OperatorGroup: - namespace: openshift-addon-operator name: addon-operator-og - namespace: openshift-aqua name: openshift-aqua - namespace: openshift-cloud-ingress-operator name: cloud-ingress-operator - namespace: openshift-codeready-workspaces name: openshift-codeready-workspaces - namespace: openshift-compliance name: compliance-operator - namespace: openshift-container-security name: container-security-operator - namespace: openshift-custom-domains-operator name: custom-domains-operator - namespace: openshift-customer-monitoring name: openshift-customer-monitoring - namespace: openshift-deployment-validation-operator name: deployment-validation-operator-og - namespace: openshift-managed-node-metadata-operator name: managed-node-metadata-operator - 
namespace: openshift-file-integrity name: file-integrity-operator - namespace: openshift-logging name: openshift-logging - namespace: openshift-managed-upgrade-operator name: managed-upgrade-operator-og - namespace: openshift-must-gather-operator name: must-gather-operator - namespace: openshift-observability-operator name: observability-operator-og - namespace: openshift-ocm-agent-operator name: ocm-agent-operator-og - namespace: openshift-osd-metrics name: osd-metrics-exporter - namespace: openshift-rbac-permissions name: rbac-permissions-operator - namespace: openshift-route-monitor-operator name: route-monitor-operator - namespace: openshift-splunk-forwarder-operator name: splunk-forwarder-operator-og - namespace: openshift-velero name: managed-velero-operator Subscription: - namespace: openshift-addon-operator name: addon-operator - namespace: openshift-cloud-ingress-operator name: cloud-ingress-operator - namespace: openshift-compliance name: compliance-operator-sub - namespace: openshift-container-security name: container-security-operator-sub - namespace: openshift-custom-domains-operator name: custom-domains-operator - namespace: openshift-deployment-validation-operator name: deployment-validation-operator - namespace: openshift-managed-node-metadata-operator name: managed-node-metadata-operator - namespace: openshift-file-integrity name: file-integrity-operator-sub - namespace: openshift-managed-upgrade-operator name: managed-upgrade-operator - namespace: openshift-monitoring name: configure-alertmanager-operator - namespace: openshift-must-gather-operator name: must-gather-operator - namespace: openshift-observability-operator name: observability-operator - namespace: openshift-ocm-agent-operator name: ocm-agent-operator - namespace: openshift-osd-metrics name: osd-metrics-exporter - namespace: openshift-rbac-permissions name: rbac-permissions-operator - namespace: openshift-route-monitor-operator name: route-monitor-operator - namespace: openshift-splunk-forwarder-operator name: openshift-splunk-forwarder-operator - namespace: openshift-velero name: managed-velero-operator PackageManifest: - namespace: openshift-splunk-forwarder-operator name: splunk-forwarder-operator - namespace: openshift-addon-operator name: addon-operator - namespace: openshift-rbac-permissions name: rbac-permissions-operator - namespace: openshift-cloud-ingress-operator name: cloud-ingress-operator - namespace: openshift-managed-node-metadata-operator name: managed-node-metadata-operator - namespace: openshift-velero name: managed-velero-operator - namespace: openshift-deployment-validation-operator name: managed-upgrade-operator - namespace: openshift-managed-upgrade-operator name: managed-upgrade-operator - namespace: openshift-container-security name: container-security-operator - namespace: openshift-route-monitor-operator name: route-monitor-operator - namespace: openshift-file-integrity name: file-integrity-operator - namespace: openshift-custom-domains-operator name: managed-node-metadata-operator - namespace: openshift-route-monitor-operator name: custom-domains-operator - namespace: openshift-managed-upgrade-operator name: managed-upgrade-operator - namespace: openshift-ocm-agent-operator name: ocm-agent-operator - namespace: openshift-observability-operator name: observability-operator - namespace: openshift-monitoring name: configure-alertmanager-operator - namespace: openshift-must-gather-operator name: deployment-validation-operator - namespace: openshift-osd-metrics name: 
osd-metrics-exporter - namespace: openshift-compliance name: compliance-operator - namespace: openshift-rbac-permissions name: rbac-permissions-operator Status: - {} Project: - name: dedicated-admin - name: openshift-addon-operator - name: openshift-aqua - name: openshift-backplane - name: openshift-backplane-cee - name: openshift-backplane-csa - name: openshift-backplane-cse - name: openshift-backplane-csm - name: openshift-backplane-managed-scripts - name: openshift-backplane-mobb - name: openshift-backplane-srep - name: openshift-backplane-tam - name: openshift-cloud-ingress-operator - name: openshift-codeready-workspaces - name: openshift-compliance - name: openshift-container-security - name: openshift-custom-domains-operator - name: openshift-customer-monitoring - name: openshift-deployment-validation-operator - name: openshift-managed-node-metadata-operator - name: openshift-file-integrity - name: openshift-logging - name: openshift-managed-upgrade-operator - name: openshift-must-gather-operator - name: openshift-observability-operator - name: openshift-ocm-agent-operator - name: openshift-operators-redhat - name: openshift-osd-metrics - name: openshift-rbac-permissions - name: openshift-route-monitor-operator - name: openshift-scanning - name: openshift-security - name: openshift-splunk-forwarder-operator - name: openshift-sre-pruning - name: openshift-suricata - name: openshift-validation-webhook - name: openshift-velero ClusterResourceQuota: - name: loadbalancer-quota - name: persistent-volume-quota SecurityContextConstraints: - name: osd-scanning-scc - name: osd-suricata-scc - name: pcap-dedicated-admins - name: splunkforwarder SplunkForwarder: - namespace: openshift-security name: splunkforwarder Group: - name: cluster-admins - name: dedicated-admins User: - name: backplane-cluster-admin Backup: - namespace: openshift-velero name: daily-full-backup-20221123112305 - namespace: openshift-velero name: daily-full-backup-20221125042537 - namespace: openshift-velero name: daily-full-backup-20221126010038 - namespace: openshift-velero name: daily-full-backup-20221127010039 - namespace: openshift-velero name: daily-full-backup-20221128010040 - namespace: openshift-velero name: daily-full-backup-20221129050847 - namespace: openshift-velero name: hourly-object-backup-20221128051740 - namespace: openshift-velero name: hourly-object-backup-20221128061740 - namespace: openshift-velero name: hourly-object-backup-20221128071740 - namespace: openshift-velero name: hourly-object-backup-20221128081740 - namespace: openshift-velero name: hourly-object-backup-20221128091740 - namespace: openshift-velero name: hourly-object-backup-20221129050852 - namespace: openshift-velero name: hourly-object-backup-20221129051747 - namespace: openshift-velero name: weekly-full-backup-20221116184315 - namespace: openshift-velero name: weekly-full-backup-20221121033854 - namespace: openshift-velero name: weekly-full-backup-20221128020040 Schedule: - namespace: openshift-velero name: daily-full-backup - namespace: openshift-velero name: hourly-object-backup - namespace: openshift-velero name: weekly-full-backup", "apiVersion: v1 kind: ConfigMap metadata: name: ocp-namespaces namespace: openshift-monitoring data: managed_namespaces.yaml: | Resources: Namespace: - name: kube-system - name: openshift-apiserver - name: openshift-apiserver-operator - name: openshift-authentication - name: openshift-authentication-operator - name: openshift-cloud-controller-manager - name: openshift-cloud-controller-manager-operator - 
name: openshift-cloud-credential-operator - name: openshift-cloud-network-config-controller - name: openshift-cluster-api - name: openshift-cluster-csi-drivers - name: openshift-cluster-machine-approver - name: openshift-cluster-node-tuning-operator - name: openshift-cluster-samples-operator - name: openshift-cluster-storage-operator - name: openshift-config - name: openshift-config-managed - name: openshift-config-operator - name: openshift-console - name: openshift-console-operator - name: openshift-console-user-settings - name: openshift-controller-manager - name: openshift-controller-manager-operator - name: openshift-dns - name: openshift-dns-operator - name: openshift-etcd - name: openshift-etcd-operator - name: openshift-host-network - name: openshift-image-registry - name: openshift-ingress - name: openshift-ingress-canary - name: openshift-ingress-operator - name: openshift-insights - name: openshift-kni-infra - name: openshift-kube-apiserver - name: openshift-kube-apiserver-operator - name: openshift-kube-controller-manager - name: openshift-kube-controller-manager-operator - name: openshift-kube-scheduler - name: openshift-kube-scheduler-operator - name: openshift-kube-storage-version-migrator - name: openshift-kube-storage-version-migrator-operator - name: openshift-machine-api - name: openshift-machine-config-operator - name: openshift-marketplace - name: openshift-monitoring - name: openshift-multus - name: openshift-network-diagnostics - name: openshift-network-operator - name: openshift-nutanix-infra - name: openshift-oauth-apiserver - name: openshift-openstack-infra - name: openshift-operator-lifecycle-manager - name: openshift-operators - name: openshift-ovirt-infra - name: openshift-sdn - name: openshift-ovn-kubernetes - name: openshift-platform-operators - name: openshift-route-controller-manager - name: openshift-service-ca - name: openshift-service-ca-operator - name: openshift-user-workload-monitoring - name: openshift-vsphere-infra", "addon-namespaces: ocs-converged-dev: openshift-storage managed-api-service-internal: redhat-rhoami-operator codeready-workspaces-operator: codeready-workspaces-operator managed-odh: redhat-ods-operator codeready-workspaces-operator-qe: codeready-workspaces-operator-qe integreatly-operator: redhat-rhmi-operator nvidia-gpu-addon: redhat-nvidia-gpu-addon integreatly-operator-internal: redhat-rhmi-operator rhoams: redhat-rhoam-operator ocs-converged: openshift-storage addon-operator: redhat-addon-operator prow-operator: prow cluster-logging-operator: openshift-logging advanced-cluster-management: redhat-open-cluster-management cert-manager-operator: redhat-cert-manager-operator dba-operator: addon-dba-operator reference-addon: redhat-reference-addon ocm-addon-test-operator: redhat-ocm-addon-test-operator", "[ { \"webhookName\": \"clusterlogging-validation\", \"rules\": [ { \"operations\": [ \"CREATE\", \"UPDATE\" ], \"apiGroups\": [ \"logging.openshift.io\" ], \"apiVersions\": [ \"v1\" ], \"resources\": [ \"clusterloggings\" ], \"scope\": \"Namespaced\" } ], \"documentString\": \"Managed OpenShift Customers may set log retention outside the allowed range of 0-7 days\" }, { \"webhookName\": \"clusterrolebindings-validation\", \"rules\": [ { \"operations\": [ \"DELETE\" ], \"apiGroups\": [ \"rbac.authorization.k8s.io\" ], \"apiVersions\": [ \"v1\" ], \"resources\": [ \"clusterrolebindings\" ], \"scope\": \"Cluster\" } ], \"documentString\": \"Managed OpenShift Customers may not delete the cluster role bindings under the managed namespaces: 
(^openshift-.*|kube-system)\" }, { \"webhookName\": \"customresourcedefinitions-validation\", \"rules\": [ { \"operations\": [ \"CREATE\", \"UPDATE\", \"DELETE\" ], \"apiGroups\": [ \"apiextensions.k8s.io\" ], \"apiVersions\": [ \"*\" ], \"resources\": [ \"customresourcedefinitions\" ], \"scope\": \"Cluster\" } ], \"documentString\": \"Managed OpenShift Customers may not change CustomResourceDefinitions managed by Red Hat.\" }, { \"webhookName\": \"hiveownership-validation\", \"rules\": [ { \"operations\": [ \"UPDATE\", \"DELETE\" ], \"apiGroups\": [ \"quota.openshift.io\" ], \"apiVersions\": [ \"*\" ], \"resources\": [ \"clusterresourcequotas\" ], \"scope\": \"Cluster\" } ], \"webhookObjectSelector\": { \"matchLabels\": { \"hive.openshift.io/managed\": \"true\" } }, \"documentString\": \"Managed OpenShift customers may not edit certain managed resources. A managed resource has a \\\"hive.openshift.io/managed\\\": \\\"true\\\" label.\" }, { \"webhookName\": \"imagecontentpolicies-validation\", \"rules\": [ { \"operations\": [ \"CREATE\", \"UPDATE\" ], \"apiGroups\": [ \"config.openshift.io\" ], \"apiVersions\": [ \"*\" ], \"resources\": [ \"imagedigestmirrorsets\", \"imagetagmirrorsets\" ], \"scope\": \"Cluster\" }, { \"operations\": [ \"CREATE\", \"UPDATE\" ], \"apiGroups\": [ \"operator.openshift.io\" ], \"apiVersions\": [ \"*\" ], \"resources\": [ \"imagecontentsourcepolicies\" ], \"scope\": \"Cluster\" } ], \"documentString\": \"Managed OpenShift customers may not create ImageContentSourcePolicy, ImageDigestMirrorSet, or ImageTagMirrorSet resources that configure mirrors that would conflict with system registries (e.g. quay.io, registry.redhat.io, registry.access.redhat.com, etc). For more details, see https://docs.openshift.com/\" }, { \"webhookName\": \"ingress-config-validation\", \"rules\": [ { \"operations\": [ \"CREATE\", \"UPDATE\", \"DELETE\" ], \"apiGroups\": [ \"config.openshift.io\" ], \"apiVersions\": [ \"*\" ], \"resources\": [ \"ingresses\" ], \"scope\": \"Cluster\" } ], \"documentString\": \"Managed OpenShift customers may not modify ingress config resources because it can can degrade cluster operators and can interfere with OpenShift SRE monitoring.\" }, { \"webhookName\": \"ingresscontroller-validation\", \"rules\": [ { \"operations\": [ \"CREATE\", \"UPDATE\" ], \"apiGroups\": [ \"operator.openshift.io\" ], \"apiVersions\": [ \"*\" ], \"resources\": [ \"ingresscontroller\", \"ingresscontrollers\" ], \"scope\": \"Namespaced\" } ], \"documentString\": \"Managed OpenShift Customer may create IngressControllers without necessary taints. This can cause those workloads to be provisioned on infra or master nodes.\" }, { \"webhookName\": \"namespace-validation\", \"rules\": [ { \"operations\": [ \"CREATE\", \"UPDATE\", \"DELETE\" ], \"apiGroups\": [ \"\" ], \"apiVersions\": [ \"*\" ], \"resources\": [ \"namespaces\" ], \"scope\": \"Cluster\" } ], \"documentString\": \"Managed OpenShift Customers may not modify namespaces specified in the [openshift-monitoring/managed-namespaces openshift-monitoring/ocp-namespaces] ConfigMaps because customer workloads should be placed in customer-created namespaces. Customers may not create namespaces identified by this regular expression (^comUSD|^ioUSD|^inUSD) because it could interfere with critical DNS resolution. 
Additionally, customers may not set or change the values of these Namespace labels [managed.openshift.io/storage-pv-quota-exempt managed.openshift.io/service-lb-quota-exempt].\" }, { \"webhookName\": \"networkpolicies-validation\", \"rules\": [ { \"operations\": [ \"CREATE\", \"UPDATE\", \"DELETE\" ], \"apiGroups\": [ \"networking.k8s.io\" ], \"apiVersions\": [ \"*\" ], \"resources\": [ \"networkpolicies\" ], \"scope\": \"Namespaced\" } ], \"documentString\": \"Managed OpenShift Customers may not create NetworkPolicies in namespaces managed by Red Hat.\" }, { \"webhookName\": \"node-validation-osd\", \"rules\": [ { \"operations\": [ \"CREATE\", \"UPDATE\", \"DELETE\" ], \"apiGroups\": [ \"\" ], \"apiVersions\": [ \"*\" ], \"resources\": [ \"nodes\", \"nodes/*\" ], \"scope\": \"*\" } ], \"documentString\": \"Managed OpenShift customers may not alter Node objects.\" }, { \"webhookName\": \"pod-validation\", \"rules\": [ { \"operations\": [ \"*\" ], \"apiGroups\": [ \"v1\" ], \"apiVersions\": [ \"*\" ], \"resources\": [ \"pods\" ], \"scope\": \"Namespaced\" } ], \"documentString\": \"Managed OpenShift Customers may use tolerations on Pods that could cause those Pods to be scheduled on infra or master nodes.\" }, { \"webhookName\": \"prometheusrule-validation\", \"rules\": [ { \"operations\": [ \"CREATE\", \"UPDATE\", \"DELETE\" ], \"apiGroups\": [ \"monitoring.coreos.com\" ], \"apiVersions\": [ \"*\" ], \"resources\": [ \"prometheusrules\" ], \"scope\": \"Namespaced\" } ], \"documentString\": \"Managed OpenShift Customers may not create PrometheusRule in namespaces managed by Red Hat.\" }, { \"webhookName\": \"regular-user-validation\", \"rules\": [ { \"operations\": [ \"*\" ], \"apiGroups\": [ \"cloudcredential.openshift.io\", \"machine.openshift.io\", \"admissionregistration.k8s.io\", \"addons.managed.openshift.io\", \"cloudingress.managed.openshift.io\", \"managed.openshift.io\", \"ocmagent.managed.openshift.io\", \"splunkforwarder.managed.openshift.io\", \"upgrade.managed.openshift.io\" ], \"apiVersions\": [ \"*\" ], \"resources\": [ \"*/*\" ], \"scope\": \"*\" }, { \"operations\": [ \"*\" ], \"apiGroups\": [ \"autoscaling.openshift.io\" ], \"apiVersions\": [ \"*\" ], \"resources\": [ \"clusterautoscalers\", \"machineautoscalers\" ], \"scope\": \"*\" }, { \"operations\": [ \"*\" ], \"apiGroups\": [ \"config.openshift.io\" ], \"apiVersions\": [ \"*\" ], \"resources\": [ \"clusterversions\", \"clusterversions/status\", \"schedulers\", \"apiservers\", \"proxies\" ], \"scope\": \"*\" }, { \"operations\": [ \"CREATE\", \"UPDATE\", \"DELETE\" ], \"apiGroups\": [ \"\" ], \"apiVersions\": [ \"*\" ], \"resources\": [ \"configmaps\" ], \"scope\": \"*\" }, { \"operations\": [ \"*\" ], \"apiGroups\": [ \"machineconfiguration.openshift.io\" ], \"apiVersions\": [ \"*\" ], \"resources\": [ \"machineconfigs\", \"machineconfigpools\" ], \"scope\": \"*\" }, { \"operations\": [ \"*\" ], \"apiGroups\": [ \"operator.openshift.io\" ], \"apiVersions\": [ \"*\" ], \"resources\": [ \"kubeapiservers\", \"openshiftapiservers\" ], \"scope\": \"*\" }, { \"operations\": [ \"*\" ], \"apiGroups\": [ \"managed.openshift.io\" ], \"apiVersions\": [ \"*\" ], \"resources\": [ \"subjectpermissions\", \"subjectpermissions/*\" ], \"scope\": \"*\" }, { \"operations\": [ \"*\" ], \"apiGroups\": [ \"network.openshift.io\" ], \"apiVersions\": [ \"*\" ], \"resources\": [ \"netnamespaces\", \"netnamespaces/*\" ], \"scope\": \"*\" } ], \"documentString\": \"Managed OpenShift customers may not manage any objects in the following 
APIGroups [autoscaling.openshift.io network.openshift.io machine.openshift.io admissionregistration.k8s.io addons.managed.openshift.io cloudingress.managed.openshift.io splunkforwarder.managed.openshift.io upgrade.managed.openshift.io managed.openshift.io ocmagent.managed.openshift.io config.openshift.io machineconfiguration.openshift.io operator.openshift.io cloudcredential.openshift.io], nor may Managed OpenShift customers alter the APIServer, KubeAPIServer, OpenShiftAPIServer, ClusterVersion, Proxy or SubjectPermission objects.\" }, { \"webhookName\": \"scc-validation\", \"rules\": [ { \"operations\": [ \"UPDATE\", \"DELETE\" ], \"apiGroups\": [ \"security.openshift.io\" ], \"apiVersions\": [ \"*\" ], \"resources\": [ \"securitycontextconstraints\" ], \"scope\": \"Cluster\" } ], \"documentString\": \"Managed OpenShift Customers may not modify the following default SCCs: [anyuid hostaccess hostmount-anyuid hostnetwork hostnetwork-v2 node-exporter nonroot nonroot-v2 privileged restricted restricted-v2]\" }, { \"webhookName\": \"sdn-migration-validation\", \"rules\": [ { \"operations\": [ \"UPDATE\" ], \"apiGroups\": [ \"config.openshift.io\" ], \"apiVersions\": [ \"*\" ], \"resources\": [ \"networks\" ], \"scope\": \"Cluster\" } ], \"documentString\": \"Managed OpenShift customers may not modify the network config type because it can can degrade cluster operators and can interfere with OpenShift SRE monitoring.\" }, { \"webhookName\": \"service-mutation\", \"rules\": [ { \"operations\": [ \"CREATE\", \"UPDATE\" ], \"apiGroups\": [ \"\" ], \"apiVersions\": [ \"v1\" ], \"resources\": [ \"services\" ], \"scope\": \"Namespaced\" } ], \"documentString\": \"LoadBalancer-type services on Managed OpenShift clusters must contain an additional annotation for managed policy compliance.\" }, { \"webhookName\": \"serviceaccount-validation\", \"rules\": [ { \"operations\": [ \"DELETE\" ], \"apiGroups\": [ \"\" ], \"apiVersions\": [ \"v1\" ], \"resources\": [ \"serviceaccounts\" ], \"scope\": \"Namespaced\" } ], \"documentString\": \"Managed OpenShift Customers may not delete the service accounts under the managed namespaces。\" }, { \"webhookName\": \"techpreviewnoupgrade-validation\", \"rules\": [ { \"operations\": [ \"CREATE\", \"UPDATE\" ], \"apiGroups\": [ \"config.openshift.io\" ], \"apiVersions\": [ \"*\" ], \"resources\": [ \"featuregates\" ], \"scope\": \"Cluster\" } ], \"documentString\": \"Managed OpenShift Customers may not use TechPreviewNoUpgrade FeatureGate that could prevent any future ability to do a y-stream upgrade to their clusters.\" } ]", "oc run ip-lookup --image=busybox -i -t --restart=Never --rm -- /bin/sh -c \"/bin/nslookup -type=a myip.opendns.com resolver1.opendns.com | grep -E 'Address: [0-9.]+'\"", "spec: nodeSelector: role: worker", "oc adm policy remove-cluster-role-from-group self-provisioner system:authenticated:oauth", "oc adm policy add-cluster-role-to-group self-provisioner system:authenticated:oauth", "spec: nodeSelector: role: worker", "oc adm policy remove-cluster-role-from-group self-provisioner system:authenticated:oauth", "oc adm policy add-cluster-role-to-group self-provisioner system:authenticated:oauth", "rosa create ocm-role", "rosa create ocm-role --admin", "I: Creating ocm role ? Role prefix: ManagedOpenShift 1 ? Enable admin capabilities for the OCM role (optional): No 2 ? Permissions boundary ARN (optional): 3 ? Role Path (optional): 4 ? Role creation mode: auto 5 I: Creating role using 'arn:aws:iam::<ARN>:user/<UserName>' ? 
Create the 'ManagedOpenShift-OCM-Role-182' role? Yes 6 I: Created role 'ManagedOpenShift-OCM-Role-182' with ARN 'arn:aws:iam::<ARN>:role/ManagedOpenShift-OCM-Role-182' I: Linking OCM role ? OCM Role ARN: arn:aws:iam::<ARN>:role/ManagedOpenShift-OCM-Role-182 7 ? Link the 'arn:aws:iam::<ARN>:role/ManagedOpenShift-OCM-Role-182' role with organization '<AWS ARN>'? Yes 8 I: Successfully linked role-arn 'arn:aws:iam::<ARN>:role/ManagedOpenShift-OCM-Role-182' with organization account '<AWS ARN>'", "{ \"Version\": \"2012-10-17\", \"Statement\": [ { \"Effect\": \"Allow\", \"Principal\": { \"AWS\": [ \"arn:aws:iam::%{aws_account_id}:role/RH-Managed-OpenShift-Installer\" ] }, \"Action\": [ \"sts:AssumeRole\" ] } ] }", "{ \"Version\": \"2012-10-17\", \"Statement\": [ { \"Effect\": \"Allow\", \"Action\": [ \"autoscaling:DescribeAutoScalingGroups\", \"ec2:AllocateAddress\", \"ec2:AssociateAddress\", \"ec2:AssociateDhcpOptions\", \"ec2:AssociateRouteTable\", \"ec2:AttachInternetGateway\", \"ec2:AttachNetworkInterface\", \"ec2:AuthorizeSecurityGroupEgress\", \"ec2:AuthorizeSecurityGroupIngress\", \"ec2:CopyImage\", \"ec2:CreateDhcpOptions\", \"ec2:CreateInternetGateway\", \"ec2:CreateNatGateway\", \"ec2:CreateNetworkInterface\", \"ec2:CreateRoute\", \"ec2:CreateRouteTable\", \"ec2:CreateSecurityGroup\", \"ec2:CreateSubnet\", \"ec2:CreateTags\", \"ec2:CreateVolume\", \"ec2:CreateVpc\", \"ec2:CreateVpcEndpoint\", \"ec2:DeleteDhcpOptions\", \"ec2:DeleteInternetGateway\", \"ec2:DeleteNatGateway\", \"ec2:DeleteNetworkInterface\", \"ec2:DeleteRoute\", \"ec2:DeleteRouteTable\", \"ec2:DeleteSecurityGroup\", \"ec2:DeleteSnapshot\", \"ec2:DeleteSubnet\", \"ec2:DeleteTags\", \"ec2:DeleteVolume\", \"ec2:DeleteVpc\", \"ec2:DeleteVpcEndpoints\", \"ec2:DeregisterImage\", \"ec2:DescribeAccountAttributes\", \"ec2:DescribeAddresses\", \"ec2:DescribeAvailabilityZones\", \"ec2:DescribeDhcpOptions\", \"ec2:DescribeImages\", \"ec2:DescribeInstanceAttribute\", \"ec2:DescribeInstanceCreditSpecifications\", \"ec2:DescribeInstances\", \"ec2:DescribeInstanceStatus\", \"ec2:DescribeInstanceTypeOfferings\", \"ec2:DescribeInstanceTypes\", \"ec2:DescribeInternetGateways\", \"ec2:DescribeKeyPairs\", \"ec2:DescribeNatGateways\", \"ec2:DescribeNetworkAcls\", \"ec2:DescribeNetworkInterfaces\", \"ec2:DescribePrefixLists\", \"ec2:DescribeRegions\", \"ec2:DescribeReservedInstancesOfferings\", \"ec2:DescribeRouteTables\", \"ec2:DescribeSecurityGroups\", \"ec2:DescribeSecurityGroupRules\", \"ec2:DescribeSubnets\", \"ec2:DescribeTags\", \"ec2:DescribeVolumes\", \"ec2:DescribeVpcAttribute\", \"ec2:DescribeVpcClassicLink\", \"ec2:DescribeVpcClassicLinkDnsSupport\", \"ec2:DescribeVpcEndpoints\", \"ec2:DescribeVpcs\", \"ec2:DetachInternetGateway\", \"ec2:DisassociateRouteTable\", \"ec2:GetConsoleOutput\", \"ec2:GetEbsDefaultKmsKeyId\", \"ec2:ModifyInstanceAttribute\", \"ec2:ModifyNetworkInterfaceAttribute\", \"ec2:ModifySubnetAttribute\", \"ec2:ModifyVpcAttribute\", \"ec2:ReleaseAddress\", \"ec2:ReplaceRouteTableAssociation\", \"ec2:RevokeSecurityGroupEgress\", \"ec2:RevokeSecurityGroupIngress\", \"ec2:RunInstances\", \"ec2:StartInstances\", \"ec2:StopInstances\", \"ec2:TerminateInstances\", \"elasticloadbalancing:AddTags\", \"elasticloadbalancing:ApplySecurityGroupsToLoadBalancer\", \"elasticloadbalancing:AttachLoadBalancerToSubnets\", \"elasticloadbalancing:ConfigureHealthCheck\", \"elasticloadbalancing:CreateListener\", \"elasticloadbalancing:CreateLoadBalancer\", \"elasticloadbalancing:CreateLoadBalancerListeners\", 
\"elasticloadbalancing:CreateTargetGroup\", \"elasticloadbalancing:DeleteLoadBalancer\", \"elasticloadbalancing:DeleteTargetGroup\", \"elasticloadbalancing:DeregisterInstancesFromLoadBalancer\", \"elasticloadbalancing:DeregisterTargets\", \"elasticloadbalancing:DescribeAccountLimits\", \"elasticloadbalancing:DescribeInstanceHealth\", \"elasticloadbalancing:DescribeListeners\", \"elasticloadbalancing:DescribeLoadBalancerAttributes\", \"elasticloadbalancing:DescribeLoadBalancers\", \"elasticloadbalancing:DescribeTags\", \"elasticloadbalancing:DescribeTargetGroupAttributes\", \"elasticloadbalancing:DescribeTargetGroups\", \"elasticloadbalancing:DescribeTargetHealth\", \"elasticloadbalancing:ModifyLoadBalancerAttributes\", \"elasticloadbalancing:ModifyTargetGroup\", \"elasticloadbalancing:ModifyTargetGroupAttributes\", \"elasticloadbalancing:RegisterInstancesWithLoadBalancer\", \"elasticloadbalancing:RegisterTargets\", \"elasticloadbalancing:SetLoadBalancerPoliciesOfListener\", \"elasticloadbalancing:SetSecurityGroups\", \"iam:AddRoleToInstanceProfile\", \"iam:CreateInstanceProfile\", \"iam:DeleteInstanceProfile\", \"iam:GetInstanceProfile\", \"iam:TagInstanceProfile\", \"iam:GetRole\", \"iam:GetRolePolicy\", \"iam:GetUser\", \"iam:ListAttachedRolePolicies\", \"iam:ListInstanceProfiles\", \"iam:ListInstanceProfilesForRole\", \"iam:ListRolePolicies\", \"iam:ListRoles\", \"iam:ListUserPolicies\", \"iam:ListUsers\", \"iam:PassRole\", \"iam:RemoveRoleFromInstanceProfile\", \"iam:SimulatePrincipalPolicy\", \"iam:TagRole\", \"iam:UntagRole\", \"route53:ChangeResourceRecordSets\", \"route53:ChangeTagsForResource\", \"route53:CreateHostedZone\", \"route53:DeleteHostedZone\", \"route53:GetAccountLimit\", \"route53:GetChange\", \"route53:GetHostedZone\", \"route53:ListHostedZones\", \"route53:ListHostedZonesByName\", \"route53:ListResourceRecordSets\", \"route53:ListTagsForResource\", \"route53:UpdateHostedZoneComment\", \"s3:CreateBucket\", \"s3:DeleteBucket\", \"s3:DeleteObject\", \"s3:DeleteObjectVersion\", \"s3:GetAccelerateConfiguration\", \"s3:GetBucketAcl\", \"s3:GetBucketCORS\", \"s3:GetBucketLocation\", \"s3:GetBucketLogging\", \"s3:GetBucketObjectLockConfiguration\", \"s3:GetBucketPolicy\", \"s3:GetBucketRequestPayment\", \"s3:GetBucketTagging\", \"s3:GetBucketVersioning\", \"s3:GetBucketWebsite\", \"s3:GetEncryptionConfiguration\", \"s3:GetLifecycleConfiguration\", \"s3:GetObject\", \"s3:GetObjectAcl\", \"s3:GetObjectTagging\", \"s3:GetObjectVersion\", \"s3:GetReplicationConfiguration\", \"s3:ListBucket\", \"s3:ListBucketVersions\", \"s3:PutBucketAcl\", \"s3:PutBucketPolicy\", \"s3:PutBucketTagging\", \"s3:PutBucketVersioning\", \"s3:PutEncryptionConfiguration\", \"s3:PutObject\", \"s3:PutObjectAcl\", \"s3:PutObjectTagging\", \"servicequotas:GetServiceQuota\", \"servicequotas:ListAWSDefaultServiceQuotas\", \"sts:AssumeRole\", \"sts:AssumeRoleWithWebIdentity\", \"sts:GetCallerIdentity\", \"tag:GetResources\", \"tag:UntagResources\", \"ec2:CreateVpcEndpointServiceConfiguration\", \"ec2:DeleteVpcEndpointServiceConfigurations\", \"ec2:DescribeVpcEndpointServiceConfigurations\", \"ec2:DescribeVpcEndpointServicePermissions\", \"ec2:DescribeVpcEndpointServices\", \"ec2:ModifyVpcEndpointServicePermissions\", \"kms:DescribeKey\", \"cloudwatch:GetMetricData\" ], \"Resource\": \"*\" }, { \"Effect\": \"Allow\", \"Action\": [ \"secretsmanager:GetSecretValue\" ], \"Resource\": \"*\", \"Condition\": { \"StringEquals\": { \"aws:ResourceTag/red-hat-managed\": \"true\" } } } ] }", "{ \"Version\": 
\"2012-10-17\", \"Statement\": [ { \"Effect\": \"Allow\", \"Principal\": { \"Service\": [ \"ec2.amazonaws.com\" ] }, \"Action\": [ \"sts:AssumeRole\" ] } ] }", "{ \"Version\": \"2012-10-17\", \"Statement\": [ { \"Sid\": \"ReadPermissions\", \"Effect\": \"Allow\", \"Action\": [ \"ec2:DescribeAvailabilityZones\", \"ec2:DescribeInstances\", \"ec2:DescribeRouteTables\", \"ec2:DescribeSecurityGroups\", \"ec2:DescribeSubnets\", \"ec2:DescribeVpcs\", \"elasticloadbalancing:DescribeLoadBalancers\", \"elasticloadbalancing:DescribeLoadBalancerAttributes\", \"elasticloadbalancing:DescribeListeners\", \"elasticloadbalancing:DescribeTargetGroups\", \"elasticloadbalancing:DescribeTargetHealth\", \"elasticloadbalancing:DescribeLoadBalancerPolicies\" ], \"Resource\": [ \"*\" ] }, { \"Sid\": \"KMSDescribeKey\", \"Effect\": \"Allow\", \"Action\": [ \"kms:DescribeKey\" ], \"Resource\": [ \"*\" ], \"Condition\": { \"StringEquals\": { \"aws:ResourceTag/red-hat\": \"true\" } } }, { \"Effect\": \"Allow\", \"Action\": [ \"ec2:AttachVolume\", \"ec2:AuthorizeSecurityGroupIngress\", \"ec2:CreateSecurityGroup\", \"ec2:CreateTags\", \"ec2:CreateVolume\", \"ec2:DeleteSecurityGroup\", \"ec2:DeleteVolume\", \"ec2:DetachVolume\", \"ec2:ModifyInstanceAttribute\", \"ec2:ModifyVolume\", \"ec2:RevokeSecurityGroupIngress\", \"elasticloadbalancing:AddTags\", \"elasticloadbalancing:AttachLoadBalancerToSubnets\", \"elasticloadbalancing:ApplySecurityGroupsToLoadBalancer\", \"elasticloadbalancing:CreateListener\", \"elasticloadbalancing:CreateLoadBalancer\", \"elasticloadbalancing:CreateLoadBalancerPolicy\", \"elasticloadbalancing:CreateLoadBalancerListeners\", \"elasticloadbalancing:CreateTargetGroup\", \"elasticloadbalancing:ConfigureHealthCheck\", \"elasticloadbalancing:DeleteListener\", \"elasticloadbalancing:DeleteLoadBalancer\", \"elasticloadbalancing:DeleteLoadBalancerListeners\", \"elasticloadbalancing:DeleteTargetGroup\", \"elasticloadbalancing:DeregisterInstancesFromLoadBalancer\", \"elasticloadbalancing:DeregisterTargets\", \"elasticloadbalancing:DetachLoadBalancerFromSubnets\", \"elasticloadbalancing:ModifyListener\", \"elasticloadbalancing:ModifyLoadBalancerAttributes\", \"elasticloadbalancing:ModifyTargetGroup\", \"elasticloadbalancing:ModifyTargetGroupAttributes\", \"elasticloadbalancing:RegisterInstancesWithLoadBalancer\", \"elasticloadbalancing:RegisterTargets\", \"elasticloadbalancing:SetLoadBalancerPoliciesForBackendServer\", \"elasticloadbalancing:SetLoadBalancerPoliciesOfListener\" ], \"Resource\": \"*\" } ] }", "{ \"Version\": \"2012-10-17\", \"Statement\": [ { \"Effect\": \"Allow\", \"Principal\": { \"Service\": [ \"ec2.amazonaws.com\" ] }, \"Action\": [ \"sts:AssumeRole\" ] } ] }", "{ \"Version\": \"2012-10-17\", \"Statement\": [ { \"Effect\": \"Allow\", \"Action\": [ \"ec2:DescribeInstances\", \"ec2:DescribeRegions\" ], \"Resource\": \"*\" } ] }", "{ \"Version\": \"2012-10-17\", \"Statement\": [ { \"Effect\": \"Allow\", \"Principal\": { \"AWS\": [ \"arn:aws:iam::%{aws_account_id}:role/RH-Technical-Support-Access\" ] }, \"Action\": [ \"sts:AssumeRole\" ] } ] }", "{ \"Version\": \"2012-10-17\", \"Statement\": [ { \"Effect\": \"Allow\", \"Action\": [ \"cloudtrail:DescribeTrails\", \"cloudtrail:LookupEvents\", \"cloudwatch:GetMetricData\", \"cloudwatch:GetMetricStatistics\", \"cloudwatch:ListMetrics\", \"ec2-instance-connect:SendSerialConsoleSSHPublicKey\", \"ec2:CopySnapshot\", \"ec2:CreateNetworkInsightsPath\", \"ec2:CreateSnapshot\", \"ec2:CreateSnapshots\", \"ec2:CreateTags\", 
\"ec2:DeleteNetworkInsightsAnalysis\", \"ec2:DeleteNetworkInsightsPath\", \"ec2:DeleteTags\", \"ec2:DescribeAccountAttributes\", \"ec2:DescribeAddresses\", \"ec2:DescribeAddressesAttribute\", \"ec2:DescribeAggregateIdFormat\", \"ec2:DescribeAvailabilityZones\", \"ec2:DescribeByoipCidrs\", \"ec2:DescribeCapacityReservations\", \"ec2:DescribeCarrierGateways\", \"ec2:DescribeClassicLinkInstances\", \"ec2:DescribeClientVpnAuthorizationRules\", \"ec2:DescribeClientVpnConnections\", \"ec2:DescribeClientVpnEndpoints\", \"ec2:DescribeClientVpnRoutes\", \"ec2:DescribeClientVpnTargetNetworks\", \"ec2:DescribeCoipPools\", \"ec2:DescribeCustomerGateways\", \"ec2:DescribeDhcpOptions\", \"ec2:DescribeEgressOnlyInternetGateways\", \"ec2:DescribeIamInstanceProfileAssociations\", \"ec2:DescribeIdentityIdFormat\", \"ec2:DescribeIdFormat\", \"ec2:DescribeImageAttribute\", \"ec2:DescribeImages\", \"ec2:DescribeInstanceAttribute\", \"ec2:DescribeInstances\", \"ec2:DescribeInstanceStatus\", \"ec2:DescribeInstanceTypeOfferings\", \"ec2:DescribeInstanceTypes\", \"ec2:DescribeInternetGateways\", \"ec2:DescribeIpv6Pools\", \"ec2:DescribeKeyPairs\", \"ec2:DescribeLaunchTemplates\", \"ec2:DescribeLocalGatewayRouteTables\", \"ec2:DescribeLocalGatewayRouteTableVirtualInterfaceGroupAssociations\", \"ec2:DescribeLocalGatewayRouteTableVpcAssociations\", \"ec2:DescribeLocalGateways\", \"ec2:DescribeLocalGatewayVirtualInterfaceGroups\", \"ec2:DescribeLocalGatewayVirtualInterfaces\", \"ec2:DescribeManagedPrefixLists\", \"ec2:DescribeNatGateways\", \"ec2:DescribeNetworkAcls\", \"ec2:DescribeNetworkInsightsAnalyses\", \"ec2:DescribeNetworkInsightsPaths\", \"ec2:DescribeNetworkInterfaces\", \"ec2:DescribePlacementGroups\", \"ec2:DescribePrefixLists\", \"ec2:DescribePrincipalIdFormat\", \"ec2:DescribePublicIpv4Pools\", \"ec2:DescribeRegions\", \"ec2:DescribeReservedInstances\", \"ec2:DescribeRouteTables\", \"ec2:DescribeScheduledInstances\", \"ec2:DescribeSecurityGroupReferences\", \"ec2:DescribeSecurityGroupRules\", \"ec2:DescribeSecurityGroups\", \"ec2:DescribeSnapshotAttribute\", \"ec2:DescribeSnapshots\", \"ec2:DescribeSpotFleetInstances\", \"ec2:DescribeStaleSecurityGroups\", \"ec2:DescribeSubnets\", \"ec2:DescribeTags\", \"ec2:DescribeTransitGatewayAttachments\", \"ec2:DescribeTransitGatewayConnectPeers\", \"ec2:DescribeTransitGatewayConnects\", \"ec2:DescribeTransitGatewayMulticastDomains\", \"ec2:DescribeTransitGatewayPeeringAttachments\", \"ec2:DescribeTransitGatewayRouteTables\", \"ec2:DescribeTransitGateways\", \"ec2:DescribeTransitGatewayVpcAttachments\", \"ec2:DescribeVolumeAttribute\", \"ec2:DescribeVolumes\", \"ec2:DescribeVolumesModifications\", \"ec2:DescribeVolumeStatus\", \"ec2:DescribeVpcAttribute\", \"ec2:DescribeVpcClassicLink\", \"ec2:DescribeVpcClassicLinkDnsSupport\", \"ec2:DescribeVpcEndpointConnectionNotifications\", \"ec2:DescribeVpcEndpointConnections\", \"ec2:DescribeVpcEndpoints\", \"ec2:DescribeVpcEndpointServiceConfigurations\", \"ec2:DescribeVpcEndpointServicePermissions\", \"ec2:DescribeVpcEndpointServices\", \"ec2:DescribeVpcPeeringConnections\", \"ec2:DescribeVpcs\", \"ec2:DescribeVpnConnections\", \"ec2:DescribeVpnGateways\", \"ec2:GetAssociatedIpv6PoolCidrs\", \"ec2:GetConsoleOutput\", \"ec2:GetManagedPrefixListEntries\", \"ec2:GetSerialConsoleAccessStatus\", \"ec2:GetTransitGatewayAttachmentPropagations\", \"ec2:GetTransitGatewayMulticastDomainAssociations\", \"ec2:GetTransitGatewayPrefixListReferences\", \"ec2:GetTransitGatewayRouteTableAssociations\", 
\"ec2:GetTransitGatewayRouteTablePropagations\", \"ec2:ModifyInstanceAttribute\", \"ec2:RebootInstances\", \"ec2:RunInstances\", \"ec2:SearchLocalGatewayRoutes\", \"ec2:SearchTransitGatewayMulticastGroups\", \"ec2:SearchTransitGatewayRoutes\", \"ec2:StartInstances\", \"ec2:StartNetworkInsightsAnalysis\", \"ec2:StopInstances\", \"ec2:TerminateInstances\", \"elasticloadbalancing:ConfigureHealthCheck\", \"elasticloadbalancing:DescribeAccountLimits\", \"elasticloadbalancing:DescribeInstanceHealth\", \"elasticloadbalancing:DescribeListenerCertificates\", \"elasticloadbalancing:DescribeListeners\", \"elasticloadbalancing:DescribeLoadBalancerAttributes\", \"elasticloadbalancing:DescribeLoadBalancerPolicies\", \"elasticloadbalancing:DescribeLoadBalancerPolicyTypes\", \"elasticloadbalancing:DescribeLoadBalancers\", \"elasticloadbalancing:DescribeRules\", \"elasticloadbalancing:DescribeSSLPolicies\", \"elasticloadbalancing:DescribeTags\", \"elasticloadbalancing:DescribeTargetGroupAttributes\", \"elasticloadbalancing:DescribeTargetGroups\", \"elasticloadbalancing:DescribeTargetHealth\", \"iam:GetRole\", \"iam:ListRoles\", \"kms:CreateGrant\", \"route53:GetHostedZone\", \"route53:GetHostedZoneCount\", \"route53:ListHostedZones\", \"route53:ListHostedZonesByName\", \"route53:ListResourceRecordSets\", \"s3:GetBucketTagging\", \"s3:GetObjectAcl\", \"s3:GetObjectTagging\", \"s3:ListAllMyBuckets\", \"sts:DecodeAuthorizationMessage\", \"tiros:CreateQuery\", \"tiros:GetQueryAnswer\", \"tiros:GetQueryExplanation\" ], \"Resource\": \"*\" }, { \"Effect\": \"Allow\", \"Action\": [ \"s3:ListBucket\" ], \"Resource\": [ \"arn:aws:s3:::managed-velero*\", \"arn:aws:s3:::*image-registry*\" ] } ] }", "{ \"Version\": \"2012-10-17\", \"Statement\": [ { \"Effect\": \"Allow\", \"Principal\": { \"AWS\": [ \"arn:aws:iam::%{aws_account_id}:role/RH-Managed-OpenShift-Installer\" ] }, \"Action\": [ \"sts:AssumeRole\" ], \"Condition\": {\"StringEquals\": {\"sts:ExternalId\": \"%{ocm_organization_id}\"}} } ] }", "{ \"Version\": \"2012-10-17\", \"Statement\": [ { \"Effect\": \"Allow\", \"Principal\": { \"AWS\": [ \"arn:aws:iam::%{aws_account_id}:role/RH-Managed-OpenShift-Installer\" ] }, \"Action\": [ \"sts:AssumeRole\" ] } ] }", "I: Attached trust policy to role 'testrole-Worker-Role(https://console.aws.amazon.com/iam/home?#/roles/testrole-Worker-Role)': ******************", "I: Attached trust policy to role 'test-Support-Role': {\"Version\": \"2012-10-17\", \"Statement\": [{\"Action\": [\"sts:AssumeRole\"], \"Effect\": \"Allow\", \"Principal\": {\"AWS\": [\"arn:aws:iam::000000000000:role/RH-Technical-Support-00000000\"]}}]}", "I: Attached policy 'ROSASRESupportPolicy(https://docs.aws.amazon.com/aws-managed-policy/latest/reference/ROSASRESupportPolicy)' to role 'test-HCP-ROSA-Support-Role(https://console.aws.amazon.com/iam/home?#/roles/test-HCP-ROSA-Support-Role)'", "I: Attached policy 'arn:aws:iam::000000000000:policy/testrole-Worker-Role-Policy' to role 'testrole-Worker-Role(https://console.aws.amazon.com/iam/home?#/roles/testrole-Worker-Role)'", "{ \"Version\": \"2012-10-17\", \"Statement\": [ { \"Effect\": \"Allow\", \"Action\": [ \"elasticloadbalancing:DescribeLoadBalancers\", \"route53:ListHostedZones\", \"route53:ListTagsForResources\", \"route53:ChangeResourceRecordSets\", \"tag:GetResources\" ], \"Resource\": \"*\" } ] }", "{ \"Version\": \"2012-10-17\", \"Statement\": [ { \"Effect\": \"Allow\", \"Action\": [ \"ec2:AttachVolume\", \"ec2:CreateSnapshot\", \"ec2:CreateTags\", \"ec2:CreateVolume\", \"ec2:DeleteSnapshot\", 
\"ec2:DeleteTags\", \"ec2:DeleteVolume\", \"ec2:DescribeAvailabilityZones\", \"ec2:DescribeInstances\", \"ec2:DescribeSnapshots\", \"ec2:DescribeTags\", \"ec2:DescribeVolumes\", \"ec2:DescribeVolumesModifications\", \"ec2:DetachVolume\", \"ec2:EnableFastSnapshotRestores\", \"ec2:ModifyVolume\" ], \"Resource\": \"*\" } ] }", "{ \"Version\": \"2012-10-17\", \"Statement\": [ { \"Effect\": \"Allow\", \"Action\": [ \"ec2:CreateTags\", \"ec2:DescribeAvailabilityZones\", \"ec2:DescribeDhcpOptions\", \"ec2:DescribeImages\", \"ec2:DescribeInstances\", \"ec2:DescribeInternetGateways\", \"ec2:DescribeInstanceTypes\", \"ec2:DescribeSecurityGroups\", \"ec2:DescribeRegions\", \"ec2:DescribeSubnets\", \"ec2:DescribeVpcs\", \"ec2:RunInstances\", \"ec2:TerminateInstances\", \"elasticloadbalancing:DescribeLoadBalancers\", \"elasticloadbalancing:DescribeTargetGroups\", \"elasticloadbalancing:DescribeTargetHealth\", \"elasticloadbalancing:RegisterInstancesWithLoadBalancer\", \"elasticloadbalancing:RegisterTargets\", \"elasticloadbalancing:DeregisterTargets\", \"iam:PassRole\", \"iam:CreateServiceLinkedRole\" ], \"Resource\": \"*\" }, { \"Effect\": \"Allow\", \"Action\": [ \"kms:Decrypt\", \"kms:Encrypt\", \"kms:GenerateDataKey\", \"kms:GenerateDataKeyWithoutPlainText\", \"kms:DescribeKey\" ], \"Resource\": \"*\" }, { \"Effect\": \"Allow\", \"Action\": [ \"kms:RevokeGrant\", \"kms:CreateGrant\", \"kms:ListGrants\" ], \"Resource\": \"*\", \"Condition\": { \"Bool\": { \"kms:GrantIsForAWSResource\": true } } } ] }", "{ \"Version\": \"2012-10-17\", \"Statement\": [ { \"Effect\": \"Allow\", \"Action\": [ \"iam:GetUser\", \"iam:GetUserPolicy\", \"iam:ListAccessKeys\" ], \"Resource\": \"*\" } ] }", "{ \"Version\": \"2012-10-17\", \"Statement\": [ { \"Effect\": \"Allow\", \"Action\": [ \"s3:CreateBucket\", \"s3:DeleteBucket\", \"s3:PutBucketTagging\", \"s3:GetBucketTagging\", \"s3:PutBucketPublicAccessBlock\", \"s3:GetBucketPublicAccessBlock\", \"s3:PutEncryptionConfiguration\", \"s3:GetEncryptionConfiguration\", \"s3:PutLifecycleConfiguration\", \"s3:GetLifecycleConfiguration\", \"s3:GetBucketLocation\", \"s3:ListBucket\", \"s3:GetObject\", \"s3:PutObject\", \"s3:DeleteObject\", \"s3:ListBucketMultipartUploads\", \"s3:AbortMultipartUpload\", \"s3:ListMultipartUploadParts\" ], \"Resource\": \"*\" } ] }", "rosa create account-roles --mode manual", "aws iam create-role --role-name ManagedOpenShift-Installer-Role --assume-role-policy-document file://sts_installer_trust_policy.json --tags Key=rosa_openshift_version,Value=<openshift_version> Key=rosa_role_prefix,Value=ManagedOpenShift Key=rosa_role_type,Value=installer aws iam put-role-policy --role-name ManagedOpenShift-Installer-Role --policy-name ManagedOpenShift-Installer-Role-Policy --policy-document file://sts_installer_permission_policy.json aws iam create-role --role-name ManagedOpenShift-ControlPlane-Role --assume-role-policy-document file://sts_instance_controlplane_trust_policy.json --tags Key=rosa_openshift_version,Value=<openshift_version> Key=rosa_role_prefix,Value=ManagedOpenShift Key=rosa_role_type,Value=instance_controlplane aws iam put-role-policy --role-name ManagedOpenShift-ControlPlane-Role --policy-name ManagedOpenShift-ControlPlane-Role-Policy --policy-document file://sts_instance_controlplane_permission_policy.json aws iam create-role --role-name ManagedOpenShift-Worker-Role --assume-role-policy-document file://sts_instance_worker_trust_policy.json --tags Key=rosa_openshift_version,Value=<openshift_version> 
Key=rosa_role_prefix,Value=ManagedOpenShift Key=rosa_role_type,Value=instance_worker aws iam put-role-policy --role-name ManagedOpenShift-Worker-Role --policy-name ManagedOpenShift-Worker-Role-Policy --policy-document file://sts_instance_worker_permission_policy.json aws iam create-role --role-name ManagedOpenShift-Support-Role --assume-role-policy-document file://sts_support_trust_policy.json --tags Key=rosa_openshift_version,Value=<openshift_version> Key=rosa_role_prefix,Value=ManagedOpenShift Key=rosa_role_type,Value=support aws iam put-role-policy --role-name ManagedOpenShift-Support-Role --policy-name ManagedOpenShift-Support-Role-Policy --policy-document file://sts_support_permission_policy.json aws iam create-policy --policy-name ManagedOpenShift-openshift-ingress-operator-cloud-credentials --policy-document file://openshift_ingress_operator_cloud_credentials_policy.json --tags Key=rosa_openshift_version,Value=<openshift_version> Key=rosa_role_prefix,Value=ManagedOpenShift Key=operator_namespace,Value=openshift-ingress-operator Key=operator_name,Value=cloud-credentials aws iam create-policy --policy-name ManagedOpenShift-openshift-cluster-csi-drivers-ebs-cloud-credent --policy-document file://openshift_cluster_csi_drivers_ebs_cloud_credentials_policy.json --tags Key=rosa_openshift_version,Value=<openshift_version> Key=rosa_role_prefix,Value=ManagedOpenShift Key=operator_namespace,Value=openshift-cluster-csi-drivers Key=operator_name,Value=ebs-cloud-credentials aws iam create-policy --policy-name ManagedOpenShift-openshift-machine-api-aws-cloud-credentials --policy-document file://openshift_machine_api_aws_cloud_credentials_policy.json --tags Key=rosa_openshift_version,Value=<openshift_version> Key=rosa_role_prefix,Value=ManagedOpenShift Key=operator_namespace,Value=openshift-machine-api Key=operator_name,Value=aws-cloud-credentials aws iam create-policy --policy-name ManagedOpenShift-openshift-cloud-credential-operator-cloud-crede --policy-document file://openshift_cloud_credential_operator_cloud_credential_operator_iam_ro_creds_policy.json --tags Key=rosa_openshift_version,Value=<openshift_version> Key=rosa_role_prefix,Value=ManagedOpenShift Key=operator_namespace,Value=openshift-cloud-credential-operator Key=operator_name,Value=cloud-credential-operator-iam-ro-creds aws iam create-policy --policy-name ManagedOpenShift-openshift-image-registry-installer-cloud-creden --policy-document file://openshift_image_registry_installer_cloud_credentials_policy.json --tags Key=rosa_openshift_version,Value=<openshift_version> Key=rosa_role_prefix,Value=ManagedOpenShift Key=operator_namespace,Value=openshift-image-registry Key=operator_name,Value=installer-cloud-credentials", "rosa create account-roles --mode auto", "I: Creating roles using 'arn:aws:iam::<ARN>:user/<UserID>' ? Create the 'ManagedOpenShift-Installer-Role' role? Yes I: Created role 'ManagedOpenShift-Installer-Role' with ARN 'arn:aws:iam::<ARN>:role/ManagedOpenShift-Installer-Role' ? Create the 'ManagedOpenShift-ControlPlane-Role' role? Yes I: Created role 'ManagedOpenShift-ControlPlane-Role' with ARN 'arn:aws:iam::<ARN>:role/ManagedOpenShift-ControlPlane-Role' ? Create the 'ManagedOpenShift-Worker-Role' role? Yes I: Created role 'ManagedOpenShift-Worker-Role' with ARN 'arn:aws:iam::<ARN>:role/ManagedOpenShift-Worker-Role' ? Create the 'ManagedOpenShift-Support-Role' role? Yes I: Created role 'ManagedOpenShift-Support-Role' with ARN 'arn:aws:iam::<ARN>:role/ManagedOpenShift-Support-Role' ? Create the operator policies? 
Yes I: Created policy with ARN 'arn:aws:iam::<ARN>:policy/ManagedOpenShift-openshift-machine-api-aws-cloud-credentials' I: Created policy with ARN 'arn:aws:iam::<ARN>:policy/ManagedOpenShift-openshift-cloud-credential-operator-cloud-crede' I: Created policy with ARN 'arn:aws:iam::<ARN>:policy/ManagedOpenShift-openshift-image-registry-installer-cloud-creden' I: Created policy with ARN 'arn:aws:iam::<ARN>:policy/ManagedOpenShift-openshift-ingress-operator-cloud-credentials' I: Created policy with ARN 'arn:aws:iam::<ARN>:policy/ManagedOpenShift-openshift-cluster-csi-drivers-ebs-cloud-credent' I: Created policy with ARN 'arn:aws:iam::<ARN>:policy/ManagedOpenShift-openshift-cloud-network-config-controller-cloud' I: To create a cluster with these roles, run the following command: rosa create cluster --sts", "{ \"Version\": \"2012-10-17\", \"Statement\": [ { \"Effect\": \"Allow\", \"Action\": [ \"autoscaling:DescribeAutoScalingGroups\", \"ec2:AllocateAddress\", \"ec2:AssociateAddress\", \"ec2:AttachNetworkInterface\", \"ec2:AuthorizeSecurityGroupEgress\", \"ec2:AuthorizeSecurityGroupIngress\", \"ec2:CopyImage\", \"ec2:CreateNetworkInterface\", \"ec2:CreateSecurityGroup\", \"ec2:CreateTags\", \"ec2:CreateVolume\", \"ec2:DeleteNetworkInterface\", \"ec2:DeleteSecurityGroup\", \"ec2:DeleteSnapshot\", \"ec2:DeleteTags\", \"ec2:DeleteVolume\", \"ec2:DeregisterImage\", \"ec2:DescribeAccountAttributes\", \"ec2:DescribeAddresses\", \"ec2:DescribeAvailabilityZones\", \"ec2:DescribeDhcpOptions\", \"ec2:DescribeImages\", \"ec2:DescribeInstanceAttribute\", \"ec2:DescribeInstanceCreditSpecifications\", \"ec2:DescribeInstances\", \"ec2:DescribeInstanceStatus\", \"ec2:DescribeInstanceTypeOfferings\", \"ec2:DescribeInstanceTypes\", \"ec2:DescribeInternetGateways\", \"ec2:DescribeKeyPairs\", \"ec2:DescribeNatGateways\", \"ec2:DescribeNetworkAcls\", \"ec2:DescribeNetworkInterfaces\", \"ec2:DescribePrefixLists\", \"ec2:DescribeRegions\", \"ec2:DescribeReservedInstancesOfferings\", \"ec2:DescribeRouteTables\", \"ec2:DescribeSecurityGroups\", \"ec2:DescribeSecurityGroupRules\", \"ec2:DescribeSubnets\", \"ec2:DescribeTags\", \"ec2:DescribeVolumes\", \"ec2:DescribeVpcAttribute\", \"ec2:DescribeVpcClassicLink\", \"ec2:DescribeVpcClassicLinkDnsSupport\", \"ec2:DescribeVpcEndpoints\", \"ec2:DescribeVpcs\", \"ec2:GetConsoleOutput\", \"ec2:GetEbsDefaultKmsKeyId\", \"ec2:ModifyInstanceAttribute\", \"ec2:ModifyNetworkInterfaceAttribute\", \"ec2:ReleaseAddress\", \"ec2:RevokeSecurityGroupEgress\", \"ec2:RevokeSecurityGroupIngress\", \"ec2:RunInstances\", \"ec2:StartInstances\", \"ec2:StopInstances\", \"ec2:TerminateInstances\", \"elasticloadbalancing:AddTags\", \"elasticloadbalancing:ApplySecurityGroupsToLoadBalancer\", \"elasticloadbalancing:AttachLoadBalancerToSubnets\", \"elasticloadbalancing:ConfigureHealthCheck\", \"elasticloadbalancing:CreateListener\", \"elasticloadbalancing:CreateLoadBalancer\", \"elasticloadbalancing:CreateLoadBalancerListeners\", \"elasticloadbalancing:CreateTargetGroup\", \"elasticloadbalancing:DeleteLoadBalancer\", \"elasticloadbalancing:DeleteTargetGroup\", \"elasticloadbalancing:DeregisterInstancesFromLoadBalancer\", \"elasticloadbalancing:DeregisterTargets\", \"elasticloadbalancing:DescribeInstanceHealth\", \"elasticloadbalancing:DescribeListeners\", \"elasticloadbalancing:DescribeLoadBalancerAttributes\", \"elasticloadbalancing:DescribeLoadBalancers\", \"elasticloadbalancing:DescribeTags\", \"elasticloadbalancing:DescribeTargetGroupAttributes\", \"elasticloadbalancing:DescribeTargetGroups\", 
\"elasticloadbalancing:DescribeTargetHealth\", \"elasticloadbalancing:ModifyLoadBalancerAttributes\", \"elasticloadbalancing:ModifyTargetGroup\", \"elasticloadbalancing:ModifyTargetGroupAttributes\", \"elasticloadbalancing:RegisterInstancesWithLoadBalancer\", \"elasticloadbalancing:RegisterTargets\", \"elasticloadbalancing:SetLoadBalancerPoliciesOfListener\", \"elasticloadbalancing:SetSecurityGroups\", \"iam:AddRoleToInstanceProfile\", \"iam:CreateInstanceProfile\", \"iam:DeleteInstanceProfile\", \"iam:GetInstanceProfile\", \"iam:TagInstanceProfile\", \"iam:GetRole\", \"iam:GetRolePolicy\", \"iam:GetUser\", \"iam:ListAttachedRolePolicies\", \"iam:ListInstanceProfiles\", \"iam:ListInstanceProfilesForRole\", \"iam:ListRolePolicies\", \"iam:ListRoles\", \"iam:ListUserPolicies\", \"iam:ListUsers\", \"iam:PassRole\", \"iam:RemoveRoleFromInstanceProfile\", \"iam:SimulatePrincipalPolicy\", \"iam:TagRole\", \"iam:UntagRole\", \"route53:ChangeResourceRecordSets\", \"route53:ChangeTagsForResource\", \"route53:CreateHostedZone\", \"route53:DeleteHostedZone\", \"route53:GetAccountLimit\", \"route53:GetChange\", \"route53:GetHostedZone\", \"route53:ListHostedZones\", \"route53:ListHostedZonesByName\", \"route53:ListResourceRecordSets\", \"route53:ListTagsForResource\", \"route53:UpdateHostedZoneComment\", \"s3:CreateBucket\", \"s3:DeleteBucket\", \"s3:DeleteObject\", \"s3:GetAccelerateConfiguration\", \"s3:GetBucketAcl\", \"s3:GetBucketCORS\", \"s3:GetBucketLocation\", \"s3:GetBucketLogging\", \"s3:GetBucketObjectLockConfiguration\", \"s3:GetBucketPolicy\", \"s3:GetBucketRequestPayment\", \"s3:GetBucketTagging\", \"s3:GetBucketVersioning\", \"s3:GetBucketWebsite\", \"s3:GetEncryptionConfiguration\", \"s3:GetLifecycleConfiguration\", \"s3:GetObject\", \"s3:GetObjectAcl\", \"s3:GetObjectTagging\", \"s3:GetObjectVersion\", \"s3:GetReplicationConfiguration\", \"s3:ListBucket\", \"s3:ListBucketVersions\", \"s3:PutBucketAcl\", \"s3:PutBucketPolicy\", \"s3:PutBucketTagging\", \"s3:PutEncryptionConfiguration\", \"s3:PutObject\", \"s3:PutObjectAcl\", \"s3:PutObjectTagging\", \"servicequotas:GetServiceQuota\", \"servicequotas:ListAWSDefaultServiceQuotas\", \"sts:AssumeRole\", \"sts:AssumeRoleWithWebIdentity\", \"sts:GetCallerIdentity\", \"tag:GetResources\", \"tag:UntagResources\", \"kms:DescribeKey\", \"cloudwatch:GetMetricData\", \"ec2:CreateRoute\", \"ec2:DeleteRoute\", \"ec2:CreateVpcEndpoint\", \"ec2:DeleteVpcEndpoints\", \"ec2:CreateVpcEndpointServiceConfiguration\", \"ec2:DeleteVpcEndpointServiceConfigurations\", \"ec2:DescribeVpcEndpointServiceConfigurations\", \"ec2:DescribeVpcEndpointServicePermissions\", \"ec2:DescribeVpcEndpointServices\", \"ec2:ModifyVpcEndpointServicePermissions\" ], \"Resource\": \"*\" }, { \"Effect\": \"Allow\", \"Action\": [ \"secretsmanager:GetSecretValue\" ], \"Resource\": \"*\", \"Condition\": { \"StringEquals\": { \"aws:ResourceTag/red-hat-managed\": \"true\" } } } ] }", "curl -o ./rosa-installer-core.json https://raw.githubusercontent.com/openshift/managed-cluster-config/master/resources/sts/4.18/sts_installer_core_permission_boundary_policy.json", "aws iam create-policy --policy-name rosa-core-permissions-boundary-policy --policy-document file://./rosa-installer-core.json --description \"ROSA installer core permission boundary policy, the minimum permission set, allows BYO-VPC, disallows PrivateLink\"", "{ \"Policy\": { \"PolicyName\": \"rosa-core-permissions-boundary-policy\", \"PolicyId\": \"<Policy ID>\", \"Arn\": \"arn:aws:iam::<account 
ID>:policy/rosa-core-permissions-boundary-policy\", \"Path\": \"/\", \"DefaultVersionId\": \"v1\", \"AttachmentCount\": 0, \"PermissionsBoundaryUsageCount\": 0, \"IsAttachable\": true, \"CreateDate\": \"<CreateDate>\", \"UpdateDate\": \"<UpdateDate>\" } }", "aws iam put-role-permissions-boundary --role-name ManagedOpenShift-Installer-Role --permissions-boundary arn:aws:iam::<account ID>:policy/rosa-core-permissions-boundary-policy", "aws iam get-role --role-name ManagedOpenShift-Installer-Role --output text | grep PERMISSIONSBOUNDARY", "PERMISSIONSBOUNDARY arn:aws:iam::<account ID>:policy/rosa-core-permissions-boundary-policy Policy", "{ \"Version\": \"2012-10-17\", \"Statement\": [ { \"Effect\": \"Allow\", \"Action\": [ \"ec2:ModifyVpcEndpointServiceConfiguration\", \"route53:ListHostedZonesByVPC\", \"route53:CreateVPCAssociationAuthorization\", \"route53:AssociateVPCWithHostedZone\", \"route53:DeleteVPCAssociationAuthorization\", \"route53:DisassociateVPCFromHostedZone\", \"route53:ChangeResourceRecordSets\" ], \"Resource\": \"*\" } ] }", "{ \"Version\": \"2012-10-17\", \"Statement\": [ { \"Effect\": \"Allow\", \"Action\": [ \"ec2:AssociateDhcpOptions\", \"ec2:AssociateRouteTable\", \"ec2:AttachInternetGateway\", \"ec2:CreateDhcpOptions\", \"ec2:CreateInternetGateway\", \"ec2:CreateNatGateway\", \"ec2:CreateRouteTable\", \"ec2:CreateSubnet\", \"ec2:CreateVpc\", \"ec2:DeleteDhcpOptions\", \"ec2:DeleteInternetGateway\", \"ec2:DeleteNatGateway\", \"ec2:DeleteRouteTable\", \"ec2:DeleteSubnet\", \"ec2:DeleteVpc\", \"ec2:DetachInternetGateway\", \"ec2:DisassociateRouteTable\", \"ec2:ModifySubnetAttribute\", \"ec2:ModifyVpcAttribute\", \"ec2:ReplaceRouteTableAssociation\" ], \"Resource\": \"*\" } ] }", "rosa create operator-roles --mode manual --cluster <cluster_name>", "aws iam create-role --role-name <cluster_name>-<hash>-openshift-cluster-csi-drivers-ebs-cloud-credent --assume-role-policy-document file://operator_cluster_csi_drivers_ebs_cloud_credentials_policy.json --tags Key=rosa_cluster_id,Value=<id> Key=rosa_openshift_version,Value=<openshift_version> Key=rosa_role_prefix,Value= Key=operator_namespace,Value=openshift-cluster-csi-drivers Key=operator_name,Value=ebs-cloud-credentials aws iam attach-role-policy --role-name <cluster_name>-<hash>-openshift-cluster-csi-drivers-ebs-cloud-credent --policy-arn arn:aws:iam::<aws_account_id>:policy/ManagedOpenShift-openshift-cluster-csi-drivers-ebs-cloud-credent aws iam create-role --role-name <cluster_name>-<hash>-openshift-machine-api-aws-cloud-credentials --assume-role-policy-document file://operator_machine_api_aws_cloud_credentials_policy.json --tags Key=rosa_cluster_id,Value=<id> Key=rosa_openshift_version,Value=<openshift_version> Key=rosa_role_prefix,Value= Key=operator_namespace,Value=openshift-machine-api Key=operator_name,Value=aws-cloud-credentials aws iam attach-role-policy --role-name <cluster_name>-<hash>-openshift-machine-api-aws-cloud-credentials --policy-arn arn:aws:iam::<aws_account_id>:policy/ManagedOpenShift-openshift-machine-api-aws-cloud-credentials aws iam create-role --role-name <cluster_name>-<hash>-openshift-cloud-credential-operator-cloud-crede --assume-role-policy-document file://operator_cloud_credential_operator_cloud_credential_operator_iam_ro_creds_policy.json --tags Key=rosa_cluster_id,Value=<id> Key=rosa_openshift_version,Value=<openshift_version> Key=rosa_role_prefix,Value= Key=operator_namespace,Value=openshift-cloud-credential-operator Key=operator_name,Value=cloud-credential-operator-iam-ro-creds aws iam 
attach-role-policy --role-name <cluster_name>-<hash>-openshift-cloud-credential-operator-cloud-crede --policy-arn arn:aws:iam::<aws_account_id>:policy/ManagedOpenShift-openshift-cloud-credential-operator-cloud-crede aws iam create-role --role-name <cluster_name>-<hash>-openshift-image-registry-installer-cloud-creden --assume-role-policy-document file://operator_image_registry_installer_cloud_credentials_policy.json --tags Key=rosa_cluster_id,Value=<id> Key=rosa_openshift_version,Value=<openshift_version> Key=rosa_role_prefix,Value= Key=operator_namespace,Value=openshift-image-registry Key=operator_name,Value=installer-cloud-credentials aws iam attach-role-policy --role-name <cluster_name>-<hash>-openshift-image-registry-installer-cloud-creden --policy-arn arn:aws:iam::<aws_account_id>:policy/ManagedOpenShift-openshift-image-registry-installer-cloud-creden aws iam create-role --role-name <cluster_name>-<hash>-openshift-ingress-operator-cloud-credentials --assume-role-policy-document file://operator_ingress_operator_cloud_credentials_policy.json --tags Key=rosa_cluster_id,Value=<id> Key=rosa_openshift_version,Value=<openshift_version> Key=rosa_role_prefix,Value= Key=operator_namespace,Value=openshift-ingress-operator Key=operator_name,Value=cloud-credentials aws iam attach-role-policy --role-name <cluster_name>-<hash>-openshift-ingress-operator-cloud-credentials --policy-arn arn:aws:iam::<aws_account_id>:policy/ManagedOpenShift-openshift-ingress-operator-cloud-credentials", "rosa create oidc-provider --mode manual --cluster <cluster_name>", "aws iam create-open-id-connect-provider --url https://oidc.op1.openshiftapps.com/<oidc_config_id> \\ 1 --client-id-list openshift sts.<aws_region>.amazonaws.com --thumbprint-list <thumbprint> 2", "rosa create oidc-provider --oidc-config-id <oidc_config_id> --mode auto -y", "I: Creating OIDC provider using 'arn:aws:iam::4540112244:user/userName' I: Created OIDC provider with ARN 'arn:aws:iam::4540112244:oidc-provider/dvbwgdztaeq9o.cloudfront.net/241rh9ql5gpu99d7leokhvkp8icnalpf'", "rosa create oidc-config --mode=auto --yes", "? Would you like to create a Managed (Red Hat hosted) OIDC Configuration Yes I: Setting up managed OIDC configuration I: To create Operator Roles for this OIDC Configuration, run the following command and remember to replace <user-defined> with a prefix of your choice: rosa create operator-roles --prefix <user-defined> --oidc-config-id 13cdr6b If you are going to create a Hosted Control Plane cluster please include '--hosted-cp' I: Creating OIDC provider using 'arn:aws:iam::4540112244:user/userName' ? Create the OIDC provider? Yes I: Created OIDC provider with ARN 'arn:aws:iam::4540112244:oidc-provider/dvbwgdztaeq9o.cloudfront.net/13cdr6b'", "export OIDC_ID=<oidc_config_id> 1", "echo USDOIDC_ID", "13cdr6b", "rosa list oidc-config", "ID MANAGED ISSUER URL SECRET ARN 2330dbs0n8m3chkkr25gkkcd8pnj3lk2 true https://dvbwgdztaeq9o.cloudfront.net/2330dbs0n8m3chkkr25gkkcd8pnj3lk2 233hvnrjoqu14jltk6lhbhf2tj11f8un false https://oidc-r7u1.s3.us-east-1.amazonaws.com aws:secretsmanager:us-east-1:242819244:secret:rosa-private-key-oidc-r7u1-tM3MDN", "rosa create oidc-config --raw-files", "rosa create oidc-config --mode=<auto|manual>", "rosa create oidc-config --managed", "W: For a managed OIDC Config only auto mode is supported. However, you may choose the provider creation mode ? 
OIDC Provider creation mode: auto I: Setting up managed OIDC configuration I: Please run the following command to create a cluster with this oidc config rosa create cluster --sts --oidc-config-id 233jnu62i9aphpucsj9kueqlkr1vcgra I: Creating OIDC provider using 'arn:aws:iam::242819244:user/userName' ? Create the OIDC provider? Yes I: Created OIDC provider with ARN 'arn:aws:iam::242819244:oidc-provider/dvbwgdztaeq9o.cloudfront.net/233jnu62i9aphpucsj9kueqlkr1vcgra'", "rosa create oidc-config --mode=auto --yes", "? Would you like to create a Managed (Red Hat hosted) OIDC Configuration Yes I: Setting up managed OIDC configuration I: To create Operator Roles for this OIDC Configuration, run the following command and remember to replace <user-defined> with a prefix of your choice: rosa create operator-roles --prefix <user-defined> --oidc-config-id 13cdr6b If you are going to create a Hosted Control Plane cluster please include '--hosted-cp' I: Creating OIDC provider using 'arn:aws:iam::4540112244:user/userName' ? Create the OIDC provider? Yes I: Created OIDC provider with ARN 'arn:aws:iam::4540112244:oidc-provider/dvbwgdztaeq9o.cloudfront.net/13cdr6b'", "export OIDC_ID=<oidc_config_id> 1", "echo USDOIDC_ID", "13cdr6b", "rosa list oidc-config", "ID MANAGED ISSUER URL SECRET ARN 2330dbs0n8m3chkkr25gkkcd8pnj3lk2 true https://dvbwgdztaeq9o.cloudfront.net/2330dbs0n8m3chkkr25gkkcd8pnj3lk2 233hvnrjoqu14jltk6lhbhf2tj11f8un false https://oidc-r7u1.s3.us-east-1.amazonaws.com aws:secretsmanager:us-east-1:242819244:secret:rosa-private-key-oidc-r7u1-tM3MDN", "rosa create oidc-config --raw-files", "rosa create oidc-config --mode=<auto|manual>", "rosa create oidc-config --managed", "W: For a managed OIDC Config only auto mode is supported. However, you may choose the provider creation mode ? OIDC Provider creation mode: auto I: Setting up managed OIDC configuration I: Please run the following command to create a cluster with this oidc config rosa create cluster --sts --oidc-config-id 233jnu62i9aphpucsj9kueqlkr1vcgra I: Creating OIDC provider using 'arn:aws:iam::242819244:user/userName' ? Create the OIDC provider? Yes I: Created OIDC provider with ARN 'arn:aws:iam::242819244:oidc-provider/dvbwgdztaeq9o.cloudfront.net/233jnu62i9aphpucsj9kueqlkr1vcgra'", "rosa create oidc-provider --mode manual --cluster <cluster_name>", "aws iam create-open-id-connect-provider --url https://oidc.op1.openshiftapps.com/<oidc_config_id> \\ 1 --client-id-list openshift sts.<aws_region>.amazonaws.com --thumbprint-list <thumbprint> 2", "rosa create oidc-provider --oidc-config-id <oidc_config_id> --mode auto -y", "I: Creating OIDC provider using 'arn:aws:iam::4540112244:user/userName' I: Created OIDC provider with ARN 'arn:aws:iam::4540112244:oidc-provider/dvbwgdztaeq9o.cloudfront.net/241rh9ql5gpu99d7leokhvkp8icnalpf'" ]
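Taken together, the OIDC-related commands listed above form a short workflow: create a reusable OIDC configuration, create the cluster-specific Operator roles against it, and then create the OIDC provider. The following is a minimal sketch of that workflow in auto mode; the my-prefix role prefix is an illustrative assumption, and the exported configuration ID is a placeholder for whatever ID the first command reports.

# Create a Red Hat managed OIDC configuration and list the result.
rosa create oidc-config --mode=auto --yes
rosa list oidc-config

# Placeholder: substitute the configuration ID printed above.
export OIDC_ID=<oidc_config_id>

# Create the Operator roles for that OIDC configuration.
# "my-prefix" is an assumed, user-chosen prefix; append --hosted-cp
# when the cluster uses hosted control planes, as the command output above notes.
rosa create operator-roles --prefix my-prefix --oidc-config-id "$OIDC_ID"

# Create the OIDC provider for the configuration.
rosa create oidc-provider --oidc-config-id "$OIDC_ID" --mode auto -y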
https://docs.redhat.com/en/documentation/red_hat_openshift_service_on_aws/4/html-single/introduction_to_rosa/index
4.44. device-mapper-multipath
4.44. device-mapper-multipath 4.44.1. RHBA-2011-1527 - device-mapper-multipath bug fix and enhancement update Updated device-mapper-multipath packages that fix multiple bugs and add various enhancements are now available for Red Hat Enterprise Linux 6. The device-mapper-multipath packages provide tools to manage multipath devices using the device-mapper multipath kernel module. Bug Fixes BZ# 677449 DM Multipath removed a device if it failed to check the device status due to insufficient memory. This happened because the command checking whether the device map existed failed when the system returned an error. With this update, Multipath no longer returns an error under these circumstances and no devices are removed if the system runs out of memory while checking device status. BZ# 678673 If a device-mapper-multipath device was open but all attached device paths had been lost, the device was unable to create a new table with no device paths. As a consequence, the multipath -ll command returned output indicating that no paths to the device were available, with confusing "failed faulty running" rows for the missing paths. Multipath devices now reload tables with no device paths correctly. BZ# 689504 Device paths could fail even if they were unavailable only temporarily. This happened because the RDAC (Redundant Disk Array Controller) checker function did not recheck the status of hosts if it had received a temporary error code. The function now rechecks the path after receiving such error codes, and these path failures are transient as expected. BZ# 697386 A bug fix introduced a race condition between the main thread and the thread running the checkerloop routine because the checkerloop thread was created with the deferred cancellation type. The checkerloop thread continued running and attempted to access data that had already been deallocated by the main thread. This caused the multipathd daemon to shut down with a segmentation fault. Now the checkerloop thread checks whether a shutdown is in progress and the daemon shuts down gracefully. BZ# 700169 The Multipath daemon failed to include some ghost paths when counting the number of active paths; however, when the ghost paths failed, they were subtracted from the number of active paths. This caused multipathd to fail I/O requests even though some paths were still available. The Multipath daemon now counts ghost paths correctly and no longer fails I/O requests while there are still active paths available. BZ# 705854 If the user set dev_loss_tmo to a value greater than 600 in multipath.conf without setting the fast_io_fail_tmo value, the multipathd daemon did not notify the user that fast_io_fail_tmo was not set. Multipath now issues a warning that fast_io_fail_tmo is not set under such circumstances. BZ# 706555 On shared-storage multipath setups that set failback to manual, multipath could keep alternating between the failover pathgroup and the primary pathgroup indefinitely. This happened because multipath was incorrectly failing back to the primary pathgroup whenever a path priority changed. With this update, multipath no longer fails back to the primary pathgroup when a path's priority changes under such circumstances. BZ# 707560 If the multipath device was deleted while a path was being checked, multipathd did not abort the path check and terminated unexpectedly when trying to access the multipath device information. The Multipath daemon now aborts any path checks when the multipath device is removed and the problem no longer occurs.
BZ# 714821 The Multipath daemon was removing a multipath device twice. This could cause multipathd to access memory already used for another purpose, which caused the multipathd daemon to terminate unexpectedly. The multipathd daemon now removes the device only once and the problem no longer occurs. BZ# 719571 The kpartx utility built partition devices for invalid GUID partition tables (GPT) because it did not validate the size of GUID partitions. The kpartx utility now checks the partition size and does not build devices for invalid GPTs. BZ# 723168 Multipath previously returned an unclear error message when it failed to find the rport_id. The returned message and its severity have been adjusted. BZ# 725541 Several upstream commits have been included in the device-mapper-multipath package, providing a number of bug fixes and enhancements over the previous version. BZ# 738298 Anaconda failed to recognize an existing file system on a zSeries Linux fibre-channel adapter (zFCP) LUN and marked it as 'Unknown' when reinstalling the system. This happened due to an incorrect setting of the DM_UDEV_DISABLE_DISK_RULES_FLAG property. A file system on a multipath zFCP LUN is now correctly recognized during installation. BZ# 747604 The asynchronous TUR path checker caused multipathd to terminate unexpectedly due to memory corruption. This happened if multipathd attempted to delete a path while the asynchronous TUR checker was running on the path. The asynchronous TUR checker code has been removed, and multipathd no longer crashes on path removal. Enhancements BZ# 636009 Multipath now supports up to 8000 device paths. BZ# 683616 To provide support for Asymmetric Logical Unit Access (ALUA), the RDAC checker has been modified to work better with devices in IOSHIP mode. The checker now sets the Task Aborted Status (TAS) bit to 1 if the TAS bit is set to 0 and is changeable during LUN (Logical Unit Number) discovery. The checker now also reports PATH_UP for both path groups in RDAC storage in IOSHIP mode. BZ# 694602 Running multipath on IBM BladeCenter S-series servers with a RAIDed Shared Storage Module (RSSM) previously required manual multipath configuration to enable RSSM. Multipath now configures the server automatically. BZ# 699577 The text in the defaults, multipaths, and devices sections of the multipath.conf man page has been improved for clarity. BZ# 713754 The rr_min_io_rq option has been added to the defaults, devices, and multipaths sections of the multipath.conf file. This option defines the number of I/O requests to route to a path before switching to the next path in the current path group. Note that the rr_min_io option is no longer used. (See the configuration sketch after this advisory.) BZ# 710478 The UID, GID, and mode owner settings defined in /etc/multipath.conf for a multipath device are now ignored; these access permissions are set with udev rules instead. Users are advised to upgrade to these updated device-mapper-multipath packages, which fix these bugs and add these enhancements. 4.44.2. RHBA-2012:0502 - device-mapper-multipath bug fix update Updated device-mapper-multipath packages that fix one bug are now available for Red Hat Enterprise Linux 6. The device-mapper-multipath packages provide tools to manage multipath devices using the device-mapper multipath kernel module. Bug Fix BZ# 802433 Device-Mapper Multipath uses certain regular expressions in the built-in device configurations to determine a multipath device so that the correct configuration can be applied to the device.
Previously, some regular expressions for the device vendor and product IDs were too broad. As a consequence, some devices could be matched with incorrect device configurations. With this update, the product and vendor regular expressions have been made stricter so that all multipath devices can now be properly configured. All users of device-mapper-multipath are advised to upgrade to these updated packages, which fix this bug.
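The enhancements above reference several multipath.conf settings: the fast_io_fail_tmo and dev_loss_tmo pair from BZ# 705854 and the rr_min_io_rq option from BZ# 713754. The following is a minimal sketch of a defaults section that sets them together; the values are illustrative assumptions, not recommendations taken from these errata.

# Append an illustrative defaults section to /etc/multipath.conf.
cat >> /etc/multipath.conf <<'EOF'
defaults {
    # Fail I/O on a lost path quickly instead of waiting for dev_loss_tmo.
    fast_io_fail_tmo  5
    # Keep the remote port for 600 seconds before removing it; setting this
    # above 600 without fast_io_fail_tmo now triggers a warning (BZ# 705854).
    dev_loss_tmo      600
    # Number of I/O requests routed to a path before switching to the next
    # path in the current path group (replaces rr_min_io).
    rr_min_io_rq      1
}
EOF

# Re-read the configuration and display the merged result, including the
# built-in device entries with the vendor/product regular expressions
# discussed in BZ# 802433.
multipathd -k"reconfigure"
multipathd -k"show config"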
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/6.2_technical_notes/device-mapper-multipath
Chapter 7. Config [imageregistry.operator.openshift.io/v1]
Chapter 7. Config [imageregistry.operator.openshift.io/v1] Description Config is the configuration object for a registry instance managed by the registry operator Compatibility level 1: Stable within a major release for a minimum of 12 months or 3 minor releases (whichever is longer). Type object Required metadata spec 7.1. Specification Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ObjectMeta Standard object's metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata spec object ImageRegistrySpec defines the specs for the running registry. status object ImageRegistryStatus reports image registry operational status. 7.1.1. .spec Description ImageRegistrySpec defines the specs for the running registry. Type object Required replicas Property Type Description affinity object affinity is a group of node affinity scheduling rules for the image registry pod(s). defaultRoute boolean defaultRoute indicates whether an external facing route for the registry should be created using the default generated hostname. disableRedirect boolean disableRedirect controls whether to route all data through the Registry, rather than redirecting to the backend. httpSecret string httpSecret is the value needed by the registry to secure uploads, generated by default. logLevel string logLevel is an intent based logging for an overall component. It does not give fine grained control, but it is a simple way to manage coarse grained logging choices that operators have to interpret for their operands. Valid values are: "Normal", "Debug", "Trace", "TraceAll". Defaults to "Normal". logging integer logging is deprecated, use logLevel instead. managementState string managementState indicates whether and how the operator should manage the component nodeSelector object (string) nodeSelector defines the node selection constraints for the registry pod. observedConfig `` observedConfig holds a sparse config that controller has observed from the cluster state. It exists in spec because it is an input to the level for the operator operatorLogLevel string operatorLogLevel is an intent based logging for the operator itself. It does not give fine grained control, but it is a simple way to manage coarse grained logging choices that operators have to interpret for themselves. Valid values are: "Normal", "Debug", "Trace", "TraceAll". Defaults to "Normal". proxy object proxy defines the proxy to be used when calling master api, upstream registries, etc. readOnly boolean readOnly indicates whether the registry instance should reject attempts to push new images or delete existing ones. replicas integer replicas determines the number of registry instances to run. requests object requests controls how many parallel requests a given registry instance will handle before queuing additional requests. resources object resources defines the resource requests+limits for the registry pod. 
rolloutStrategy string rolloutStrategy defines rollout strategy for the image registry deployment. routes array routes defines additional external facing routes which should be created for the registry. routes[] object ImageRegistryConfigRoute holds information on external route access to image registry. storage object storage details for configuring registry storage, e.g. S3 bucket coordinates. tolerations array tolerations defines the tolerations for the registry pod. tolerations[] object The pod this Toleration is attached to tolerates any taint that matches the triple <key,value,effect> using the matching operator <operator>. topologySpreadConstraints array topologySpreadConstraints specify how to spread matching pods among the given topology. topologySpreadConstraints[] object TopologySpreadConstraint specifies how to spread matching pods among the given topology. unsupportedConfigOverrides `` unsupportedConfigOverrides overrides the final configuration that was computed by the operator. Red Hat does not support the use of this field. Misuse of this field could lead to unexpected behavior or conflict with other configuration options. Seek guidance from the Red Hat support before using this field. Use of this property blocks cluster upgrades, it must be removed before upgrading your cluster. 7.1.2. .spec.affinity Description affinity is a group of node affinity scheduling rules for the image registry pod(s). Type object Property Type Description nodeAffinity object Describes node affinity scheduling rules for the pod. podAffinity object Describes pod affinity scheduling rules (e.g. co-locate this pod in the same node, zone, etc. as some other pod(s)). podAntiAffinity object Describes pod anti-affinity scheduling rules (e.g. avoid putting this pod in the same node, zone, etc. as some other pod(s)). 7.1.3. .spec.affinity.nodeAffinity Description Describes node affinity scheduling rules for the pod. Type object Property Type Description preferredDuringSchedulingIgnoredDuringExecution array The scheduler will prefer to schedule pods to nodes that satisfy the affinity expressions specified by this field, but it may choose a node that violates one or more of the expressions. The node that is most preferred is the one with the greatest sum of weights, i.e. for each node that meets all of the scheduling requirements (resource request, requiredDuringScheduling affinity expressions, etc.), compute a sum by iterating through the elements of this field and adding "weight" to the sum if the node matches the corresponding matchExpressions; the node(s) with the highest sum are the most preferred. preferredDuringSchedulingIgnoredDuringExecution[] object An empty preferred scheduling term matches all objects with implicit weight 0 (i.e. it's a no-op). A null preferred scheduling term matches no objects (i.e. is also a no-op). requiredDuringSchedulingIgnoredDuringExecution object If the affinity requirements specified by this field are not met at scheduling time, the pod will not be scheduled onto the node. If the affinity requirements specified by this field cease to be met at some point during pod execution (e.g. due to an update), the system may or may not try to eventually evict the pod from its node. 7.1.4. .spec.affinity.nodeAffinity.preferredDuringSchedulingIgnoredDuringExecution Description The scheduler will prefer to schedule pods to nodes that satisfy the affinity expressions specified by this field, but it may choose a node that violates one or more of the expressions. 
The node that is most preferred is the one with the greatest sum of weights, i.e. for each node that meets all of the scheduling requirements (resource request, requiredDuringScheduling affinity expressions, etc.), compute a sum by iterating through the elements of this field and adding "weight" to the sum if the node matches the corresponding matchExpressions; the node(s) with the highest sum are the most preferred. Type array 7.1.5. .spec.affinity.nodeAffinity.preferredDuringSchedulingIgnoredDuringExecution[] Description An empty preferred scheduling term matches all objects with implicit weight 0 (i.e. it's a no-op). A null preferred scheduling term matches no objects (i.e. is also a no-op). Type object Required preference weight Property Type Description preference object A node selector term, associated with the corresponding weight. weight integer Weight associated with matching the corresponding nodeSelectorTerm, in the range 1-100. 7.1.6. .spec.affinity.nodeAffinity.preferredDuringSchedulingIgnoredDuringExecution[].preference Description A node selector term, associated with the corresponding weight. Type object Property Type Description matchExpressions array A list of node selector requirements by node's labels. matchExpressions[] object A node selector requirement is a selector that contains values, a key, and an operator that relates the key and values. matchFields array A list of node selector requirements by node's fields. matchFields[] object A node selector requirement is a selector that contains values, a key, and an operator that relates the key and values. 7.1.7. .spec.affinity.nodeAffinity.preferredDuringSchedulingIgnoredDuringExecution[].preference.matchExpressions Description A list of node selector requirements by node's labels. Type array 7.1.8. .spec.affinity.nodeAffinity.preferredDuringSchedulingIgnoredDuringExecution[].preference.matchExpressions[] Description A node selector requirement is a selector that contains values, a key, and an operator that relates the key and values. Type object Required key operator Property Type Description key string The label key that the selector applies to. operator string Represents a key's relationship to a set of values. Valid operators are In, NotIn, Exists, DoesNotExist. Gt, and Lt. values array (string) An array of string values. If the operator is In or NotIn, the values array must be non-empty. If the operator is Exists or DoesNotExist, the values array must be empty. If the operator is Gt or Lt, the values array must have a single element, which will be interpreted as an integer. This array is replaced during a strategic merge patch. 7.1.9. .spec.affinity.nodeAffinity.preferredDuringSchedulingIgnoredDuringExecution[].preference.matchFields Description A list of node selector requirements by node's fields. Type array 7.1.10. .spec.affinity.nodeAffinity.preferredDuringSchedulingIgnoredDuringExecution[].preference.matchFields[] Description A node selector requirement is a selector that contains values, a key, and an operator that relates the key and values. Type object Required key operator Property Type Description key string The label key that the selector applies to. operator string Represents a key's relationship to a set of values. Valid operators are In, NotIn, Exists, DoesNotExist. Gt, and Lt. values array (string) An array of string values. If the operator is In or NotIn, the values array must be non-empty. If the operator is Exists or DoesNotExist, the values array must be empty. 
If the operator is Gt or Lt, the values array must have a single element, which will be interpreted as an integer. This array is replaced during a strategic merge patch. 7.1.11. .spec.affinity.nodeAffinity.requiredDuringSchedulingIgnoredDuringExecution Description If the affinity requirements specified by this field are not met at scheduling time, the pod will not be scheduled onto the node. If the affinity requirements specified by this field cease to be met at some point during pod execution (e.g. due to an update), the system may or may not try to eventually evict the pod from its node. Type object Required nodeSelectorTerms Property Type Description nodeSelectorTerms array Required. A list of node selector terms. The terms are ORed. nodeSelectorTerms[] object A null or empty node selector term matches no objects. The requirements of them are ANDed. The TopologySelectorTerm type implements a subset of the NodeSelectorTerm. 7.1.12. .spec.affinity.nodeAffinity.requiredDuringSchedulingIgnoredDuringExecution.nodeSelectorTerms Description Required. A list of node selector terms. The terms are ORed. Type array 7.1.13. .spec.affinity.nodeAffinity.requiredDuringSchedulingIgnoredDuringExecution.nodeSelectorTerms[] Description A null or empty node selector term matches no objects. The requirements of them are ANDed. The TopologySelectorTerm type implements a subset of the NodeSelectorTerm. Type object Property Type Description matchExpressions array A list of node selector requirements by node's labels. matchExpressions[] object A node selector requirement is a selector that contains values, a key, and an operator that relates the key and values. matchFields array A list of node selector requirements by node's fields. matchFields[] object A node selector requirement is a selector that contains values, a key, and an operator that relates the key and values. 7.1.14. .spec.affinity.nodeAffinity.requiredDuringSchedulingIgnoredDuringExecution.nodeSelectorTerms[].matchExpressions Description A list of node selector requirements by node's labels. Type array 7.1.15. .spec.affinity.nodeAffinity.requiredDuringSchedulingIgnoredDuringExecution.nodeSelectorTerms[].matchExpressions[] Description A node selector requirement is a selector that contains values, a key, and an operator that relates the key and values. Type object Required key operator Property Type Description key string The label key that the selector applies to. operator string Represents a key's relationship to a set of values. Valid operators are In, NotIn, Exists, DoesNotExist. Gt, and Lt. values array (string) An array of string values. If the operator is In or NotIn, the values array must be non-empty. If the operator is Exists or DoesNotExist, the values array must be empty. If the operator is Gt or Lt, the values array must have a single element, which will be interpreted as an integer. This array is replaced during a strategic merge patch. 7.1.16. .spec.affinity.nodeAffinity.requiredDuringSchedulingIgnoredDuringExecution.nodeSelectorTerms[].matchFields Description A list of node selector requirements by node's fields. Type array 7.1.17. .spec.affinity.nodeAffinity.requiredDuringSchedulingIgnoredDuringExecution.nodeSelectorTerms[].matchFields[] Description A node selector requirement is a selector that contains values, a key, and an operator that relates the key and values. Type object Required key operator Property Type Description key string The label key that the selector applies to. 
operator string Represents a key's relationship to a set of values. Valid operators are In, NotIn, Exists, DoesNotExist. Gt, and Lt. values array (string) An array of string values. If the operator is In or NotIn, the values array must be non-empty. If the operator is Exists or DoesNotExist, the values array must be empty. If the operator is Gt or Lt, the values array must have a single element, which will be interpreted as an integer. This array is replaced during a strategic merge patch. 7.1.18. .spec.affinity.podAffinity Description Describes pod affinity scheduling rules (e.g. co-locate this pod in the same node, zone, etc. as some other pod(s)). Type object Property Type Description preferredDuringSchedulingIgnoredDuringExecution array The scheduler will prefer to schedule pods to nodes that satisfy the affinity expressions specified by this field, but it may choose a node that violates one or more of the expressions. The node that is most preferred is the one with the greatest sum of weights, i.e. for each node that meets all of the scheduling requirements (resource request, requiredDuringScheduling affinity expressions, etc.), compute a sum by iterating through the elements of this field and adding "weight" to the sum if the node has pods which matches the corresponding podAffinityTerm; the node(s) with the highest sum are the most preferred. preferredDuringSchedulingIgnoredDuringExecution[] object The weights of all of the matched WeightedPodAffinityTerm fields are added per-node to find the most preferred node(s) requiredDuringSchedulingIgnoredDuringExecution array If the affinity requirements specified by this field are not met at scheduling time, the pod will not be scheduled onto the node. If the affinity requirements specified by this field cease to be met at some point during pod execution (e.g. due to a pod label update), the system may or may not try to eventually evict the pod from its node. When there are multiple elements, the lists of nodes corresponding to each podAffinityTerm are intersected, i.e. all terms must be satisfied. requiredDuringSchedulingIgnoredDuringExecution[] object Defines a set of pods (namely those matching the labelSelector relative to the given namespace(s)) that this pod should be co-located (affinity) or not co-located (anti-affinity) with, where co-located is defined as running on a node whose value of the label with key <topologyKey> matches that of any node on which a pod of the set of pods is running 7.1.19. .spec.affinity.podAffinity.preferredDuringSchedulingIgnoredDuringExecution Description The scheduler will prefer to schedule pods to nodes that satisfy the affinity expressions specified by this field, but it may choose a node that violates one or more of the expressions. The node that is most preferred is the one with the greatest sum of weights, i.e. for each node that meets all of the scheduling requirements (resource request, requiredDuringScheduling affinity expressions, etc.), compute a sum by iterating through the elements of this field and adding "weight" to the sum if the node has pods which matches the corresponding podAffinityTerm; the node(s) with the highest sum are the most preferred. Type array 7.1.20. .spec.affinity.podAffinity.preferredDuringSchedulingIgnoredDuringExecution[] Description The weights of all of the matched WeightedPodAffinityTerm fields are added per-node to find the most preferred node(s) Type object Required podAffinityTerm weight Property Type Description podAffinityTerm object Required. 
A pod affinity term, associated with the corresponding weight. weight integer weight associated with matching the corresponding podAffinityTerm, in the range 1-100. 7.1.21. .spec.affinity.podAffinity.preferredDuringSchedulingIgnoredDuringExecution[].podAffinityTerm Description Required. A pod affinity term, associated with the corresponding weight. Type object Required topologyKey Property Type Description labelSelector object A label query over a set of resources, in this case pods. If it's null, this PodAffinityTerm matches with no Pods. matchLabelKeys array (string) MatchLabelKeys is a set of pod label keys to select which pods will be taken into consideration. The keys are used to lookup values from the incoming pod labels, those key-value labels are merged with labelSelector as key in (value) to select the group of existing pods which pods will be taken into consideration for the incoming pod's pod (anti) affinity. Keys that don't exist in the incoming pod labels will be ignored. The default value is empty. The same key is forbidden to exist in both matchLabelKeys and labelSelector. Also, matchLabelKeys cannot be set when labelSelector isn't set. This is a beta field and requires enabling MatchLabelKeysInPodAffinity feature gate (enabled by default). mismatchLabelKeys array (string) MismatchLabelKeys is a set of pod label keys to select which pods will be taken into consideration. The keys are used to lookup values from the incoming pod labels, those key-value labels are merged with labelSelector as key notin (value) to select the group of existing pods which pods will be taken into consideration for the incoming pod's pod (anti) affinity. Keys that don't exist in the incoming pod labels will be ignored. The default value is empty. The same key is forbidden to exist in both mismatchLabelKeys and labelSelector. Also, mismatchLabelKeys cannot be set when labelSelector isn't set. This is a beta field and requires enabling MatchLabelKeysInPodAffinity feature gate (enabled by default). namespaceSelector object A label query over the set of namespaces that the term applies to. The term is applied to the union of the namespaces selected by this field and the ones listed in the namespaces field. null selector and null or empty namespaces list means "this pod's namespace". An empty selector ({}) matches all namespaces. namespaces array (string) namespaces specifies a static list of namespace names that the term applies to. The term is applied to the union of the namespaces listed in this field and the ones selected by namespaceSelector. null or empty namespaces list and null namespaceSelector means "this pod's namespace". topologyKey string This pod should be co-located (affinity) or not co-located (anti-affinity) with the pods matching the labelSelector in the specified namespaces, where co-located is defined as running on a node whose value of the label with key topologyKey matches that of any node on which any of the selected pods is running. Empty topologyKey is not allowed. 7.1.22. .spec.affinity.podAffinity.preferredDuringSchedulingIgnoredDuringExecution[].podAffinityTerm.labelSelector Description A label query over a set of resources, in this case pods. If it's null, this PodAffinityTerm matches with no Pods. Type object Property Type Description matchExpressions array matchExpressions is a list of label selector requirements. The requirements are ANDed. 
matchExpressions[] object A label selector requirement is a selector that contains values, a key, and an operator that relates the key and values. matchLabels object (string) matchLabels is a map of {key,value} pairs. A single {key,value} in the matchLabels map is equivalent to an element of matchExpressions, whose key field is "key", the operator is "In", and the values array contains only "value". The requirements are ANDed. 7.1.23. .spec.affinity.podAffinity.preferredDuringSchedulingIgnoredDuringExecution[].podAffinityTerm.labelSelector.matchExpressions Description matchExpressions is a list of label selector requirements. The requirements are ANDed. Type array 7.1.24. .spec.affinity.podAffinity.preferredDuringSchedulingIgnoredDuringExecution[].podAffinityTerm.labelSelector.matchExpressions[] Description A label selector requirement is a selector that contains values, a key, and an operator that relates the key and values. Type object Required key operator Property Type Description key string key is the label key that the selector applies to. operator string operator represents a key's relationship to a set of values. Valid operators are In, NotIn, Exists and DoesNotExist. values array (string) values is an array of string values. If the operator is In or NotIn, the values array must be non-empty. If the operator is Exists or DoesNotExist, the values array must be empty. This array is replaced during a strategic merge patch. 7.1.25. .spec.affinity.podAffinity.preferredDuringSchedulingIgnoredDuringExecution[].podAffinityTerm.namespaceSelector Description A label query over the set of namespaces that the term applies to. The term is applied to the union of the namespaces selected by this field and the ones listed in the namespaces field. null selector and null or empty namespaces list means "this pod's namespace". An empty selector ({}) matches all namespaces. Type object Property Type Description matchExpressions array matchExpressions is a list of label selector requirements. The requirements are ANDed. matchExpressions[] object A label selector requirement is a selector that contains values, a key, and an operator that relates the key and values. matchLabels object (string) matchLabels is a map of {key,value} pairs. A single {key,value} in the matchLabels map is equivalent to an element of matchExpressions, whose key field is "key", the operator is "In", and the values array contains only "value". The requirements are ANDed. 7.1.26. .spec.affinity.podAffinity.preferredDuringSchedulingIgnoredDuringExecution[].podAffinityTerm.namespaceSelector.matchExpressions Description matchExpressions is a list of label selector requirements. The requirements are ANDed. Type array 7.1.27. .spec.affinity.podAffinity.preferredDuringSchedulingIgnoredDuringExecution[].podAffinityTerm.namespaceSelector.matchExpressions[] Description A label selector requirement is a selector that contains values, a key, and an operator that relates the key and values. Type object Required key operator Property Type Description key string key is the label key that the selector applies to. operator string operator represents a key's relationship to a set of values. Valid operators are In, NotIn, Exists and DoesNotExist. values array (string) values is an array of string values. If the operator is In or NotIn, the values array must be non-empty. If the operator is Exists or DoesNotExist, the values array must be empty. This array is replaced during a strategic merge patch. 7.1.28. 
.spec.affinity.podAffinity.requiredDuringSchedulingIgnoredDuringExecution Description If the affinity requirements specified by this field are not met at scheduling time, the pod will not be scheduled onto the node. If the affinity requirements specified by this field cease to be met at some point during pod execution (e.g. due to a pod label update), the system may or may not try to eventually evict the pod from its node. When there are multiple elements, the lists of nodes corresponding to each podAffinityTerm are intersected, i.e. all terms must be satisfied. Type array 7.1.29. .spec.affinity.podAffinity.requiredDuringSchedulingIgnoredDuringExecution[] Description Defines a set of pods (namely those matching the labelSelector relative to the given namespace(s)) that this pod should be co-located (affinity) or not co-located (anti-affinity) with, where co-located is defined as running on a node whose value of the label with key <topologyKey> matches that of any node on which a pod of the set of pods is running Type object Required topologyKey Property Type Description labelSelector object A label query over a set of resources, in this case pods. If it's null, this PodAffinityTerm matches with no Pods. matchLabelKeys array (string) MatchLabelKeys is a set of pod label keys to select which pods will be taken into consideration. The keys are used to lookup values from the incoming pod labels, those key-value labels are merged with labelSelector as key in (value) to select the group of existing pods which pods will be taken into consideration for the incoming pod's pod (anti) affinity. Keys that don't exist in the incoming pod labels will be ignored. The default value is empty. The same key is forbidden to exist in both matchLabelKeys and labelSelector. Also, matchLabelKeys cannot be set when labelSelector isn't set. This is a beta field and requires enabling MatchLabelKeysInPodAffinity feature gate (enabled by default). mismatchLabelKeys array (string) MismatchLabelKeys is a set of pod label keys to select which pods will be taken into consideration. The keys are used to lookup values from the incoming pod labels, those key-value labels are merged with labelSelector as key notin (value) to select the group of existing pods which pods will be taken into consideration for the incoming pod's pod (anti) affinity. Keys that don't exist in the incoming pod labels will be ignored. The default value is empty. The same key is forbidden to exist in both mismatchLabelKeys and labelSelector. Also, mismatchLabelKeys cannot be set when labelSelector isn't set. This is a beta field and requires enabling MatchLabelKeysInPodAffinity feature gate (enabled by default). namespaceSelector object A label query over the set of namespaces that the term applies to. The term is applied to the union of the namespaces selected by this field and the ones listed in the namespaces field. null selector and null or empty namespaces list means "this pod's namespace". An empty selector ({}) matches all namespaces. namespaces array (string) namespaces specifies a static list of namespace names that the term applies to. The term is applied to the union of the namespaces listed in this field and the ones selected by namespaceSelector. null or empty namespaces list and null namespaceSelector means "this pod's namespace". 
topologyKey string This pod should be co-located (affinity) or not co-located (anti-affinity) with the pods matching the labelSelector in the specified namespaces, where co-located is defined as running on a node whose value of the label with key topologyKey matches that of any node on which any of the selected pods is running. Empty topologyKey is not allowed. 7.1.30. .spec.affinity.podAffinity.requiredDuringSchedulingIgnoredDuringExecution[].labelSelector Description A label query over a set of resources, in this case pods. If it's null, this PodAffinityTerm matches with no Pods. Type object Property Type Description matchExpressions array matchExpressions is a list of label selector requirements. The requirements are ANDed. matchExpressions[] object A label selector requirement is a selector that contains values, a key, and an operator that relates the key and values. matchLabels object (string) matchLabels is a map of {key,value} pairs. A single {key,value} in the matchLabels map is equivalent to an element of matchExpressions, whose key field is "key", the operator is "In", and the values array contains only "value". The requirements are ANDed. 7.1.31. .spec.affinity.podAffinity.requiredDuringSchedulingIgnoredDuringExecution[].labelSelector.matchExpressions Description matchExpressions is a list of label selector requirements. The requirements are ANDed. Type array 7.1.32. .spec.affinity.podAffinity.requiredDuringSchedulingIgnoredDuringExecution[].labelSelector.matchExpressions[] Description A label selector requirement is a selector that contains values, a key, and an operator that relates the key and values. Type object Required key operator Property Type Description key string key is the label key that the selector applies to. operator string operator represents a key's relationship to a set of values. Valid operators are In, NotIn, Exists and DoesNotExist. values array (string) values is an array of string values. If the operator is In or NotIn, the values array must be non-empty. If the operator is Exists or DoesNotExist, the values array must be empty. This array is replaced during a strategic merge patch. 7.1.33. .spec.affinity.podAffinity.requiredDuringSchedulingIgnoredDuringExecution[].namespaceSelector Description A label query over the set of namespaces that the term applies to. The term is applied to the union of the namespaces selected by this field and the ones listed in the namespaces field. null selector and null or empty namespaces list means "this pod's namespace". An empty selector ({}) matches all namespaces. Type object Property Type Description matchExpressions array matchExpressions is a list of label selector requirements. The requirements are ANDed. matchExpressions[] object A label selector requirement is a selector that contains values, a key, and an operator that relates the key and values. matchLabels object (string) matchLabels is a map of {key,value} pairs. A single {key,value} in the matchLabels map is equivalent to an element of matchExpressions, whose key field is "key", the operator is "In", and the values array contains only "value". The requirements are ANDed. 7.1.34. .spec.affinity.podAffinity.requiredDuringSchedulingIgnoredDuringExecution[].namespaceSelector.matchExpressions Description matchExpressions is a list of label selector requirements. The requirements are ANDed. Type array 7.1.35. 
.spec.affinity.podAffinity.requiredDuringSchedulingIgnoredDuringExecution[].namespaceSelector.matchExpressions[] Description A label selector requirement is a selector that contains values, a key, and an operator that relates the key and values. Type object Required key operator Property Type Description key string key is the label key that the selector applies to. operator string operator represents a key's relationship to a set of values. Valid operators are In, NotIn, Exists and DoesNotExist. values array (string) values is an array of string values. If the operator is In or NotIn, the values array must be non-empty. If the operator is Exists or DoesNotExist, the values array must be empty. This array is replaced during a strategic merge patch. 7.1.36. .spec.affinity.podAntiAffinity Description Describes pod anti-affinity scheduling rules (e.g. avoid putting this pod in the same node, zone, etc. as some other pod(s)). Type object Property Type Description preferredDuringSchedulingIgnoredDuringExecution array The scheduler will prefer to schedule pods to nodes that satisfy the anti-affinity expressions specified by this field, but it may choose a node that violates one or more of the expressions. The node that is most preferred is the one with the greatest sum of weights, i.e. for each node that meets all of the scheduling requirements (resource request, requiredDuringScheduling anti-affinity expressions, etc.), compute a sum by iterating through the elements of this field and adding "weight" to the sum if the node has pods which matches the corresponding podAffinityTerm; the node(s) with the highest sum are the most preferred. preferredDuringSchedulingIgnoredDuringExecution[] object The weights of all of the matched WeightedPodAffinityTerm fields are added per-node to find the most preferred node(s) requiredDuringSchedulingIgnoredDuringExecution array If the anti-affinity requirements specified by this field are not met at scheduling time, the pod will not be scheduled onto the node. If the anti-affinity requirements specified by this field cease to be met at some point during pod execution (e.g. due to a pod label update), the system may or may not try to eventually evict the pod from its node. When there are multiple elements, the lists of nodes corresponding to each podAffinityTerm are intersected, i.e. all terms must be satisfied. requiredDuringSchedulingIgnoredDuringExecution[] object Defines a set of pods (namely those matching the labelSelector relative to the given namespace(s)) that this pod should be co-located (affinity) or not co-located (anti-affinity) with, where co-located is defined as running on a node whose value of the label with key <topologyKey> matches that of any node on which a pod of the set of pods is running 7.1.37. .spec.affinity.podAntiAffinity.preferredDuringSchedulingIgnoredDuringExecution Description The scheduler will prefer to schedule pods to nodes that satisfy the anti-affinity expressions specified by this field, but it may choose a node that violates one or more of the expressions. The node that is most preferred is the one with the greatest sum of weights, i.e. for each node that meets all of the scheduling requirements (resource request, requiredDuringScheduling anti-affinity expressions, etc.), compute a sum by iterating through the elements of this field and adding "weight" to the sum if the node has pods which matches the corresponding podAffinityTerm; the node(s) with the highest sum are the most preferred. Type array 7.1.38. 
.spec.affinity.podAntiAffinity.preferredDuringSchedulingIgnoredDuringExecution[] Description The weights of all of the matched WeightedPodAffinityTerm fields are added per-node to find the most preferred node(s) Type object Required podAffinityTerm weight Property Type Description podAffinityTerm object Required. A pod affinity term, associated with the corresponding weight. weight integer weight associated with matching the corresponding podAffinityTerm, in the range 1-100. 7.1.39. .spec.affinity.podAntiAffinity.preferredDuringSchedulingIgnoredDuringExecution[].podAffinityTerm Description Required. A pod affinity term, associated with the corresponding weight. Type object Required topologyKey Property Type Description labelSelector object A label query over a set of resources, in this case pods. If it's null, this PodAffinityTerm matches with no Pods. matchLabelKeys array (string) MatchLabelKeys is a set of pod label keys to select which pods will be taken into consideration. The keys are used to lookup values from the incoming pod labels, those key-value labels are merged with labelSelector as key in (value) to select the group of existing pods which pods will be taken into consideration for the incoming pod's pod (anti) affinity. Keys that don't exist in the incoming pod labels will be ignored. The default value is empty. The same key is forbidden to exist in both matchLabelKeys and labelSelector. Also, matchLabelKeys cannot be set when labelSelector isn't set. This is a beta field and requires enabling MatchLabelKeysInPodAffinity feature gate (enabled by default). mismatchLabelKeys array (string) MismatchLabelKeys is a set of pod label keys to select which pods will be taken into consideration. The keys are used to lookup values from the incoming pod labels, those key-value labels are merged with labelSelector as key notin (value) to select the group of existing pods which pods will be taken into consideration for the incoming pod's pod (anti) affinity. Keys that don't exist in the incoming pod labels will be ignored. The default value is empty. The same key is forbidden to exist in both mismatchLabelKeys and labelSelector. Also, mismatchLabelKeys cannot be set when labelSelector isn't set. This is a beta field and requires enabling MatchLabelKeysInPodAffinity feature gate (enabled by default). namespaceSelector object A label query over the set of namespaces that the term applies to. The term is applied to the union of the namespaces selected by this field and the ones listed in the namespaces field. null selector and null or empty namespaces list means "this pod's namespace". An empty selector ({}) matches all namespaces. namespaces array (string) namespaces specifies a static list of namespace names that the term applies to. The term is applied to the union of the namespaces listed in this field and the ones selected by namespaceSelector. null or empty namespaces list and null namespaceSelector means "this pod's namespace". topologyKey string This pod should be co-located (affinity) or not co-located (anti-affinity) with the pods matching the labelSelector in the specified namespaces, where co-located is defined as running on a node whose value of the label with key topologyKey matches that of any node on which any of the selected pods is running. Empty topologyKey is not allowed. 7.1.40. .spec.affinity.podAntiAffinity.preferredDuringSchedulingIgnoredDuringExecution[].podAffinityTerm.labelSelector Description A label query over a set of resources, in this case pods. 
If it's null, this PodAffinityTerm matches with no Pods. Type object Property Type Description matchExpressions array matchExpressions is a list of label selector requirements. The requirements are ANDed. matchExpressions[] object A label selector requirement is a selector that contains values, a key, and an operator that relates the key and values. matchLabels object (string) matchLabels is a map of {key,value} pairs. A single {key,value} in the matchLabels map is equivalent to an element of matchExpressions, whose key field is "key", the operator is "In", and the values array contains only "value". The requirements are ANDed. 7.1.41. .spec.affinity.podAntiAffinity.preferredDuringSchedulingIgnoredDuringExecution[].podAffinityTerm.labelSelector.matchExpressions Description matchExpressions is a list of label selector requirements. The requirements are ANDed. Type array 7.1.42. .spec.affinity.podAntiAffinity.preferredDuringSchedulingIgnoredDuringExecution[].podAffinityTerm.labelSelector.matchExpressions[] Description A label selector requirement is a selector that contains values, a key, and an operator that relates the key and values. Type object Required key operator Property Type Description key string key is the label key that the selector applies to. operator string operator represents a key's relationship to a set of values. Valid operators are In, NotIn, Exists and DoesNotExist. values array (string) values is an array of string values. If the operator is In or NotIn, the values array must be non-empty. If the operator is Exists or DoesNotExist, the values array must be empty. This array is replaced during a strategic merge patch. 7.1.43. .spec.affinity.podAntiAffinity.preferredDuringSchedulingIgnoredDuringExecution[].podAffinityTerm.namespaceSelector Description A label query over the set of namespaces that the term applies to. The term is applied to the union of the namespaces selected by this field and the ones listed in the namespaces field. null selector and null or empty namespaces list means "this pod's namespace". An empty selector ({}) matches all namespaces. Type object Property Type Description matchExpressions array matchExpressions is a list of label selector requirements. The requirements are ANDed. matchExpressions[] object A label selector requirement is a selector that contains values, a key, and an operator that relates the key and values. matchLabels object (string) matchLabels is a map of {key,value} pairs. A single {key,value} in the matchLabels map is equivalent to an element of matchExpressions, whose key field is "key", the operator is "In", and the values array contains only "value". The requirements are ANDed. 7.1.44. .spec.affinity.podAntiAffinity.preferredDuringSchedulingIgnoredDuringExecution[].podAffinityTerm.namespaceSelector.matchExpressions Description matchExpressions is a list of label selector requirements. The requirements are ANDed. Type array 7.1.45. .spec.affinity.podAntiAffinity.preferredDuringSchedulingIgnoredDuringExecution[].podAffinityTerm.namespaceSelector.matchExpressions[] Description A label selector requirement is a selector that contains values, a key, and an operator that relates the key and values. Type object Required key operator Property Type Description key string key is the label key that the selector applies to. operator string operator represents a key's relationship to a set of values. Valid operators are In, NotIn, Exists and DoesNotExist. values array (string) values is an array of string values. 
If the operator is In or NotIn, the values array must be non-empty. If the operator is Exists or DoesNotExist, the values array must be empty. This array is replaced during a strategic merge patch. 7.1.46. .spec.affinity.podAntiAffinity.requiredDuringSchedulingIgnoredDuringExecution Description If the anti-affinity requirements specified by this field are not met at scheduling time, the pod will not be scheduled onto the node. If the anti-affinity requirements specified by this field cease to be met at some point during pod execution (e.g. due to a pod label update), the system may or may not try to eventually evict the pod from its node. When there are multiple elements, the lists of nodes corresponding to each podAffinityTerm are intersected, i.e. all terms must be satisfied. Type array 7.1.47. .spec.affinity.podAntiAffinity.requiredDuringSchedulingIgnoredDuringExecution[] Description Defines a set of pods (namely those matching the labelSelector relative to the given namespace(s)) that this pod should be co-located (affinity) or not co-located (anti-affinity) with, where co-located is defined as running on a node whose value of the label with key <topologyKey> matches that of any node on which a pod of the set of pods is running Type object Required topologyKey Property Type Description labelSelector object A label query over a set of resources, in this case pods. If it's null, this PodAffinityTerm matches with no Pods. matchLabelKeys array (string) MatchLabelKeys is a set of pod label keys to select which pods will be taken into consideration. The keys are used to lookup values from the incoming pod labels, those key-value labels are merged with labelSelector as key in (value) to select the group of existing pods which pods will be taken into consideration for the incoming pod's pod (anti) affinity. Keys that don't exist in the incoming pod labels will be ignored. The default value is empty. The same key is forbidden to exist in both matchLabelKeys and labelSelector. Also, matchLabelKeys cannot be set when labelSelector isn't set. This is a beta field and requires enabling MatchLabelKeysInPodAffinity feature gate (enabled by default). mismatchLabelKeys array (string) MismatchLabelKeys is a set of pod label keys to select which pods will be taken into consideration. The keys are used to lookup values from the incoming pod labels, those key-value labels are merged with labelSelector as key notin (value) to select the group of existing pods which pods will be taken into consideration for the incoming pod's pod (anti) affinity. Keys that don't exist in the incoming pod labels will be ignored. The default value is empty. The same key is forbidden to exist in both mismatchLabelKeys and labelSelector. Also, mismatchLabelKeys cannot be set when labelSelector isn't set. This is a beta field and requires enabling MatchLabelKeysInPodAffinity feature gate (enabled by default). namespaceSelector object A label query over the set of namespaces that the term applies to. The term is applied to the union of the namespaces selected by this field and the ones listed in the namespaces field. null selector and null or empty namespaces list means "this pod's namespace". An empty selector ({}) matches all namespaces. namespaces array (string) namespaces specifies a static list of namespace names that the term applies to. The term is applied to the union of the namespaces listed in this field and the ones selected by namespaceSelector. 
null or empty namespaces list and null namespaceSelector means "this pod's namespace". topologyKey string This pod should be co-located (affinity) or not co-located (anti-affinity) with the pods matching the labelSelector in the specified namespaces, where co-located is defined as running on a node whose value of the label with key topologyKey matches that of any node on which any of the selected pods is running. Empty topologyKey is not allowed. 7.1.48. .spec.affinity.podAntiAffinity.requiredDuringSchedulingIgnoredDuringExecution[].labelSelector Description A label query over a set of resources, in this case pods. If it's null, this PodAffinityTerm matches with no Pods. Type object Property Type Description matchExpressions array matchExpressions is a list of label selector requirements. The requirements are ANDed. matchExpressions[] object A label selector requirement is a selector that contains values, a key, and an operator that relates the key and values. matchLabels object (string) matchLabels is a map of {key,value} pairs. A single {key,value} in the matchLabels map is equivalent to an element of matchExpressions, whose key field is "key", the operator is "In", and the values array contains only "value". The requirements are ANDed. 7.1.49. .spec.affinity.podAntiAffinity.requiredDuringSchedulingIgnoredDuringExecution[].labelSelector.matchExpressions Description matchExpressions is a list of label selector requirements. The requirements are ANDed. Type array 7.1.50. .spec.affinity.podAntiAffinity.requiredDuringSchedulingIgnoredDuringExecution[].labelSelector.matchExpressions[] Description A label selector requirement is a selector that contains values, a key, and an operator that relates the key and values. Type object Required key operator Property Type Description key string key is the label key that the selector applies to. operator string operator represents a key's relationship to a set of values. Valid operators are In, NotIn, Exists and DoesNotExist. values array (string) values is an array of string values. If the operator is In or NotIn, the values array must be non-empty. If the operator is Exists or DoesNotExist, the values array must be empty. This array is replaced during a strategic merge patch. 7.1.51. .spec.affinity.podAntiAffinity.requiredDuringSchedulingIgnoredDuringExecution[].namespaceSelector Description A label query over the set of namespaces that the term applies to. The term is applied to the union of the namespaces selected by this field and the ones listed in the namespaces field. null selector and null or empty namespaces list means "this pod's namespace". An empty selector ({}) matches all namespaces. Type object Property Type Description matchExpressions array matchExpressions is a list of label selector requirements. The requirements are ANDed. matchExpressions[] object A label selector requirement is a selector that contains values, a key, and an operator that relates the key and values. matchLabels object (string) matchLabels is a map of {key,value} pairs. A single {key,value} in the matchLabels map is equivalent to an element of matchExpressions, whose key field is "key", the operator is "In", and the values array contains only "value". The requirements are ANDed. 7.1.52. .spec.affinity.podAntiAffinity.requiredDuringSchedulingIgnoredDuringExecution[].namespaceSelector.matchExpressions Description matchExpressions is a list of label selector requirements. The requirements are ANDed. Type array 7.1.53. 
.spec.affinity.podAntiAffinity.requiredDuringSchedulingIgnoredDuringExecution[].namespaceSelector.matchExpressions[] Description A label selector requirement is a selector that contains values, a key, and an operator that relates the key and values. Type object Required key operator Property Type Description key string key is the label key that the selector applies to. operator string operator represents a key's relationship to a set of values. Valid operators are In, NotIn, Exists and DoesNotExist. values array (string) values is an array of string values. If the operator is In or NotIn, the values array must be non-empty. If the operator is Exists or DoesNotExist, the values array must be empty. This array is replaced during a strategic merge patch. 7.1.54. .spec.proxy Description proxy defines the proxy to be used when calling master api, upstream registries, etc. Type object Property Type Description http string http defines the proxy to be used by the image registry when accessing HTTP endpoints. https string https defines the proxy to be used by the image registry when accessing HTTPS endpoints. noProxy string noProxy defines a comma-separated list of host names that shouldn't go through any proxy. 7.1.55. .spec.requests Description requests controls how many parallel requests a given registry instance will handle before queuing additional requests. Type object Property Type Description read object read defines limits for image registry's reads. write object write defines limits for image registry's writes. 7.1.56. .spec.requests.read Description read defines limits for image registry's reads. Type object Property Type Description maxInQueue integer maxInQueue sets the maximum queued api requests to the registry. maxRunning integer maxRunning sets the maximum in flight api requests to the registry. maxWaitInQueue string maxWaitInQueue sets the maximum time a request can wait in the queue before being rejected. 7.1.57. .spec.requests.write Description write defines limits for image registry's writes. Type object Property Type Description maxInQueue integer maxInQueue sets the maximum queued api requests to the registry. maxRunning integer maxRunning sets the maximum in flight api requests to the registry. maxWaitInQueue string maxWaitInQueue sets the maximum time a request can wait in the queue before being rejected. 7.1.58. .spec.resources Description resources defines the resource requests+limits for the registry pod. Type object Property Type Description claims array Claims lists the names of resources, defined in spec.resourceClaims, that are used by this container. This is an alpha field and requires enabling the DynamicResourceAllocation feature gate. This field is immutable. It can only be set for containers. claims[] object ResourceClaim references one entry in PodSpec.ResourceClaims. limits integer-or-string Limits describes the maximum amount of compute resources allowed. More info: https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/ requests integer-or-string Requests describes the minimum amount of compute resources required. If Requests is omitted for a container, it defaults to Limits if that is explicitly specified, otherwise to an implementation-defined value. Requests cannot exceed Limits. More info: https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/ 7.1.59. .spec.resources.claims Description Claims lists the names of resources, defined in spec.resourceClaims, that are used by this container. 
This is an alpha field and requires enabling the DynamicResourceAllocation feature gate. This field is immutable. It can only be set for containers. Type array 7.1.60. .spec.resources.claims[] Description ResourceClaim references one entry in PodSpec.ResourceClaims. Type object Required name Property Type Description name string Name must match the name of one entry in pod.spec.resourceClaims of the Pod where this field is used. It makes that resource available inside a container. request string Request is the name chosen for a request in the referenced claim. If empty, everything from the claim is made available, otherwise only the result of this request. 7.1.61. .spec.routes Description routes defines additional external facing routes which should be created for the registry. Type array 7.1.62. .spec.routes[] Description ImageRegistryConfigRoute holds information on external route access to image registry. Type object Required name Property Type Description hostname string hostname for the route. name string name of the route to be created. secretName string secretName points to secret containing the certificates to be used by the route. 7.1.63. .spec.storage Description storage details for configuring registry storage, e.g. S3 bucket coordinates. Type object Property Type Description azure object azure represents configuration that uses Azure Blob Storage. emptyDir object emptyDir represents ephemeral storage on the pod's host node. WARNING: this storage cannot be used with more than 1 replica and is not suitable for production use. When the pod is removed from a node for any reason, the data in the emptyDir is deleted forever. gcs object gcs represents configuration that uses Google Cloud Storage. ibmcos object ibmcos represents configuration that uses IBM Cloud Object Storage. managementState string managementState indicates if the operator manages the underlying storage unit. If Managed the operator will remove the storage when this operator gets Removed. oss object Oss represents configuration that uses Alibaba Cloud Object Storage Service. pvc object pvc represents configuration that uses a PersistentVolumeClaim. s3 object s3 represents configuration that uses Amazon Simple Storage Service. swift object swift represents configuration that uses OpenStack Object Storage. 7.1.64. .spec.storage.azure Description azure represents configuration that uses Azure Blob Storage. Type object Property Type Description accountName string accountName defines the account to be used by the registry. cloudName string cloudName is the name of the Azure cloud environment to be used by the registry. If empty, the operator will set it based on the infrastructure object. container string container defines Azure's container to be used by registry. networkAccess object networkAccess defines the network access properties for the storage account. Defaults to type: External. 7.1.65. .spec.storage.azure.networkAccess Description networkAccess defines the network access properties for the storage account. Defaults to type: External. Type object Property Type Description internal object internal defines the vnet and subnet names to configure a private endpoint and connect it to the storage account in order to make it private. when type: Internal and internal is unset, the image registry operator will discover vnet and subnet names, and generate a private endpoint name. type string type is the network access level to be used for the storage account. 
type: Internal means the storage account will be private, type: External means the storage account will be publicly accessible. Internal storage accounts are only exposed within the cluster's vnet. External storage accounts are publicly exposed on the internet. When type: Internal is used, a vnetName, subNetName and privateEndpointName may optionally be specified. If unspecificed, the image registry operator will discover vnet and subnet names, and generate a privateEndpointName. Defaults to "External". 7.1.66. .spec.storage.azure.networkAccess.internal Description internal defines the vnet and subnet names to configure a private endpoint and connect it to the storage account in order to make it private. when type: Internal and internal is unset, the image registry operator will discover vnet and subnet names, and generate a private endpoint name. Type object Property Type Description networkResourceGroupName string networkResourceGroupName is the resource group name where the cluster's vnet and subnet are. When omitted, the registry operator will use the cluster resource group (from in the infrastructure status). If you set a networkResourceGroupName on your install-config.yaml, that value will be used automatically (for clusters configured with publish:Internal). Note that both vnet and subnet must be in the same resource group. It must be between 1 and 90 characters in length and must consist only of alphanumeric characters, hyphens (-), periods (.) and underscores (_), and not end with a period. privateEndpointName string privateEndpointName is the name of the private endpoint for the registry. When provided, the registry will use it as the name of the private endpoint it will create for the storage account. When omitted, the registry will generate one. It must be between 2 and 64 characters in length and must consist only of alphanumeric characters, hyphens (-), periods (.) and underscores (_). It must start with an alphanumeric character and end with an alphanumeric character or an underscore. subnetName string subnetName is the name of the subnet the registry operates in. When omitted, the registry operator will discover and set this by using the kubernetes.io_cluster.<cluster-id> tag in the vnet resource, then using one of listed subnets. Advanced cluster network configurations that use network security groups to protect subnets should ensure the provided subnetName has access to Azure Storage service. It must be between 1 and 80 characters in length and must consist only of alphanumeric characters, hyphens (-), periods (.) and underscores (_). vnetName string vnetName is the name of the vnet the registry operates in. When omitted, the registry operator will discover and set this by using the kubernetes.io_cluster.<cluster-id> tag in the vnet resource. This tag is set automatically by the installer. Commonly, this will be the same vnet as the cluster. Advanced cluster network configurations should ensure the provided vnetName is the vnet of the nodes where the image registry pods are running from. It must be between 2 and 64 characters in length and must consist only of alphanumeric characters, hyphens (-), periods (.) and underscores (_). It must start with an alphanumeric character and end with an alphanumeric character or an underscore. 7.1.67. .spec.storage.emptyDir Description emptyDir represents ephemeral storage on the pod's host node. WARNING: this storage cannot be used with more than 1 replica and is not suitable for production use. 
When the pod is removed from a node for any reason, the data in the emptyDir is deleted forever. Type object 7.1.68. .spec.storage.gcs Description gcs represents configuration that uses Google Cloud Storage. Type object Property Type Description bucket string bucket is the bucket name in which you want to store the registry's data. Optional, will be generated if not provided. keyID string keyID is the KMS key ID to use for encryption. Optional, buckets are encrypted by default on GCP. This allows for the use of a custom encryption key. projectID string projectID is the Project ID of the GCP project that this bucket should be associated with. region string region is the GCS location in which your bucket exists. Optional, will be set based on the installed GCS Region. 7.1.69. .spec.storage.ibmcos Description ibmcos represents configuration that uses IBM Cloud Object Storage. Type object Property Type Description bucket string bucket is the bucket name in which you want to store the registry's data. Optional, will be generated if not provided. location string location is the IBM Cloud location in which your bucket exists. Optional, will be set based on the installed IBM Cloud location. resourceGroupName string resourceGroupName is the name of the IBM Cloud resource group that this bucket and its service instance is associated with. Optional, will be set based on the installed IBM Cloud resource group. resourceKeyCRN string resourceKeyCRN is the CRN of the IBM Cloud resource key that is created for the service instance. Commonly referred as a service credential and must contain HMAC type credentials. Optional, will be computed if not provided. serviceInstanceCRN string serviceInstanceCRN is the CRN of the IBM Cloud Object Storage service instance that this bucket is associated with. Optional, will be computed if not provided. 7.1.70. .spec.storage.oss Description Oss represents configuration that uses Alibaba Cloud Object Storage Service. Type object Property Type Description bucket string Bucket is the bucket name in which you want to store the registry's data. About Bucket naming, more details you can look at the [official documentation]( https://www.alibabacloud.com/help/doc-detail/257087.htm ) Empty value means no opinion and the platform chooses the a default, which is subject to change over time. Currently the default will be autogenerated in the form of <clusterid>-image-registry-<region>-<random string 27 chars> encryption object Encryption specifies whether you would like your data encrypted on the server side. More details, you can look cat the [official documentation]( https://www.alibabacloud.com/help/doc-detail/117914.htm ) endpointAccessibility string EndpointAccessibility specifies whether the registry use the OSS VPC internal endpoint Empty value means no opinion and the platform chooses the a default, which is subject to change over time. Currently the default is Internal . region string Region is the Alibaba Cloud Region in which your bucket exists. For a list of regions, you can look at the [official documentation]( https://www.alibabacloud.com/help/doc-detail/31837.html ). Empty value means no opinion and the platform chooses the a default, which is subject to change over time. Currently the default will be based on the installed Alibaba Cloud Region. 7.1.71. .spec.storage.oss.encryption Description Encryption specifies whether you would like your data encrypted on the server side. 
More details, you can look at the [official documentation]( https://www.alibabacloud.com/help/doc-detail/117914.htm ) Type object Property Type Description kms object KMS (key management service) is an encryption type that holds the struct for KMS KeyID method string Method defines the different encryption modes available Empty value means no opinion and the platform chooses a default, which is subject to change over time. Currently the default is AES256 . 7.1.72. .spec.storage.oss.encryption.kms Description KMS (key management service) is an encryption type that holds the struct for KMS KeyID Type object Required keyID Property Type Description keyID string KeyID holds the KMS encryption key ID 7.1.73. .spec.storage.pvc Description pvc represents configuration that uses a PersistentVolumeClaim. Type object Property Type Description claim string claim defines the Persistent Volume Claim's name to be used. 7.1.74. .spec.storage.s3 Description s3 represents configuration that uses Amazon Simple Storage Service. Type object Property Type Description bucket string bucket is the bucket name in which you want to store the registry's data. Optional, will be generated if not provided. chunkSizeMiB integer chunkSizeMiB defines the size of the multipart upload chunks of the S3 API. The S3 API requires multipart upload chunks to be at least 5MiB. When omitted, this means no opinion and the platform is left to choose a reasonable default, which is subject to change over time. The current default value is 10 MiB. The value is an integer number of MiB. The minimum value is 5 and the maximum value is 5120 (5 GiB). cloudFront object cloudFront configures Amazon Cloudfront as the storage middleware in a registry. encrypt boolean encrypt specifies whether the registry stores the image in encrypted format or not. Optional, defaults to false. keyID string keyID is the KMS key ID to use for encryption. Optional, Encrypt must be true, or this parameter is ignored. region string region is the AWS region in which your bucket exists. Optional, will be set based on the installed AWS Region. regionEndpoint string regionEndpoint is the endpoint for S3 compatible storage services. It should be a valid URL with scheme, e.g. https://s3.example.com . Optional, defaults based on the Region that is provided. trustedCA object trustedCA is a reference to a config map containing a CA bundle. The image registry and its operator use certificates from this bundle to verify S3 server certificates. The namespace for the config map referenced by trustedCA is "openshift-config". The key for the bundle in the config map is "ca-bundle.crt". virtualHostedStyle boolean virtualHostedStyle enables using S3 virtual hosted style bucket paths with a custom RegionEndpoint Optional, defaults to false. 7.1.75. .spec.storage.s3.cloudFront Description cloudFront configures Amazon Cloudfront as the storage middleware in a registry. Type object Required baseURL keypairID privateKey Property Type Description baseURL string baseURL contains the SCHEME://HOST[/PATH] at which Cloudfront is served. duration string duration is the duration of the Cloudfront session. keypairID string keypairID is the key pair ID provided by AWS. privateKey object privateKey points to secret containing the private key, provided by AWS. 7.1.76. .spec.storage.s3.cloudFront.privateKey Description privateKey points to secret containing the private key, provided by AWS. Type object Required key Property Type Description key string The key of the secret to select from.
Must be a valid secret key. name string Name of the referent. This field is effectively required, but due to backwards compatibility is allowed to be empty. Instances of this type with an empty value here are almost certainly wrong. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names optional boolean Specify whether the Secret or its key must be defined 7.1.77. .spec.storage.s3.trustedCA Description trustedCA is a reference to a config map containing a CA bundle. The image registry and its operator use certificates from this bundle to verify S3 server certificates. The namespace for the config map referenced by trustedCA is "openshift-config". The key for the bundle in the config map is "ca-bundle.crt". Type object Property Type Description name string name is the metadata.name of the referenced config map. This field must adhere to standard config map naming restrictions. The name must consist solely of alphanumeric characters, hyphens (-) and periods (.). It has a maximum length of 253 characters. If this field is not specified or is empty string, the default trust bundle will be used. 7.1.78. .spec.storage.swift Description swift represents configuration that uses OpenStack Object Storage. Type object Property Type Description authURL string authURL defines the URL for obtaining an authentication token. authVersion string authVersion specifies the OpenStack Auth's version. container string container defines the name of Swift container where to store the registry's data. domain string domain specifies Openstack's domain name for Identity v3 API. domainID string domainID specifies Openstack's domain id for Identity v3 API. regionName string regionName defines Openstack's region in which container exists. tenant string tenant defines Openstack tenant name to be used by registry. tenantID string tenant defines Openstack tenant id to be used by registry. 7.1.79. .spec.tolerations Description tolerations defines the tolerations for the registry pod. Type array 7.1.80. .spec.tolerations[] Description The pod this Toleration is attached to tolerates any taint that matches the triple <key,value,effect> using the matching operator <operator>. Type object Property Type Description effect string Effect indicates the taint effect to match. Empty means match all taint effects. When specified, allowed values are NoSchedule, PreferNoSchedule and NoExecute. key string Key is the taint key that the toleration applies to. Empty means match all taint keys. If the key is empty, operator must be Exists; this combination means to match all values and all keys. operator string Operator represents a key's relationship to the value. Valid operators are Exists and Equal. Defaults to Equal. Exists is equivalent to wildcard for value, so that a pod can tolerate all taints of a particular category. tolerationSeconds integer TolerationSeconds represents the period of time the toleration (which must be of effect NoExecute, otherwise this field is ignored) tolerates the taint. By default, it is not set, which means tolerate the taint forever (do not evict). Zero and negative values will be treated as 0 (evict immediately) by the system. value string Value is the taint value the toleration matches to. If the operator is Exists, the value should be empty, otherwise just a regular string. 7.1.81. .spec.topologySpreadConstraints Description topologySpreadConstraints specify how to spread matching pods among the given topology. Type array 7.1.82. 
.spec.topologySpreadConstraints[] Description TopologySpreadConstraint specifies how to spread matching pods among the given topology. Type object Required maxSkew topologyKey whenUnsatisfiable Property Type Description labelSelector object LabelSelector is used to find matching pods. Pods that match this label selector are counted to determine the number of pods in their corresponding topology domain. matchLabelKeys array (string) MatchLabelKeys is a set of pod label keys to select the pods over which spreading will be calculated. The keys are used to lookup values from the incoming pod labels, those key-value labels are ANDed with labelSelector to select the group of existing pods over which spreading will be calculated for the incoming pod. The same key is forbidden to exist in both MatchLabelKeys and LabelSelector. MatchLabelKeys cannot be set when LabelSelector isn't set. Keys that don't exist in the incoming pod labels will be ignored. A null or empty list means only match against labelSelector. This is a beta field and requires the MatchLabelKeysInPodTopologySpread feature gate to be enabled (enabled by default). maxSkew integer MaxSkew describes the degree to which pods may be unevenly distributed. When whenUnsatisfiable=DoNotSchedule , it is the maximum permitted difference between the number of matching pods in the target topology and the global minimum. The global minimum is the minimum number of matching pods in an eligible domain or zero if the number of eligible domains is less than MinDomains. For example, in a 3-zone cluster, MaxSkew is set to 1, and pods with the same labelSelector spread as 2/2/1: In this case, the global minimum is 1. | zone1 | zone2 | zone3 | | P P | P P | P | - if MaxSkew is 1, incoming pod can only be scheduled to zone3 to become 2/2/2; scheduling it onto zone1(zone2) would make the ActualSkew(3-1) on zone1(zone2) violate MaxSkew(1). - if MaxSkew is 2, incoming pod can be scheduled onto any zone. When whenUnsatisfiable=ScheduleAnyway , it is used to give higher precedence to topologies that satisfy it. It's a required field. Default value is 1 and 0 is not allowed. minDomains integer MinDomains indicates a minimum number of eligible domains. When the number of eligible domains with matching topology keys is less than minDomains, Pod Topology Spread treats "global minimum" as 0, and then the calculation of Skew is performed. And when the number of eligible domains with matching topology keys equals or greater than minDomains, this value has no effect on scheduling. As a result, when the number of eligible domains is less than minDomains, scheduler won't schedule more than maxSkew Pods to those domains. If value is nil, the constraint behaves as if MinDomains is equal to 1. Valid values are integers greater than 0. When value is not nil, WhenUnsatisfiable must be DoNotSchedule. For example, in a 3-zone cluster, MaxSkew is set to 2, MinDomains is set to 5 and pods with the same labelSelector spread as 2/2/2: | zone1 | zone2 | zone3 | | P P | P P | P P | The number of domains is less than 5(MinDomains), so "global minimum" is treated as 0. In this situation, new pod with the same labelSelector cannot be scheduled, because computed skew will be 3(3 - 0) if new Pod is scheduled to any of the three zones, it will violate MaxSkew. nodeAffinityPolicy string NodeAffinityPolicy indicates how we will treat Pod's nodeAffinity/nodeSelector when calculating pod topology spread skew. 
Options are: - Honor: only nodes matching nodeAffinity/nodeSelector are included in the calculations. - Ignore: nodeAffinity/nodeSelector are ignored. All nodes are included in the calculations. If this value is nil, the behavior is equivalent to the Honor policy. This is a beta-level feature default enabled by the NodeInclusionPolicyInPodTopologySpread feature flag. nodeTaintsPolicy string NodeTaintsPolicy indicates how we will treat node taints when calculating pod topology spread skew. Options are: - Honor: nodes without taints, along with tainted nodes for which the incoming pod has a toleration, are included. - Ignore: node taints are ignored. All nodes are included. If this value is nil, the behavior is equivalent to the Ignore policy. This is a beta-level feature default enabled by the NodeInclusionPolicyInPodTopologySpread feature flag. topologyKey string TopologyKey is the key of node labels. Nodes that have a label with this key and identical values are considered to be in the same topology. We consider each <key, value> as a "bucket", and try to put balanced number of pods into each bucket. We define a domain as a particular instance of a topology. Also, we define an eligible domain as a domain whose nodes meet the requirements of nodeAffinityPolicy and nodeTaintsPolicy. e.g. If TopologyKey is "kubernetes.io/hostname", each Node is a domain of that topology. And, if TopologyKey is "topology.kubernetes.io/zone", each zone is a domain of that topology. It's a required field. whenUnsatisfiable string WhenUnsatisfiable indicates how to deal with a pod if it doesn't satisfy the spread constraint. - DoNotSchedule (default) tells the scheduler not to schedule it. - ScheduleAnyway tells the scheduler to schedule the pod in any location, but giving higher precedence to topologies that would help reduce the skew. A constraint is considered "Unsatisfiable" for an incoming pod if and only if every possible node assignment for that pod would violate "MaxSkew" on some topology. For example, in a 3-zone cluster, MaxSkew is set to 1, and pods with the same labelSelector spread as 3/1/1: | zone1 | zone2 | zone3 | | P P P | P | P | If WhenUnsatisfiable is set to DoNotSchedule, incoming pod can only be scheduled to zone2(zone3) to become 3/2/1(3/1/2) as ActualSkew(2-1) on zone2(zone3) satisfies MaxSkew(1). In other words, the cluster can still be imbalanced, but scheduler won't make it more imbalanced. It's a required field. 7.1.83. .spec.topologySpreadConstraints[].labelSelector Description LabelSelector is used to find matching pods. Pods that match this label selector are counted to determine the number of pods in their corresponding topology domain. Type object Property Type Description matchExpressions array matchExpressions is a list of label selector requirements. The requirements are ANDed. matchExpressions[] object A label selector requirement is a selector that contains values, a key, and an operator that relates the key and values. matchLabels object (string) matchLabels is a map of {key,value} pairs. A single {key,value} in the matchLabels map is equivalent to an element of matchExpressions, whose key field is "key", the operator is "In", and the values array contains only "value". The requirements are ANDed. 7.1.84. .spec.topologySpreadConstraints[].labelSelector.matchExpressions Description matchExpressions is a list of label selector requirements. The requirements are ANDed. Type array 7.1.85. 
.spec.topologySpreadConstraints[].labelSelector.matchExpressions[] Description A label selector requirement is a selector that contains values, a key, and an operator that relates the key and values. Type object Required key operator Property Type Description key string key is the label key that the selector applies to. operator string operator represents a key's relationship to a set of values. Valid operators are In, NotIn, Exists and DoesNotExist. values array (string) values is an array of string values. If the operator is In or NotIn, the values array must be non-empty. If the operator is Exists or DoesNotExist, the values array must be empty. This array is replaced during a strategic merge patch. 7.1.86. .status Description ImageRegistryStatus reports image registry operational status. Type object Required storage storageManaged Property Type Description conditions array conditions is a list of conditions and their status conditions[] object OperatorCondition is just the standard condition fields. generations array generations are used to determine when an item needs to be reconciled or has changed in a way that needs a reaction. generations[] object GenerationStatus keeps track of the generation for a given resource so that decisions about forced updates can be made. latestAvailableRevision integer latestAvailableRevision is the deploymentID of the most recent deployment observedGeneration integer observedGeneration is the last generation change you've dealt with readyReplicas integer readyReplicas indicates how many replicas are ready and at the desired state storage object storage indicates the current applied storage configuration of the registry. storageManaged boolean storageManaged is deprecated, please refer to Storage.managementState version string version is the level this availability applies to 7.1.87. .status.conditions Description conditions is a list of conditions and their status Type array 7.1.88. .status.conditions[] Description OperatorCondition is just the standard condition fields. Type object Required lastTransitionTime status type Property Type Description lastTransitionTime string lastTransitionTime is the last time the condition transitioned from one status to another. This should be when the underlying condition changed. If that is not known, then using the time when the API field changed is acceptable. message string reason string status string status of the condition, one of True, False, Unknown. type string type of condition in CamelCase or in foo.example.com/CamelCase. 7.1.89. .status.generations Description generations are used to determine when an item needs to be reconciled or has changed in a way that needs a reaction. Type array 7.1.90. .status.generations[] Description GenerationStatus keeps track of the generation for a given resource so that decisions about forced updates can be made. Type object Required group name namespace resource Property Type Description group string group is the group of the thing you're tracking hash string hash is an optional field set for resources without generation that are content sensitive like secrets and configmaps lastGeneration integer lastGeneration is the last generation of the workload controller involved name string name is the name of the thing you're tracking namespace string namespace is where the thing you're tracking is resource string resource is the resource type of the thing you're tracking 7.1.91. 
.status.storage Description storage indicates the current applied storage configuration of the registry. Type object Property Type Description azure object azure represents configuration that uses Azure Blob Storage. emptyDir object emptyDir represents ephemeral storage on the pod's host node. WARNING: this storage cannot be used with more than 1 replica and is not suitable for production use. When the pod is removed from a node for any reason, the data in the emptyDir is deleted forever. gcs object gcs represents configuration that uses Google Cloud Storage. ibmcos object ibmcos represents configuration that uses IBM Cloud Object Storage. managementState string managementState indicates if the operator manages the underlying storage unit. If Managed the operator will remove the storage when this operator gets Removed. oss object Oss represents configuration that uses Alibaba Cloud Object Storage Service. pvc object pvc represents configuration that uses a PersistentVolumeClaim. s3 object s3 represents configuration that uses Amazon Simple Storage Service. swift object swift represents configuration that uses OpenStack Object Storage. 7.1.92. .status.storage.azure Description azure represents configuration that uses Azure Blob Storage. Type object Property Type Description accountName string accountName defines the account to be used by the registry. cloudName string cloudName is the name of the Azure cloud environment to be used by the registry. If empty, the operator will set it based on the infrastructure object. container string container defines Azure's container to be used by registry. networkAccess object networkAccess defines the network access properties for the storage account. Defaults to type: External. 7.1.93. .status.storage.azure.networkAccess Description networkAccess defines the network access properties for the storage account. Defaults to type: External. Type object Property Type Description internal object internal defines the vnet and subnet names to configure a private endpoint and connect it to the storage account in order to make it private. when type: Internal and internal is unset, the image registry operator will discover vnet and subnet names, and generate a private endpoint name. type string type is the network access level to be used for the storage account. type: Internal means the storage account will be private, type: External means the storage account will be publicly accessible. Internal storage accounts are only exposed within the cluster's vnet. External storage accounts are publicly exposed on the internet. When type: Internal is used, a vnetName, subNetName and privateEndpointName may optionally be specified. If unspecificed, the image registry operator will discover vnet and subnet names, and generate a privateEndpointName. Defaults to "External". 7.1.94. .status.storage.azure.networkAccess.internal Description internal defines the vnet and subnet names to configure a private endpoint and connect it to the storage account in order to make it private. when type: Internal and internal is unset, the image registry operator will discover vnet and subnet names, and generate a private endpoint name. Type object Property Type Description networkResourceGroupName string networkResourceGroupName is the resource group name where the cluster's vnet and subnet are. When omitted, the registry operator will use the cluster resource group (from in the infrastructure status). 
If you set a networkResourceGroupName on your install-config.yaml, that value will be used automatically (for clusters configured with publish:Internal). Note that both vnet and subnet must be in the same resource group. It must be between 1 and 90 characters in length and must consist only of alphanumeric characters, hyphens (-), periods (.) and underscores (_), and not end with a period. privateEndpointName string privateEndpointName is the name of the private endpoint for the registry. When provided, the registry will use it as the name of the private endpoint it will create for the storage account. When omitted, the registry will generate one. It must be between 2 and 64 characters in length and must consist only of alphanumeric characters, hyphens (-), periods (.) and underscores (_). It must start with an alphanumeric character and end with an alphanumeric character or an underscore. subnetName string subnetName is the name of the subnet the registry operates in. When omitted, the registry operator will discover and set this by using the kubernetes.io_cluster.<cluster-id> tag in the vnet resource, then using one of listed subnets. Advanced cluster network configurations that use network security groups to protect subnets should ensure the provided subnetName has access to Azure Storage service. It must be between 1 and 80 characters in length and must consist only of alphanumeric characters, hyphens (-), periods (.) and underscores (_). vnetName string vnetName is the name of the vnet the registry operates in. When omitted, the registry operator will discover and set this by using the kubernetes.io_cluster.<cluster-id> tag in the vnet resource. This tag is set automatically by the installer. Commonly, this will be the same vnet as the cluster. Advanced cluster network configurations should ensure the provided vnetName is the vnet of the nodes where the image registry pods are running from. It must be between 2 and 64 characters in length and must consist only of alphanumeric characters, hyphens (-), periods (.) and underscores (_). It must start with an alphanumeric character and end with an alphanumeric character or an underscore. 7.1.95. .status.storage.emptyDir Description emptyDir represents ephemeral storage on the pod's host node. WARNING: this storage cannot be used with more than 1 replica and is not suitable for production use. When the pod is removed from a node for any reason, the data in the emptyDir is deleted forever. Type object 7.1.96. .status.storage.gcs Description gcs represents configuration that uses Google Cloud Storage. Type object Property Type Description bucket string bucket is the bucket name in which you want to store the registry's data. Optional, will be generated if not provided. keyID string keyID is the KMS key ID to use for encryption. Optional, buckets are encrypted by default on GCP. This allows for the use of a custom encryption key. projectID string projectID is the Project ID of the GCP project that this bucket should be associated with. region string region is the GCS location in which your bucket exists. Optional, will be set based on the installed GCS Region. 7.1.97. .status.storage.ibmcos Description ibmcos represents configuration that uses IBM Cloud Object Storage. Type object Property Type Description bucket string bucket is the bucket name in which you want to store the registry's data. Optional, will be generated if not provided. location string location is the IBM Cloud location in which your bucket exists. 
Optional, will be set based on the installed IBM Cloud location. resourceGroupName string resourceGroupName is the name of the IBM Cloud resource group that this bucket and its service instance is associated with. Optional, will be set based on the installed IBM Cloud resource group. resourceKeyCRN string resourceKeyCRN is the CRN of the IBM Cloud resource key that is created for the service instance. Commonly referred as a service credential and must contain HMAC type credentials. Optional, will be computed if not provided. serviceInstanceCRN string serviceInstanceCRN is the CRN of the IBM Cloud Object Storage service instance that this bucket is associated with. Optional, will be computed if not provided. 7.1.98. .status.storage.oss Description Oss represents configuration that uses Alibaba Cloud Object Storage Service. Type object Property Type Description bucket string Bucket is the bucket name in which you want to store the registry's data. About Bucket naming, more details you can look at the [official documentation]( https://www.alibabacloud.com/help/doc-detail/257087.htm ) Empty value means no opinion and the platform chooses the a default, which is subject to change over time. Currently the default will be autogenerated in the form of <clusterid>-image-registry-<region>-<random string 27 chars> encryption object Encryption specifies whether you would like your data encrypted on the server side. More details, you can look cat the [official documentation]( https://www.alibabacloud.com/help/doc-detail/117914.htm ) endpointAccessibility string EndpointAccessibility specifies whether the registry use the OSS VPC internal endpoint Empty value means no opinion and the platform chooses the a default, which is subject to change over time. Currently the default is Internal . region string Region is the Alibaba Cloud Region in which your bucket exists. For a list of regions, you can look at the [official documentation]( https://www.alibabacloud.com/help/doc-detail/31837.html ). Empty value means no opinion and the platform chooses the a default, which is subject to change over time. Currently the default will be based on the installed Alibaba Cloud Region. 7.1.99. .status.storage.oss.encryption Description Encryption specifies whether you would like your data encrypted on the server side. More details, you can look cat the [official documentation]( https://www.alibabacloud.com/help/doc-detail/117914.htm ) Type object Property Type Description kms object KMS (key management service) is an encryption type that holds the struct for KMS KeyID method string Method defines the different encrytion modes available Empty value means no opinion and the platform chooses the a default, which is subject to change over time. Currently the default is AES256 . 7.1.100. .status.storage.oss.encryption.kms Description KMS (key management service) is an encryption type that holds the struct for KMS KeyID Type object Required keyID Property Type Description keyID string KeyID holds the KMS encryption key ID 7.1.101. .status.storage.pvc Description pvc represents configuration that uses a PersistentVolumeClaim. Type object Property Type Description claim string claim defines the Persisent Volume Claim's name to be used. 7.1.102. .status.storage.s3 Description s3 represents configuration that uses Amazon Simple Storage Service. Type object Property Type Description bucket string bucket is the bucket name in which you want to store the registry's data. Optional, will be generated if not provided. 
chunkSizeMiB integer chunkSizeMiB defines the size of the multipart upload chunks of the S3 API. The S3 API requires multipart upload chunks to be at least 5MiB. When omitted, this means no opinion and the platform is left to choose a reasonable default, which is subject to change over time. The current default value is 10 MiB. The value is an integer number of MiB. The minimum value is 5 and the maximum value is 5120 (5 GiB). cloudFront object cloudFront configures Amazon Cloudfront as the storage middleware in a registry. encrypt boolean encrypt specifies whether the registry stores the image in encrypted format or not. Optional, defaults to false. keyID string keyID is the KMS key ID to use for encryption. Optional, Encrypt must be true, or this parameter is ignored. region string region is the AWS region in which your bucket exists. Optional, will be set based on the installed AWS Region. regionEndpoint string regionEndpoint is the endpoint for S3 compatible storage services. It should be a valid URL with scheme, e.g. https://s3.example.com . Optional, defaults based on the Region that is provided. trustedCA object trustedCA is a reference to a config map containing a CA bundle. The image registry and its operator use certificates from this bundle to verify S3 server certificates. The namespace for the config map referenced by trustedCA is "openshift-config". The key for the bundle in the config map is "ca-bundle.crt". virtualHostedStyle boolean virtualHostedStyle enables using S3 virtual hosted style bucket paths with a custom RegionEndpoint Optional, defaults to false. 7.1.103. .status.storage.s3.cloudFront Description cloudFront configures Amazon Cloudfront as the storage middleware in a registry. Type object Required baseURL keypairID privateKey Property Type Description baseURL string baseURL contains the SCHEME://HOST[/PATH] at which Cloudfront is served. duration string duration is the duration of the Cloudfront session. keypairID string keypairID is key pair ID provided by AWS. privateKey object privateKey points to secret containing the private key, provided by AWS. 7.1.104. .status.storage.s3.cloudFront.privateKey Description privateKey points to secret containing the private key, provided by AWS. Type object Required key Property Type Description key string The key of the secret to select from. Must be a valid secret key. name string Name of the referent. This field is effectively required, but due to backwards compatibility is allowed to be empty. Instances of this type with an empty value here are almost certainly wrong. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names optional boolean Specify whether the Secret or its key must be defined 7.1.105. .status.storage.s3.trustedCA Description trustedCA is a reference to a config map containing a CA bundle. The image registry and its operator use certificates from this bundle to verify S3 server certificates. The namespace for the config map referenced by trustedCA is "openshift-config". The key for the bundle in the config map is "ca-bundle.crt". Type object Property Type Description name string name is the metadata.name of the referenced config map. This field must adhere to standard config map naming restrictions. The name must consist solely of alphanumeric characters, hyphens (-) and periods (.). It has a maximum length of 253 characters. If this field is not specified or is empty string, the default trust bundle will be used. 7.1.106. 
.status.storage.swift Description swift represents configuration that uses OpenStack Object Storage. Type object Property Type Description authURL string authURL defines the URL for obtaining an authentication token. authVersion string authVersion specifies the OpenStack Auth's version. container string container defines the name of Swift container where to store the registry's data. domain string domain specifies Openstack's domain name for Identity v3 API. domainID string domainID specifies Openstack's domain id for Identity v3 API. regionName string regionName defines Openstack's region in which container exists. tenant string tenant defines Openstack tenant name to be used by registry. tenantID string tenant defines Openstack tenant id to be used by registry. 7.2. API endpoints The following API endpoints are available: /apis/imageregistry.operator.openshift.io/v1/configs DELETE : delete collection of Config GET : list objects of kind Config POST : create a Config /apis/imageregistry.operator.openshift.io/v1/configs/{name} DELETE : delete a Config GET : read the specified Config PATCH : partially update the specified Config PUT : replace the specified Config /apis/imageregistry.operator.openshift.io/v1/configs/{name}/status GET : read status of the specified Config PATCH : partially update status of the specified Config PUT : replace status of the specified Config 7.2.1. /apis/imageregistry.operator.openshift.io/v1/configs HTTP method DELETE Description delete collection of Config Table 7.1. HTTP responses HTTP code Reponse body 200 - OK Status schema 401 - Unauthorized Empty HTTP method GET Description list objects of kind Config Table 7.2. HTTP responses HTTP code Reponse body 200 - OK ConfigList schema 401 - Unauthorized Empty HTTP method POST Description create a Config Table 7.3. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 7.4. Body parameters Parameter Type Description body Config schema Table 7.5. HTTP responses HTTP code Reponse body 200 - OK Config schema 201 - Created Config schema 202 - Accepted Config schema 401 - Unauthorized Empty 7.2.2. /apis/imageregistry.operator.openshift.io/v1/configs/{name} Table 7.6. Global path parameters Parameter Type Description name string name of the Config HTTP method DELETE Description delete a Config Table 7.7. 
Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed Table 7.8. HTTP responses HTTP code Reponse body 200 - OK Status schema 202 - Accepted Status schema 401 - Unauthorized Empty HTTP method GET Description read the specified Config Table 7.9. HTTP responses HTTP code Reponse body 200 - OK Config schema 401 - Unauthorized Empty HTTP method PATCH Description partially update the specified Config Table 7.10. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 7.11. HTTP responses HTTP code Reponse body 200 - OK Config schema 401 - Unauthorized Empty HTTP method PUT Description replace the specified Config Table 7.12. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 7.13. Body parameters Parameter Type Description body Config schema Table 7.14. 
HTTP responses HTTP code Reponse body 200 - OK Config schema 201 - Created Config schema 401 - Unauthorized Empty 7.2.3. /apis/imageregistry.operator.openshift.io/v1/configs/{name}/status Table 7.15. Global path parameters Parameter Type Description name string name of the Config HTTP method GET Description read status of the specified Config Table 7.16. HTTP responses HTTP code Reponse body 200 - OK Config schema 401 - Unauthorized Empty HTTP method PATCH Description partially update status of the specified Config Table 7.17. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 7.18. HTTP responses HTTP code Reponse body 200 - OK Config schema 401 - Unauthorized Empty HTTP method PUT Description replace status of the specified Config Table 7.19. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 7.20. Body parameters Parameter Type Description body Config schema Table 7.21. HTTP responses HTTP code Reponse body 200 - OK Config schema 201 - Created Config schema 401 - Unauthorized Empty
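The commands below are a minimal sketch of how a few of the spec and status fields documented above can be exercised against the cluster-scoped Config resource, which is named cluster. They assume a running OpenShift cluster with the image registry operator installed; the request limits, bucket name, and region are placeholder values chosen for illustration, not recommended settings.

# Limit concurrent registry API calls using spec.requests.read and spec.requests.write.
oc patch configs.imageregistry.operator.openshift.io/cluster --type=merge \
  -p '{"spec":{"requests":{"read":{"maxRunning":10,"maxInQueue":50,"maxWaitInQueue":"30s"},"write":{"maxRunning":5,"maxInQueue":25,"maxWaitInQueue":"30s"}}}}'

# Point the registry at S3 storage using spec.storage.s3 (bucket and region are placeholders).
oc patch configs.imageregistry.operator.openshift.io/cluster --type=merge \
  -p '{"spec":{"storage":{"s3":{"bucket":"example-registry-bucket","region":"us-east-1","encrypt":true}}}}'

# Read back the storage configuration the operator has applied, reported in .status.storage.
oc get configs.imageregistry.operator.openshift.io/cluster -o jsonpath='{.status.storage}{"\n"}'

A merge patch leaves unrelated parts of spec untouched, which matters here because this Config resource is a singleton shared by the whole cluster.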
https://docs.redhat.com/en/documentation/openshift_container_platform/4.18/html/operator_apis/config-imageregistry-operator-openshift-io-v1
Chapter 130. KafkaMirrorMakerProducerSpec schema reference
Used in: KafkaMirrorMakerSpec Full list of KafkaMirrorMakerProducerSpec schema properties Configures a MirrorMaker producer. 130.1. abortOnSendFailure Use the producer.abortOnSendFailure property to configure how to handle message send failures from the producer. By default, if an error occurs when sending a message from Kafka MirrorMaker to a Kafka cluster: The Kafka MirrorMaker container is terminated in OpenShift. The container is then recreated. If the abortOnSendFailure option is set to false, message sending errors are ignored. 130.2. config Use the producer.config properties to configure Kafka options for the producer as keys. The values can be one of the following JSON types: String, Number, or Boolean. Exceptions You can specify and configure the options listed in the Apache Kafka configuration documentation for producers. However, Streams for Apache Kafka takes care of configuring and managing options related to the following, which cannot be changed: Kafka cluster bootstrap address Security (encryption, authentication, and authorization) Interceptors Properties with the following prefixes cannot be set: bootstrap.servers interceptor.classes sasl. security. ssl. If the config property contains an option that cannot be changed, it is disregarded, and a warning message is logged to the Cluster Operator log file. All other supported options are forwarded to MirrorMaker, including the following exceptions to the options configured by Streams for Apache Kafka: Any ssl configuration for supported TLS versions and cipher suites Important The Cluster Operator does not validate keys or values in the config object provided. If an invalid configuration is provided, the MirrorMaker cluster might not start or might become unstable. In this case, fix the configuration so that the Cluster Operator can roll out the new configuration to all MirrorMaker nodes. 130.3. KafkaMirrorMakerProducerSpec schema properties Property Property type Description bootstrapServers string A list of host:port pairs for establishing the initial connection to the Kafka cluster. abortOnSendFailure boolean Flag to set the MirrorMaker to exit on a failed send. Default value is true. authentication KafkaClientAuthenticationTls, KafkaClientAuthenticationScramSha256, KafkaClientAuthenticationScramSha512, KafkaClientAuthenticationPlain, KafkaClientAuthenticationOAuth Authentication configuration for connecting to the cluster. config map The MirrorMaker producer config. Properties with the following prefixes cannot be set: ssl., bootstrap.servers, sasl., security., interceptor.classes (with the exception of: ssl.endpoint.identification.algorithm, ssl.cipher.suites, ssl.protocol, ssl.enabled.protocols). tls ClientTls TLS configuration for connecting MirrorMaker to the cluster.
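As an illustration of how these producer options fit together, the following manifest is a sketch only: the producer block follows the schema in this chapter, while the apiVersion, metadata, consumer, and include values are assumptions added to make the resource complete enough to apply, and the bootstrap addresses are placeholders for your source and target clusters.

oc apply -f - <<'EOF'
apiVersion: kafka.strimzi.io/v1beta2
kind: KafkaMirrorMaker
metadata:
  name: my-mirror-maker
spec:
  replicas: 1
  consumer:
    bootstrapServers: source-cluster-kafka-bootstrap:9092
    groupId: my-mirror-maker-group
  include: ".*"
  producer:
    bootstrapServers: target-cluster-kafka-bootstrap:9092
    # With abortOnSendFailure set to false, send errors are ignored instead of
    # terminating and recreating the MirrorMaker container.
    abortOnSendFailure: false
    config:
      compression.type: gzip
      batch.size: 8192
      # Options prefixed with bootstrap.servers, ssl., sasl., security., or
      # interceptor.classes would be disregarded here and logged as warnings.
EOF

Setting abortOnSendFailure to false trades delivery guarantees for availability: messages that fail to send are dropped rather than forcing a container restart.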
null
https://docs.redhat.com/en/documentation/red_hat_streams_for_apache_kafka/2.9/html/streams_for_apache_kafka_api_reference/type-KafkaMirrorMakerProducerSpec-reference
Providing feedback on JBoss EAP documentation
Providing feedback on JBoss EAP documentation To report an error or to improve our documentation, log in to your Red Hat Jira account and submit an issue. If you do not have a Red Hat Jira account, you will be prompted to create one. Procedure Click the following link to create a ticket . Include the Document URL , the section number, and a description of the issue. Enter a brief description of the issue in the Summary . Provide a detailed description of the issue or enhancement in the Description . Include a URL to where the issue occurs in the documentation. Clicking Submit creates and routes the issue to the appropriate documentation team.
null
https://docs.redhat.com/en/documentation/red_hat_jboss_enterprise_application_platform/7.4/html/getting_started_with_jboss_eap_for_openshift_container_platform/proc_providing-feedback-on-red-hat-documentation_default
Chapter 1. GFS2 Overview
Chapter 1. GFS2 Overview The Red Hat GFS2 file system is a 64-bit symmetric cluster file system which provides a shared namespace and manages coherency between multiple nodes sharing a common block device. A GFS2 file system is intended to provide a feature set which is as close as possible to a local file system, while at the same time enforcing full cluster coherency between nodes. In a few cases, the Linux file system API does not allow the clustered nature of GFS2 to be totally transparent; for example, programs using POSIX locks in GFS2 should avoid using the GETLK function since, in a clustered environment, the process ID may be for a different node in the cluster. In most cases, however, the functionality of a GFS2 file system is identical to that of a local file system. The Red Hat Enterprise Linux (RHEL) Resilient Storage Add-On provides GFS2, and it depends on the RHEL High Availability Add-On to provide the cluster management required by GFS2. For information about the High Availability Add-On, see Configuring and Managing a Red Hat Cluster . The gfs2.ko kernel module implements the GFS2 file system and is loaded on GFS2 cluster nodes. To get the best performance from GFS2, it is important to take into account the performance considerations which stem from the underlying design. Just like a local file system, GFS2 relies on the page cache in order to improve performance by local caching of frequently used data. In order to maintain coherency across the nodes in the cluster, cache control is provided by the glock state machine. For more information on glocks and their performance implications, see Section 2.9, "GFS2 Node Locking" . This chapter provides some basic, abbreviated information as background to help you understand GFS2. 1.1. GFS2 Support Limits Table 1.1, "GFS2 Support Limits" summarizes the current maximum file system size and number of nodes that GFS2 supports. Table 1.1. GFS2 Support Limits Maximum number of nodes 16 (x86, Power8 on PowerVM) 4 (s390x under z/VM) Maximum file system size 100TB on all supported architectures GFS2 is based on a 64-bit architecture, which can theoretically accommodate an 8 EB file system. If your system requires larger GFS2 file systems than are currently supported, contact your Red Hat service representative. Note Although a GFS2 file system can be implemented in a standalone system or as part of a cluster configuration, for the Red Hat Enterprise Linux 7 release Red Hat does not support the use of GFS2 as a single-node file system. Red Hat does support a number of high-performance single node file systems which are optimized for single node and thus have generally lower overhead than a cluster file system. Red Hat recommends using these file systems in preference to GFS2 in cases where only a single node needs to mount the file system. Red Hat will continue to support single-node GFS2 file systems for mounting snapshots of cluster file systems (for example, for backup purposes). When determining the size of your file system, you should consider your recovery needs. Running the fsck.gfs2 command on a very large file system can take a long time and consume a large amount of memory. Additionally, in the event of a disk or disk subsystem failure, recovery time is limited by the speed of your backup media. For information on the amount of memory the fsck.gfs2 command requires, see Section 3.10, "Repairing a GFS2 File System" .
While a GFS2 file system may be used outside of LVM, Red Hat supports only GFS2 file systems that are created on a CLVM logical volume. CLVM is included in the Resilient Storage Add-On. It is a cluster-wide implementation of LVM, enabled by the CLVM daemon clvmd , which manages LVM logical volumes in a cluster. The daemon makes it possible to use LVM2 to manage logical volumes across a cluster, allowing all nodes in the cluster to share the logical volumes. For information on the LVM volume manager, see Logical Volume Manager Administration . Note When you configure a GFS2 file system as a cluster file system, you must ensure that all nodes in the cluster have access to the shared storage. Asymmetric cluster configurations in which some nodes have access to the shared storage and others do not are not supported. This does not require that all nodes actually mount the GFS2 file system itself.
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/global_file_system_2/ch-overview-gfs2
6.4. Install Signed Packages from Well Known Repositories
6.4. Install Signed Packages from Well Known Repositories Software packages are published through repositories. All well known repositories support package signing. Package signing uses public key technology to prove that the package that was published by the repository has not been changed since the signature was applied. This provides some protection against installing software that may have been maliciously altered after the package was created but before you downloaded it. Using too many repositories, untrustworthy repositories, or repositories with unsigned packages has a higher risk of introducing malicious or vulnerable code into your system. Use caution when adding repositories to yum/software update.
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/security_guide/sect-security_guide-software_maintenance-install_signed_packages_from_well_known_repositories
2.5.2. DLM Tuning Options: Increase DLM Table Sizes
2.5.2. DLM Tuning Options: Increase DLM Table Sizes DLM uses several tables to manage, coordinate, and pass lock information between nodes in the cluster. Increasing the size of the DLM tables might increase performance. In Red Hat Enterprise Linux 6.1 and later, the default sizes of these tables have been increased, but you can manually increase them with the following commands: These commands are not persistent and will not survive a reboot, so you must add them to one of the startup scripts and you must execute them before mounting any GFS2 file systems, or the changes will be silently ignored. For more detailed information on GFS2 node locking, see Section 2.9, "GFS2 Node Locking" .
[ "echo 1024 > /sys/kernel/config/dlm/cluster/lkbtbl_size echo 1024 > /sys/kernel/config/dlm/cluster/rsbtbl_size echo 1024 > /sys/kernel/config/dlm/cluster/dirtbl_size" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/global_file_system_2/s2-dlmtablesize-gfs2
Installing on Alibaba
Installing on Alibaba OpenShift Container Platform 4.13 Installing OpenShift Container Platform on Alibaba Cloud Red Hat OpenShift Documentation Team
null
https://docs.redhat.com/en/documentation/openshift_container_platform/4.13/html/installing_on_alibaba/index
Chapter 11. Deploying machine health checks
Chapter 11. Deploying machine health checks You can configure and deploy a machine health check to automatically repair damaged machines in a machine pool. Important You can use the advanced machine management and scaling capabilities only in clusters where the Machine API is operational. Clusters with user-provisioned infrastructure require additional validation and configuration to use the Machine API. Clusters with the infrastructure platform type none cannot use the Machine API. This limitation applies even if the compute machines that are attached to the cluster are installed on a platform that supports the feature. This parameter cannot be changed after installation. To view the platform type for your cluster, run the following command: USD oc get infrastructure cluster -o jsonpath='{.status.platform}' 11.1. About machine health checks Machine health checks automatically repair unhealthy machines in a particular machine pool. To monitor machine health, create a resource to define the configuration for a controller. Set a condition to check, such as staying in the NotReady status for five minutes or displaying a permanent condition in the node-problem-detector, and a label for the set of machines to monitor. Note You cannot apply a machine health check to a machine with the master role. The controller that observes a MachineHealthCheck resource checks for the defined condition. If a machine fails the health check, the machine is automatically deleted and a new one is created to take its place. When a machine is deleted, you see a machine deleted event. To limit the disruptive impact of the machine deletion, the controller drains and deletes only one node at a time. If there are more unhealthy machines than the maxUnhealthy threshold allows for in the targeted pool of machines, remediation stops, which enables manual intervention. Note Consider the timeouts carefully, accounting for workloads and requirements. Long timeouts can result in long periods of downtime for the workload on the unhealthy machine. Too short timeouts can result in a remediation loop. For example, the timeout for checking the NotReady status must be long enough to allow the machine to complete the startup process. To stop the check, remove the resource. 11.1.1. Limitations when deploying machine health checks There are limitations to consider before deploying a machine health check: Only machines owned by a machine set are remediated by a machine health check. Control plane machines are not currently supported and are not remediated if they are unhealthy. If the node for a machine is removed from the cluster, a machine health check considers the machine to be unhealthy and remediates it immediately. If the corresponding node for a machine does not join the cluster after the nodeStartupTimeout , the machine is remediated. A machine is remediated immediately if the Machine resource phase is Failed . Additional resources For more information about the node conditions you can define in a MachineHealthCheck CR, see About listing all the nodes in a cluster . For more information about short-circuiting, see Short-circuiting machine health check remediation . 11.2.
Sample MachineHealthCheck resource The MachineHealthCheck resource for all cloud-based installation types other than bare metal resembles the following YAML file: apiVersion: machine.openshift.io/v1beta1 kind: MachineHealthCheck metadata: name: example 1 namespace: openshift-machine-api spec: selector: matchLabels: machine.openshift.io/cluster-api-machine-role: <role> 2 machine.openshift.io/cluster-api-machine-type: <role> 3 machine.openshift.io/cluster-api-machineset: <cluster_name>-<label>-<zone> 4 unhealthyConditions: - type: "Ready" timeout: "300s" 5 status: "False" - type: "Ready" timeout: "300s" 6 status: "Unknown" maxUnhealthy: "40%" 7 nodeStartupTimeout: "10m" 8 1 Specify the name of the machine health check to deploy. 2 3 Specify a label for the machine pool that you want to check. 4 Specify the machine set to track in <cluster_name>-<label>-<zone> format. For example, prod-node-us-east-1a . 5 6 Specify the timeout duration for a node condition. If a condition is met for the duration of the timeout, the machine will be remediated. Long timeouts can result in long periods of downtime for a workload on an unhealthy machine. 7 Specify the number of machines allowed to be concurrently remediated in the targeted pool. This can be set as a percentage or an integer. If the number of unhealthy machines exceeds the limit set by maxUnhealthy , remediation is not performed. 8 Specify the timeout duration that a machine health check must wait for a node to join the cluster before a machine is determined to be unhealthy. Note The matchLabels are examples only; you must map your machine groups based on your specific needs. 11.2.1. Short-circuiting machine health check remediation Short circuiting ensures that machine health checks remediate machines only when the cluster is healthy. Short-circuiting is configured through the maxUnhealthy field in the MachineHealthCheck resource. If the user defines a value for the maxUnhealthy field, before remediating any machines, the MachineHealthCheck compares the value of maxUnhealthy with the number of machines within its target pool that it has determined to be unhealthy. Remediation is not performed if the number of unhealthy machines exceeds the maxUnhealthy limit. Important If maxUnhealthy is not set, the value defaults to 100% and the machines are remediated regardless of the state of the cluster. The appropriate maxUnhealthy value depends on the scale of the cluster you deploy and how many machines the MachineHealthCheck covers. For example, you can use the maxUnhealthy value to cover multiple machine sets across multiple availability zones so that if you lose an entire zone, your maxUnhealthy setting prevents further remediation within the cluster. In global Azure regions that do not have multiple availability zones, you can use availability sets to ensure high availability. The maxUnhealthy field can be set as either an integer or percentage. There are different remediation implementations depending on the maxUnhealthy value. 11.2.1.1. Setting maxUnhealthy by using an absolute value If maxUnhealthy is set to 2 : Remediation will be performed if 2 or fewer nodes are unhealthy Remediation will not be performed if 3 or more nodes are unhealthy These values are independent of how many machines are being checked by the machine health check. 11.2.1.2.
Setting maxUnhealthy by using percentages If maxUnhealthy is set to 40% and there are 25 machines being checked: Remediation will be performed if 10 or fewer nodes are unhealthy Remediation will not be performed if 11 or more nodes are unhealthy If maxUnhealthy is set to 40% and there are 6 machines being checked: Remediation will be performed if 2 or fewer nodes are unhealthy Remediation will not be performed if 3 or more nodes are unhealthy Note The allowed number of machines is rounded down when the maxUnhealthy percentage of the machines being checked is not a whole number. 11.3. Creating a MachineHealthCheck resource You can create a MachineHealthCheck resource for all MachineSets in your cluster. You should not create a MachineHealthCheck resource that targets control plane machines. Prerequisites Install the oc command line interface. Procedure Create a healthcheck.yml file that contains the definition of your machine health check. Apply the healthcheck.yml file to your cluster: USD oc apply -f healthcheck.yml You can configure and deploy a machine health check to detect and repair unhealthy bare metal nodes. 11.4. About power-based remediation of bare metal In a bare metal cluster, remediation of nodes is critical to ensuring the overall health of the cluster. Physically remediating a cluster can be challenging, and any delay in putting the machine into a safe or an operational state increases the time the cluster remains in a degraded state, and the risk that subsequent failures might bring the cluster offline. Power-based remediation helps counter such challenges. Instead of reprovisioning the nodes, power-based remediation uses a power controller to power off an inoperable node. This type of remediation is also called power fencing. OpenShift Container Platform uses the MachineHealthCheck controller to detect faulty bare metal nodes. Power-based remediation is fast and reboots faulty nodes instead of removing them from the cluster. Power-based remediation provides the following capabilities: Allows the recovery of control plane nodes Reduces the risk of data loss in hyperconverged environments Reduces the downtime associated with recovering physical machines 11.4.1. MachineHealthChecks on bare metal Machine deletion on a bare metal cluster triggers reprovisioning of a bare metal host. Usually bare metal reprovisioning is a lengthy process, during which the cluster is missing compute resources and applications might be interrupted. To change the default remediation process from machine deletion to host power-cycle, annotate the MachineHealthCheck resource with the machine.openshift.io/remediation-strategy: external-baremetal annotation. After you set the annotation, unhealthy machines are power-cycled by using BMC credentials. 11.4.2. Understanding the remediation process The remediation process operates as follows: The MachineHealthCheck (MHC) controller detects that a node is unhealthy. The MHC notifies the bare metal machine controller, which requests to power off the unhealthy node. After the power is off, the node is deleted, which allows the cluster to reschedule the affected workload on other nodes. The bare metal machine controller requests to power on the node. After the node is up, the node re-registers itself with the cluster, resulting in the creation of a new node. After the node is recreated, the bare metal machine controller restores the annotations and labels that existed on the unhealthy node before its deletion.
Note If the power operations did not complete, the bare metal machine controller triggers the reprovisioning of the unhealthy node unless this is a control plane node or a node that was provisioned externally. 11.4.3. Creating a MachineHealthCheck resource for bare metal Prerequisites The OpenShift Container Platform is installed using installer-provisioned infrastructure (IPI). Access to Baseboard Management Controller (BMC) credentials (or BMC access to each node). Network access to the BMC interface of the unhealthy node. Procedure Create a healthcheck.yaml file that contains the definition of your machine health check. Apply the healthcheck.yaml file to your cluster using the following command: USD oc apply -f healthcheck.yaml Sample MachineHealthCheck resource for bare metal apiVersion: machine.openshift.io/v1beta1 kind: MachineHealthCheck metadata: name: example 1 namespace: openshift-machine-api annotations: machine.openshift.io/remediation-strategy: external-baremetal 2 spec: selector: matchLabels: machine.openshift.io/cluster-api-machine-role: <role> 3 machine.openshift.io/cluster-api-machine-type: <role> 4 machine.openshift.io/cluster-api-machineset: <cluster_name>-<label>-<zone> 5 unhealthyConditions: - type: "Ready" timeout: "300s" 6 status: "False" - type: "Ready" timeout: "300s" 7 status: "Unknown" maxUnhealthy: "40%" 8 nodeStartupTimeout: "10m" 9 1 Specify the name of the machine health check to deploy. 2 For bare metal clusters, you must include the machine.openshift.io/remediation-strategy: external-baremetal annotation in the annotations section to enable power-cycle remediation. With this remediation strategy, unhealthy hosts are rebooted instead of removed from the cluster. 3 4 Specify a label for the machine pool that you want to check. 5 Specify the machine set to track in <cluster_name>-<label>-<zone> format. For example, prod-node-us-east-1a . 6 7 Specify the timeout duration for the node condition. If the condition is met for the duration of the timeout, the machine will be remediated. Long timeouts can result in long periods of downtime for a workload on an unhealthy machine. 8 Specify the number of machines allowed to be concurrently remediated in the targeted pool. This can be set as a percentage or an integer. If the number of unhealthy machines exceeds the limit set by maxUnhealthy , remediation is not performed. 9 Specify the timeout duration that a machine health check must wait for a node to join the cluster before a machine is determined to be unhealthy. Note The matchLabels are examples only; you must map your machine groups based on your specific needs. Troubleshooting issues with power-based remediation To troubleshoot an issue with power-based remediation, verify the following: You have access to the BMC. The BMC is connected to the control plane node that is responsible for running the remediation task.
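As a focused illustration of the annotation described in the bare metal sections above, the following sketch shows only the metadata stanza that switches a MachineHealthCheck from deletion-based to power-based remediation; the resource name is a placeholder and the remaining fields are unchanged from the samples shown earlier.

apiVersion: machine.openshift.io/v1beta1
kind: MachineHealthCheck
metadata:
  name: example                                                      # placeholder name
  namespace: openshift-machine-api
  annotations:
    machine.openshift.io/remediation-strategy: external-baremetal    # enables power-cycle remediation
spec:
  # ... selector, unhealthyConditions, maxUnhealthy, and nodeStartupTimeout as in the samples above ...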
[ "oc get infrastructure cluster -o jsonpath='{.status.platform}'", "apiVersion: machine.openshift.io/v1beta1 kind: MachineHealthCheck metadata: name: example 1 namespace: openshift-machine-api spec: selector: matchLabels: machine.openshift.io/cluster-api-machine-role: <role> 2 machine.openshift.io/cluster-api-machine-type: <role> 3 machine.openshift.io/cluster-api-machineset: <cluster_name>-<label>-<zone> 4 unhealthyConditions: - type: \"Ready\" timeout: \"300s\" 5 status: \"False\" - type: \"Ready\" timeout: \"300s\" 6 status: \"Unknown\" maxUnhealthy: \"40%\" 7 nodeStartupTimeout: \"10m\" 8", "oc apply -f healthcheck.yml", "oc apply -f healthcheck.yaml", "apiVersion: machine.openshift.io/v1beta1 kind: MachineHealthCheck metadata: name: example 1 namespace: openshift-machine-api annotations: machine.openshift.io/remediation-strategy: external-baremetal 2 spec: selector: matchLabels: machine.openshift.io/cluster-api-machine-role: <role> 3 machine.openshift.io/cluster-api-machine-type: <role> 4 machine.openshift.io/cluster-api-machineset: <cluster_name>-<label>-<zone> 5 unhealthyConditions: - type: \"Ready\" timeout: \"300s\" 6 status: \"False\" - type: \"Ready\" timeout: \"300s\" 7 status: \"Unknown\" maxUnhealthy: \"40%\" 8 nodeStartupTimeout: \"10m\" 9" ]
https://docs.redhat.com/en/documentation/openshift_container_platform/4.10/html/machine_management/deploying-machine-health-checks
function::isdigit
function::isdigit Name function::isdigit - Checks for a digit. Synopsis Arguments str String to check. General Syntax isdigit:long(str:string) Description Checks for a digit (0 through 9) as the first character of a string. Returns non-zero if true, and a zero if false.
[ "function isdigit:long(str:string)" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/systemtap_tapset_reference/api-isdigit
Chapter 2. Software
Chapter 2. Software The Red Hat OpenStack Platform (RHOSP) IaaS cloud is implemented as a collection of interacting services that control compute, storage, and networking resources. To manage the cloud, administrators can use a web-based dashboard or command-line clients to control, provision, and automate OpenStack resources. RHOSP also has an extensive API that is available to all cloud users. The following diagram provides a high-level overview of the RHOSP core services and their relationship with each other. Figure 2.1. RHOSP core services and their relationships The following table describes each component in the diagram and provides links for the component documentation section. Table 2.1. Core services Service Code Description 1 Dashboard horizon Web browser-based dashboard that you use to manage OpenStack services. 2 Identity keystone Centralized service for authentication and authorization of OpenStack services and for managing users, projects, and roles. 3 Networking neutron Provides connectivity between the interfaces of OpenStack services. 4 Block Storage cinder Manages persistent block storage volumes for virtual machines. 5 Compute nova Manages and provisions virtual machines running on hypervisor nodes. 6 Shared File Systems manila Provisions shared file systems that multiple compute instances, bare metal nodes, or containers can consume. 7 Image glance Registry service that you use to store resources such as virtual machine images and volume snapshots. 8 Object Storage swift Allows users to store and retrieve files and arbitrary data. 9 Telemetry ceilometer Provides measurements of cloud resources. 10 Load-balancing octavia Provides load balancing services for the cloud. 11 Orchestration heat Template-based orchestration engine that supports automatic creation of resource stacks. 12 Key Manager barbican REST API designed for the secure storage, provisioning and management of secrets. Each OpenStack service contains a functional group of Linux services and other components. 2.1. Components This section describes each of the OpenStack components: OpenStack Dashboard service (horizon) OpenStack Dashboard service provides a graphical user interface for users and administrators to create and launch instances, manage networking, and set access control. The Dashboard service provides the Project, Admin, and Settings default dashboards. The modular design enables the dashboard to interface with other products such as billing, monitoring, and additional management tools. OpenStack Identity service (keystone) OpenStack Identity service provides user authentication and authorization to all OpenStack components. Identity service supports multiple authentication mechanisms, including user name and password credentials, token-based systems, and AWS-style log-ins. OpenStack Networking service (neutron) OpenStack Networking service handles creation and management of a virtual networking infrastructure in the OpenStack cloud. Infrastructure elements include networks, subnets, and routers. OpenStack Block Storage service (cinder) OpenStack Block Storage service provides persistent block storage management for virtual hard drives. With Block Storage, users can create and delete block devices, and manage attachment of block devices to servers. OpenStack Compute service (nova) OpenStack Compute service serves as the core of the RHOSP cloud by providing and managing virtual machine instances on demand. 
The Compute service abstracts the underlying hardware and interacts with other RHOSP services to create and provision instances in a RHOSP cloud. OpenStack Shared File Systems service (manila) OpenStack Shared File Systems service provides shared file systems that Compute instances can use. The basic resources offered by the Shared File Systems are shares, snapshots, and share networks. OpenStack Image service (glance) OpenStack Image service is a registry for virtual disk images. Users can add new images or take a snapshot of an existing server for immediate storage. You can use the snapshots for backup or as templates for new servers. OpenStack Object Storage service (swift) Object Storage service provides an HTTP-accessible storage system for large amounts of data, including static entities such as videos, images, email messages, files, or VM images. Objects are stored as binaries on the underlying file system with metadata stored in the extended attributes of each file. OpenStack Telemetry service (ceilometer) OpenStack Telemetry service provides user-level usage data for RHOSP-based clouds. You can use the data for customer billing, system monitoring, or alerts. Telemetry can collect data from notifications sent by existing OpenStack components such as Compute usage events, or by polling RHOSP infrastructure resources such as libvirt. OpenStack Load-balancing service (octavia) OpenStack Load-balancing service provides a Load Balancing-as-a-Service (LBaaS) implementation that supports multiple provider drivers. The reference provider driver (Amphora provider driver) is an open-source, scalable, and highly available load balancing provider. It accomplishes its delivery of load balancing services by managing a fleet of virtual machines, collectively known as amphorae, which it creates on demand. OpenStack Orchestration service (heat) OpenStack Orchestration service provides templates to create and manage cloud resources such as storage, networking, instances, or applications. Use templates to create stacks, which are collections of resources. OpenStack Bare Metal Provisioning service (ironic) OpenStack Bare Metal Provisioning service supports physical machines for a variety of hardware vendors with hardware-specific drivers. Bare Metal Provisioning integrates with the Compute service to provision physical machines in the same way that virtual machines are provisioned, and provides a solution for the bare-metal-to-trusted-project use case. OpenStack DNS-as-a-Service (designate) Note This feature is available in this release as a Technology Preview, and therefore is not fully supported by Red Hat. It should only be used for testing, and should not be deployed in a production environment. For more information about Technology Preview features, see the Scope of Coverage Details . DNSaaS includes a REST API for domain and record management. It is multi-tenanted and integrates with OpenStack Identity Service (keystone) for authentication. DNSaaS includes a framework for integration with Compute (nova) and OpenStack Networking (neutron) notifications, allowing auto-generated DNS records. DNSaaS includes integration support for PowerDNS and Bind9. OpenStack Key Manager service (barbican) OpenStack Key Manager Service is a REST API designed for the secure storage, provisioning and management of secrets such as passwords, encryption keys, and X.509 Certificates. This includes keying material such as Symmetric Keys, Asymmetric Keys, Certificates, and raw binary data.
Red Hat OpenStack Platform director Red Hat OpenStack Platform (RHOSP) director is a toolset for installing and managing a complete RHOSP environment. It is based primarily on the OpenStack project TripleO, which is an abbreviation for OpenStack-On-OpenStack. This project uses OpenStack components to install a fully operational RHOSP environment. It includes new OpenStack components that provision and control bare metal systems to use as OpenStack nodes. It provides a simple method for installing a complete RHOSP environment. RHOSP director uses two main concepts: an undercloud and an overcloud. The undercloud installs and configures the overcloud. OpenStack High Availability To keep your Red Hat OpenStack Platform (RHOSP) environment up and running efficiently, use RHOSP director to create configurations that offer high availability and load balancing across all major services in RHOSP. OpenStack Operational Tools Red Hat OpenStack Platform comes with an optional suite of tools, such as Centralized Logging, Availability Monitoring, and Performance Monitoring. These tools help you maintain your OpenStack environment. 2.2. Integration You can integrate Red Hat OpenStack Platform (RHOSP) with the following third-party software - Tested and Approved Software 2.3. Installation summary Red Hat supports the following methods to install Red Hat OpenStack Platform (RHOSP): Red Hat OpenStack Platform director : RHOSP director is recommended for enterprise deployments. RHOSP director is a toolset for installing and managing a complete RHOSP environment. It is based primarily on the OpenStack project TripleO, which is an abbreviation for "OpenStack-On-OpenStack". This project takes advantage of OpenStack components to install a fully operational RHOSP environment. It includes new OpenStack components that provision and control bare metal systems to use as OpenStack nodes. It provides a simple method for installing a complete RHOSP environment. RHOSP director uses two main concepts: an undercloud and an overcloud. The undercloud installs and configures the overcloud. For more information, see Red Hat OpenStack Platform Director Installation and Usage . packstack : Packstack is an OpenStack deployment that consists of a public network and a private network on one machine, hosting one CirrOS-image instance, with an attached storage volume. Installed OpenStack services include: Block Storage, Compute, Dashboard, Identity, Image, Networking, Object Storage, and Telemetry. Packstack is a command-line utility that rapidly deploys OpenStack. Note Packstack deployments are intended only for POC-type testing environments and are not suitable for production. By default, the public network is only routable from the OpenStack host. For more information, see Evaluating OpenStack: Single-Node Deployment . See Installing and Managing Red Hat OpenStack Platform for a comparison of these installation options. 2.4. Subscriptions To install Red Hat OpenStack Platform (RHOSP), you must register all systems in the OpenStack environment with Red Hat Subscription Manager, and subscribe to the required channels. For more information about the channels and repositories to deploy RHOSP, see the following guides: Requirements for installing using director in the Director Installation and Usage guide. Requirements for installing a single-node POC deployment
null
https://docs.redhat.com/en/documentation/red_hat_openstack_platform/16.2/html/product_guide/ch-rhosp-software
Chapter 6. Kafka Connect configuration properties
Chapter 6. Kafka Connect configuration properties config.storage.topic Type: string Importance: high The name of the Kafka topic where connector configurations are stored. group.id Type: string Importance: high A unique string that identifies the Connect cluster group this worker belongs to. key.converter Type: class Importance: high Converter class used to convert between Kafka Connect format and the serialized form that is written to Kafka. This controls the format of the keys in messages written to or read from Kafka, and since this is independent of connectors it allows any connector to work with any serialization format. Examples of common formats include JSON and Avro. offset.storage.topic Type: string Importance: high The name of the Kafka topic where source connector offsets are stored. status.storage.topic Type: string Importance: high The name of the Kafka topic where connector and task status are stored. value.converter Type: class Importance: high Converter class used to convert between Kafka Connect format and the serialized form that is written to Kafka. This controls the format of the values in messages written to or read from Kafka, and since this is independent of connectors it allows any connector to work with any serialization format. Examples of common formats include JSON and Avro. bootstrap.servers Type: list Default: localhost:9092 Importance: high A list of host/port pairs to use for establishing the initial connection to the Kafka cluster. The client will make use of all servers irrespective of which servers are specified here for bootstrapping-this list only impacts the initial hosts used to discover the full set of servers. This list should be in the form host1:port1,host2:port2,... . Since these servers are just used for the initial connection to discover the full cluster membership (which may change dynamically), this list need not contain the full set of servers (you may want more than one, though, in case a server is down). exactly.once.source.support Type: string Default: disabled Valid Values: (case insensitive) [DISABLED, ENABLED, PREPARING] Importance: high Whether to enable exactly-once support for source connectors in the cluster by using transactions to write source records and their source offsets, and by proactively fencing out old task generations before bringing up new ones. To enable exactly-once source support on a new cluster, set this property to 'enabled'. To enable support on an existing cluster, first set to 'preparing' on every worker in the cluster, then set to 'enabled'. A rolling upgrade may be used for both changes. For more information on this feature, see the exactly-once source support documentation . heartbeat.interval.ms Type: int Default: 3000 (3 seconds) Importance: high The expected time between heartbeats to the group coordinator when using Kafka's group management facilities. Heartbeats are used to ensure that the worker's session stays active and to facilitate rebalancing when new members join or leave the group. The value must be set lower than session.timeout.ms , but typically should be set no higher than 1/3 of that value. It can be adjusted even lower to control the expected time for normal rebalances. rebalance.timeout.ms Type: int Default: 60000 (1 minute) Importance: high The maximum allowed time for each worker to join the group once a rebalance has begun. This is basically a limit on the amount of time needed for all tasks to flush any pending data and commit offsets. 
If the timeout is exceeded, then the worker will be removed from the group, which will cause offset commit failures. session.timeout.ms Type: int Default: 10000 (10 seconds) Importance: high The timeout used to detect worker failures. The worker sends periodic heartbeats to indicate its liveness to the broker. If no heartbeats are received by the broker before the expiration of this session timeout, then the broker will remove the worker from the group and initiate a rebalance. Note that the value must be in the allowable range as configured in the broker configuration by group.min.session.timeout.ms and group.max.session.timeout.ms . ssl.key.password Type: password Default: null Importance: high The password of the private key in the key store file or the PEM key specified in 'ssl.keystore.key'. ssl.keystore.certificate.chain Type: password Default: null Importance: high Certificate chain in the format specified by 'ssl.keystore.type'. Default SSL engine factory supports only PEM format with a list of X.509 certificates. ssl.keystore.key Type: password Default: null Importance: high Private key in the format specified by 'ssl.keystore.type'. Default SSL engine factory supports only PEM format with PKCS#8 keys. If the key is encrypted, key password must be specified using 'ssl.key.password'. ssl.keystore.location Type: string Default: null Importance: high The location of the key store file. This is optional for client and can be used for two-way authentication for client. ssl.keystore.password Type: password Default: null Importance: high The store password for the key store file. This is optional for client and only needed if 'ssl.keystore.location' is configured. Key store password is not supported for PEM format. ssl.truststore.certificates Type: password Default: null Importance: high Trusted certificates in the format specified by 'ssl.truststore.type'. Default SSL engine factory supports only PEM format with X.509 certificates. ssl.truststore.location Type: string Default: null Importance: high The location of the trust store file. ssl.truststore.password Type: password Default: null Importance: high The password for the trust store file. If a password is not set, trust store file configured will still be used, but integrity checking is disabled. Trust store password is not supported for PEM format. client.dns.lookup Type: string Default: use_all_dns_ips Valid Values: [use_all_dns_ips, resolve_canonical_bootstrap_servers_only] Importance: medium Controls how the client uses DNS lookups. If set to use_all_dns_ips , connect to each returned IP address in sequence until a successful connection is established. After a disconnection, the IP is used. Once all IPs have been used once, the client resolves the IP(s) from the hostname again (both the JVM and the OS cache DNS name lookups, however). If set to resolve_canonical_bootstrap_servers_only , resolve each bootstrap address into a list of canonical names. After the bootstrap phase, this behaves the same as use_all_dns_ips . connections.max.idle.ms Type: long Default: 540000 (9 minutes) Importance: medium Close idle connections after the number of milliseconds specified by this config. connector.client.config.override.policy Type: string Default: All Importance: medium Class name or alias of implementation of ConnectorClientConfigOverridePolicy . Defines what client configurations can be overridden by the connector. The default implementation is All , meaning connector configurations can override all client properties. 
The other possible policies in the framework include None to disallow connectors from overriding client properties, and Principal to allow connectors to override only client principals. receive.buffer.bytes Type: int Default: 32768 (32 kibibytes) Valid Values: [-1,... ] Importance: medium The size of the TCP receive buffer (SO_RCVBUF) to use when reading data. If the value is -1, the OS default will be used. request.timeout.ms Type: int Default: 40000 (40 seconds) Valid Values: [0,... ] Importance: medium The configuration controls the maximum amount of time the client will wait for the response of a request. If the response is not received before the timeout elapses the client will resend the request if necessary or fail the request if retries are exhausted. sasl.client.callback.handler.class Type: class Default: null Importance: medium The fully qualified name of a SASL client callback handler class that implements the AuthenticateCallbackHandler interface. sasl.jaas.config Type: password Default: null Importance: medium JAAS login context parameters for SASL connections in the format used by JAAS configuration files. JAAS configuration file format is described here . The format for the value is: loginModuleClass controlFlag (optionName=optionValue)*; . For brokers, the config must be prefixed with listener prefix and SASL mechanism name in lower-case. For example, listener.name.sasl_ssl.scram-sha-256.sasl.jaas.config=com.example.ScramLoginModule required;. sasl.kerberos.service.name Type: string Default: null Importance: medium The Kerberos principal name that Kafka runs as. This can be defined either in Kafka's JAAS config or in Kafka's config. sasl.login.callback.handler.class Type: class Default: null Importance: medium The fully qualified name of a SASL login callback handler class that implements the AuthenticateCallbackHandler interface. For brokers, login callback handler config must be prefixed with listener prefix and SASL mechanism name in lower-case. For example, listener.name.sasl_ssl.scram-sha-256.sasl.login.callback.handler.class=com.example.CustomScramLoginCallbackHandler. sasl.login.class Type: class Default: null Importance: medium The fully qualified name of a class that implements the Login interface. For brokers, login config must be prefixed with listener prefix and SASL mechanism name in lower-case. For example, listener.name.sasl_ssl.scram-sha-256.sasl.login.class=com.example.CustomScramLogin. sasl.mechanism Type: string Default: GSSAPI Importance: medium SASL mechanism used for client connections. This may be any mechanism for which a security provider is available. GSSAPI is the default mechanism. sasl.oauthbearer.jwks.endpoint.url Type: string Default: null Importance: medium The OAuth/OIDC provider URL from which the provider's JWKS (JSON Web Key Set) can be retrieved. The URL can be HTTP(S)-based or file-based. If the URL is HTTP(S)-based, the JWKS data will be retrieved from the OAuth/OIDC provider via the configured URL on broker startup. All then-current keys will be cached on the broker for incoming requests. If an authentication request is received for a JWT that includes a "kid" header claim value that isn't yet in the cache, the JWKS endpoint will be queried again on demand. However, the broker polls the URL every sasl.oauthbearer.jwks.endpoint.refresh.ms milliseconds to refresh the cache with any forthcoming keys before any JWT requests that include them are received. 
If the URL is file-based, the broker will load the JWKS file from a configured location on startup. In the event that the JWT includes a "kid" header value that isn't in the JWKS file, the broker will reject the JWT and authentication will fail. sasl.oauthbearer.token.endpoint.url Type: string Default: null Importance: medium The URL for the OAuth/OIDC identity provider. If the URL is HTTP(S)-based, it is the issuer's token endpoint URL to which requests will be made to login based on the configuration in sasl.jaas.config. If the URL is file-based, it specifies a file containing an access token (in JWT serialized form) issued by the OAuth/OIDC identity provider to use for authorization. security.protocol Type: string Default: PLAINTEXT Valid Values: [PLAINTEXT, SSL, SASL_PLAINTEXT, SASL_SSL] Importance: medium Protocol used to communicate with brokers. Valid values are: PLAINTEXT, SSL, SASL_PLAINTEXT, SASL_SSL. send.buffer.bytes Type: int Default: 131072 (128 kibibytes) Valid Values: [-1,... ] Importance: medium The size of the TCP send buffer (SO_SNDBUF) to use when sending data. If the value is -1, the OS default will be used. ssl.enabled.protocols Type: list Default: TLSv1.2,TLSv1.3 Importance: medium The list of protocols enabled for SSL connections. The default is 'TLSv1.2,TLSv1.3' when running with Java 11 or newer, 'TLSv1.2' otherwise. With the default value for Java 11, clients and servers will prefer TLSv1.3 if both support it and fallback to TLSv1.2 otherwise (assuming both support at least TLSv1.2). This default should be fine for most cases. Also see the config documentation for ssl.protocol . ssl.keystore.type Type: string Default: JKS Importance: medium The file format of the key store file. This is optional for client. The values currently supported by the default ssl.engine.factory.class are [JKS, PKCS12, PEM]. ssl.protocol Type: string Default: TLSv1.3 Importance: medium The SSL protocol used to generate the SSLContext. The default is 'TLSv1.3' when running with Java 11 or newer, 'TLSv1.2' otherwise. This value should be fine for most use cases. Allowed values in recent JVMs are 'TLSv1.2' and 'TLSv1.3'. 'TLS', 'TLSv1.1', 'SSL', 'SSLv2' and 'SSLv3' may be supported in older JVMs, but their usage is discouraged due to known security vulnerabilities. With the default value for this config and 'ssl.enabled.protocols', clients will downgrade to 'TLSv1.2' if the server does not support 'TLSv1.3'. If this config is set to 'TLSv1.2', clients will not use 'TLSv1.3' even if it is one of the values in ssl.enabled.protocols and the server only supports 'TLSv1.3'. ssl.provider Type: string Default: null Importance: medium The name of the security provider used for SSL connections. Default value is the default security provider of the JVM. ssl.truststore.type Type: string Default: JKS Importance: medium The file format of the trust store file. The values currently supported by the default ssl.engine.factory.class are [JKS, PKCS12, PEM]. worker.sync.timeout.ms Type: int Default: 3000 (3 seconds) Importance: medium When the worker is out of sync with other workers and needs to resynchronize configurations, wait up to this amount of time before giving up, leaving the group, and waiting a backoff period before rejoining. worker.unsync.backoff.ms Type: int Default: 300000 (5 minutes) Importance: medium When the worker is out of sync with other workers and fails to catch up within worker.sync.timeout.ms, leave the Connect cluster for this long before rejoining. 
access.control.allow.methods Type: string Default: "" Importance: low Sets the methods supported for cross origin requests by setting the Access-Control-Allow-Methods header. The default value of the Access-Control-Allow-Methods header allows cross origin requests for GET, POST and HEAD. access.control.allow.origin Type: string Default: "" Importance: low Value to set the Access-Control-Allow-Origin header to for REST API requests.To enable cross origin access, set this to the domain of the application that should be permitted to access the API, or '*' to allow access from any domain. The default value only allows access from the domain of the REST API. admin.listeners Type: list Default: null Valid Values: List of comma-separated URLs, ex: http://localhost:8080,https://localhost:8443 . Importance: low List of comma-separated URIs the Admin REST API will listen on. The supported protocols are HTTP and HTTPS. An empty or blank string will disable this feature. The default behavior is to use the regular listener (specified by the 'listeners' property). auto.include.jmx.reporter Type: boolean Default: true Importance: low Deprecated. Whether to automatically include JmxReporter even if it's not listed in metric.reporters . This configuration will be removed in Kafka 4.0, users should instead include org.apache.kafka.common.metrics.JmxReporter in metric.reporters in order to enable the JmxReporter. client.id Type: string Default: "" Importance: low An id string to pass to the server when making requests. The purpose of this is to be able to track the source of requests beyond just ip/port by allowing a logical application name to be included in server-side request logging. config.providers Type: list Default: "" Importance: low Comma-separated names of ConfigProvider classes, loaded and used in the order specified. Implementing the interface ConfigProvider allows you to replace variable references in connector configurations, such as for externalized secrets. config.storage.replication.factor Type: short Default: 3 Valid Values: Positive number not larger than the number of brokers in the Kafka cluster, or -1 to use the broker's default Importance: low Replication factor used when creating the configuration storage topic. connect.protocol Type: string Default: sessioned Valid Values: [eager, compatible, sessioned] Importance: low Compatibility mode for Kafka Connect Protocol. header.converter Type: class Default: org.apache.kafka.connect.storage.SimpleHeaderConverter Importance: low HeaderConverter class used to convert between Kafka Connect format and the serialized form that is written to Kafka. This controls the format of the header values in messages written to or read from Kafka, and since this is independent of connectors it allows any connector to work with any serialization format. Examples of common formats include JSON and Avro. By default, the SimpleHeaderConverter is used to serialize header values to strings and deserialize them by inferring the schemas. inter.worker.key.generation.algorithm Type: string Default: HmacSHA256 Valid Values: Any KeyGenerator algorithm supported by the worker JVM Importance: low The algorithm to use for generating internal request keys. The algorithm 'HmacSHA256' will be used as a default on JVMs that support it; on other JVMs, no default is used and a value for this property must be manually specified in the worker config. inter.worker.key.size Type: int Default: null Importance: low The size of the key to use for signing internal requests, in bits. 
If null, the default key size for the key generation algorithm will be used. inter.worker.key.ttl.ms Type: int Default: 3600000 (1 hour) Valid Values: [0,... ,2147483647] Importance: low The TTL of generated session keys used for internal request validation (in milliseconds). inter.worker.signature.algorithm Type: string Default: HmacSHA256 Valid Values: Any MAC algorithm supported by the worker JVM Importance: low The algorithm used to sign internal requests. The algorithm 'HmacSHA256' will be used as a default on JVMs that support it; on other JVMs, no default is used and a value for this property must be manually specified in the worker config. inter.worker.verification.algorithms Type: list Default: HmacSHA256 Valid Values: A list of one or more MAC algorithms, each supported by the worker JVM Importance: low A list of permitted algorithms for verifying internal requests, which must include the algorithm used for the inter.worker.signature.algorithm property. The algorithm(s) '[HmacSHA256]' will be used as a default on JVMs that provide them; on other JVMs, no default is used and a value for this property must be manually specified in the worker config. listeners Type: list Default: http://:8083 Valid Values: List of comma-separated URLs, ex: http://localhost:8080,https://localhost:8443 . Importance: low List of comma-separated URIs the REST API will listen on. The supported protocols are HTTP and HTTPS. Specify hostname as 0.0.0.0 to bind to all interfaces. Leave hostname empty to bind to default interface. Examples of legal listener lists: HTTP://myhost:8083,HTTPS://myhost:8084. metadata.max.age.ms Type: long Default: 300000 (5 minutes) Valid Values: [0,... ] Importance: low The period of time in milliseconds after which we force a refresh of metadata even if we haven't seen any partition leadership changes to proactively discover any new brokers or partitions. metric.reporters Type: list Default: "" Importance: low A list of classes to use as metrics reporters. Implementing the org.apache.kafka.common.metrics.MetricsReporter interface allows plugging in classes that will be notified of new metric creation. The JmxReporter is always included to register JMX statistics. metrics.num.samples Type: int Default: 2 Valid Values: [1,... ] Importance: low The number of samples maintained to compute metrics. metrics.recording.level Type: string Default: INFO Valid Values: [INFO, DEBUG] Importance: low The highest recording level for metrics. metrics.sample.window.ms Type: long Default: 30000 (30 seconds) Valid Values: [0,... ] Importance: low The window of time a metrics sample is computed over. offset.flush.interval.ms Type: long Default: 60000 (1 minute) Importance: low Interval at which to try committing offsets for tasks. offset.flush.timeout.ms Type: long Default: 5000 (5 seconds) Importance: low Maximum number of milliseconds to wait for records to flush and partition offset data to be committed to offset storage before cancelling the process and restoring the offset data to be committed in a future attempt. This property has no effect for source connectors running with exactly-once support. offset.storage.partitions Type: int Default: 25 Valid Values: Positive number, or -1 to use the broker's default Importance: low The number of partitions used when creating the offset storage topic.
offset.storage.replication.factor Type: short Default: 3 Valid Values: Positive number not larger than the number of brokers in the Kafka cluster, or -1 to use the broker's default Importance: low Replication factor used when creating the offset storage topic. plugin.path Type: list Default: null Importance: low List of paths separated by commas (,) that contain plugins (connectors, converters, transformations). The list should consist of top level directories that include any combination of: a) directories immediately containing jars with plugins and their dependencies b) uber-jars with plugins and their dependencies c) directories immediately containing the package directory structure of classes of plugins and their dependencies Note: symlinks will be followed to discover dependencies or plugins. Examples: plugin.path=/usr/local/share/java,/usr/local/share/kafka/plugins,/opt/connectors Do not use config provider variables in this property, since the raw path is used by the worker's scanner before config providers are initialized and used to replace variables. reconnect.backoff.max.ms Type: long Default: 1000 (1 second) Valid Values: [0,... ] Importance: low The maximum amount of time in milliseconds to wait when reconnecting to a broker that has repeatedly failed to connect. If provided, the backoff per host will increase exponentially for each consecutive connection failure, up to this maximum. After calculating the backoff increase, 20% random jitter is added to avoid connection storms. reconnect.backoff.ms Type: long Default: 50 Valid Values: [0,... ] Importance: low The base amount of time to wait before attempting to reconnect to a given host. This avoids repeatedly connecting to a host in a tight loop. This backoff applies to all connection attempts by the client to a broker. response.http.headers.config Type: string Default: "" Valid Values: Comma-separated header rules, where each header rule is of the form '[action] [header name]:[header value]' and optionally surrounded by double quotes if any part of a header rule contains a comma Importance: low Rules for REST API HTTP response headers. rest.advertised.host.name Type: string Default: null Importance: low If this is set, this is the hostname that will be given out to other workers to connect to. rest.advertised.listener Type: string Default: null Importance: low Sets the advertised listener (HTTP or HTTPS) which will be given to other workers to use. rest.advertised.port Type: int Default: null Importance: low If this is set, this is the port that will be given out to other workers to connect to. rest.extension.classes Type: list Default: "" Importance: low Comma-separated names of ConnectRestExtension classes, loaded and called in the order specified. Implementing the interface ConnectRestExtension allows you to inject into Connect's REST API user defined resources like filters. Typically used to add custom capability like logging, security, etc. retry.backoff.ms Type: long Default: 100 Valid Values: [0,... ] Importance: low The amount of time to wait before attempting to retry a failed request to a given topic partition. This avoids repeatedly sending requests in a tight loop under some failure scenarios. sasl.kerberos.kinit.cmd Type: string Default: /usr/bin/kinit Importance: low Kerberos kinit command path. sasl.kerberos.min.time.before.relogin Type: long Default: 60000 Importance: low Login thread sleep time between refresh attempts. 
sasl.kerberos.ticket.renew.jitter Type: double Default: 0.05 Importance: low Percentage of random jitter added to the renewal time. sasl.kerberos.ticket.renew.window.factor Type: double Default: 0.8 Importance: low Login thread will sleep until the specified window factor of time from last refresh to ticket's expiry has been reached, at which time it will try to renew the ticket. sasl.login.connect.timeout.ms Type: int Default: null Importance: low The (optional) value in milliseconds for the external authentication provider connection timeout. Currently applies only to OAUTHBEARER. sasl.login.read.timeout.ms Type: int Default: null Importance: low The (optional) value in milliseconds for the external authentication provider read timeout. Currently applies only to OAUTHBEARER. sasl.login.refresh.buffer.seconds Type: short Default: 300 Valid Values: [0,... ,3600] Importance: low The amount of buffer time before credential expiration to maintain when refreshing a credential, in seconds. If a refresh would otherwise occur closer to expiration than the number of buffer seconds then the refresh will be moved up to maintain as much of the buffer time as possible. Legal values are between 0 and 3600 (1 hour); a default value of 300 (5 minutes) is used if no value is specified. This value and sasl.login.refresh.min.period.seconds are both ignored if their sum exceeds the remaining lifetime of a credential. Currently applies only to OAUTHBEARER. sasl.login.refresh.min.period.seconds Type: short Default: 60 Valid Values: [0,... ,900] Importance: low The desired minimum time for the login refresh thread to wait before refreshing a credential, in seconds. Legal values are between 0 and 900 (15 minutes); a default value of 60 (1 minute) is used if no value is specified. This value and sasl.login.refresh.buffer.seconds are both ignored if their sum exceeds the remaining lifetime of a credential. Currently applies only to OAUTHBEARER. sasl.login.refresh.window.factor Type: double Default: 0.8 Valid Values: [0.5,... ,1.0] Importance: low Login refresh thread will sleep until the specified window factor relative to the credential's lifetime has been reached, at which time it will try to refresh the credential. Legal values are between 0.5 (50%) and 1.0 (100%) inclusive; a default value of 0.8 (80%) is used if no value is specified. Currently applies only to OAUTHBEARER. sasl.login.refresh.window.jitter Type: double Default: 0.05 Valid Values: [0.0,... ,0.25] Importance: low The maximum amount of random jitter relative to the credential's lifetime that is added to the login refresh thread's sleep time. Legal values are between 0 and 0.25 (25%) inclusive; a default value of 0.05 (5%) is used if no value is specified. Currently applies only to OAUTHBEARER. sasl.login.retry.backoff.max.ms Type: long Default: 10000 (10 seconds) Importance: low The (optional) value in milliseconds for the maximum wait between login attempts to the external authentication provider. Login uses an exponential backoff algorithm with an initial wait based on the sasl.login.retry.backoff.ms setting and will double in wait length between attempts up to a maximum wait length specified by the sasl.login.retry.backoff.max.ms setting. Currently applies only to OAUTHBEARER. sasl.login.retry.backoff.ms Type: long Default: 100 Importance: low The (optional) value in milliseconds for the initial wait between login attempts to the external authentication provider. 
Login uses an exponential backoff algorithm with an initial wait based on the sasl.login.retry.backoff.ms setting and will double in wait length between attempts up to a maximum wait length specified by the sasl.login.retry.backoff.max.ms setting. Currently applies only to OAUTHBEARER. sasl.oauthbearer.clock.skew.seconds Type: int Default: 30 Importance: low The (optional) value in seconds to allow for differences between the time of the OAuth/OIDC identity provider and the broker. sasl.oauthbearer.expected.audience Type: list Default: null Importance: low The (optional) comma-delimited setting for the broker to use to verify that the JWT was issued for one of the expected audiences. The JWT will be inspected for the standard OAuth "aud" claim and if this value is set, the broker will match the value from JWT's "aud" claim to see if there is an exact match. If there is no match, the broker will reject the JWT and authentication will fail. sasl.oauthbearer.expected.issuer Type: string Default: null Importance: low The (optional) setting for the broker to use to verify that the JWT was created by the expected issuer. The JWT will be inspected for the standard OAuth "iss" claim and if this value is set, the broker will match it exactly against what is in the JWT's "iss" claim. If there is no match, the broker will reject the JWT and authentication will fail. sasl.oauthbearer.jwks.endpoint.refresh.ms Type: long Default: 3600000 (1 hour) Importance: low The (optional) value in milliseconds for the broker to wait between refreshing its JWKS (JSON Web Key Set) cache that contains the keys to verify the signature of the JWT. sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms Type: long Default: 10000 (10 seconds) Importance: low The (optional) value in milliseconds for the maximum wait between attempts to retrieve the JWKS (JSON Web Key Set) from the external authentication provider. JWKS retrieval uses an exponential backoff algorithm with an initial wait based on the sasl.oauthbearer.jwks.endpoint.retry.backoff.ms setting and will double in wait length between attempts up to a maximum wait length specified by the sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms setting. sasl.oauthbearer.jwks.endpoint.retry.backoff.ms Type: long Default: 100 Importance: low The (optional) value in milliseconds for the initial wait between JWKS (JSON Web Key Set) retrieval attempts from the external authentication provider. JWKS retrieval uses an exponential backoff algorithm with an initial wait based on the sasl.oauthbearer.jwks.endpoint.retry.backoff.ms setting and will double in wait length between attempts up to a maximum wait length specified by the sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms setting. sasl.oauthbearer.scope.claim.name Type: string Default: scope Importance: low The OAuth claim for the scope is often named "scope", but this (optional) setting can provide a different name to use for the scope included in the JWT payload's claims if the OAuth/OIDC provider uses a different name for that claim. sasl.oauthbearer.sub.claim.name Type: string Default: sub Importance: low The OAuth claim for the subject is often named "sub", but this (optional) setting can provide a different name to use for the subject included in the JWT payload's claims if the OAuth/OIDC provider uses a different name for that claim. scheduled.rebalance.max.delay.ms Type: int Default: 300000 (5 minutes) Valid Values: [0,... 
,2147483647] Importance: low The maximum delay that is scheduled in order to wait for the return of one or more departed workers before rebalancing and reassigning their connectors and tasks to the group. During this period the connectors and tasks of the departed workers remain unassigned. socket.connection.setup.timeout.max.ms Type: long Default: 30000 (30 seconds) Valid Values: [0,... ] Importance: low The maximum amount of time the client will wait for the socket connection to be established. The connection setup timeout will increase exponentially for each consecutive connection failure up to this maximum. To avoid connection storms, a randomization factor of 0.2 will be applied to the timeout resulting in a random range between 20% below and 20% above the computed value. socket.connection.setup.timeout.ms Type: long Default: 10000 (10 seconds) Valid Values: [0,... ] Importance: low The amount of time the client will wait for the socket connection to be established. If the connection is not built before the timeout elapses, clients will close the socket channel. ssl.cipher.suites Type: list Default: null Importance: low A list of cipher suites. This is a named combination of authentication, encryption, MAC and key exchange algorithm used to negotiate the security settings for a network connection using TLS or SSL network protocol. By default all the available cipher suites are supported. ssl.client.auth Type: string Default: none Valid Values: [required, requested, none] Importance: low Configures kafka broker to request client authentication. The following settings are common: ssl.client.auth=required If set to required client authentication is required. ssl.client.auth=requested This means client authentication is optional. unlike required, if this option is set client can choose not to provide authentication information about itself ssl.client.auth=none This means client authentication is not needed. ssl.endpoint.identification.algorithm Type: string Default: https Importance: low The endpoint identification algorithm to validate server hostname using server certificate. ssl.engine.factory.class Type: class Default: null Importance: low The class of type org.apache.kafka.common.security.auth.SslEngineFactory to provide SSLEngine objects. Default value is org.apache.kafka.common.security.ssl.DefaultSslEngineFactory. ssl.keymanager.algorithm Type: string Default: SunX509 Importance: low The algorithm used by key manager factory for SSL connections. Default value is the key manager factory algorithm configured for the Java Virtual Machine. ssl.secure.random.implementation Type: string Default: null Importance: low The SecureRandom PRNG implementation to use for SSL cryptography operations. ssl.trustmanager.algorithm Type: string Default: PKIX Importance: low The algorithm used by trust manager factory for SSL connections. Default value is the trust manager factory algorithm configured for the Java Virtual Machine. status.storage.partitions Type: int Default: 5 Valid Values: Positive number, or -1 to use the broker's default Importance: low The number of partitions used when creating the status storage topic. status.storage.replication.factor Type: short Default: 3 Valid Values: Positive number not larger than the number of brokers in the Kafka cluster, or -1 to use the broker's default Importance: low Replication factor used when creating the status storage topic. 
task.shutdown.graceful.timeout.ms Type: long Default: 5000 (5 seconds) Importance: low Amount of time to wait for tasks to shut down gracefully. This is the total amount of time, not per task. All tasks have shutdown triggered, and then they are waited on sequentially. topic.creation.enable Type: boolean Default: true Importance: low Whether to allow automatic creation of topics used by source connectors, when source connectors are configured with topic.creation.* properties. Each task will use an admin client to create its topics and will not depend on the Kafka brokers to create topics automatically. topic.tracking.allow.reset Type: boolean Default: true Importance: low If set to true, it allows user requests to reset the set of active topics per connector. topic.tracking.enable Type: boolean Default: true Importance: low Enable tracking the set of active topics per connector during runtime.
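To illustrate how several of these worker settings fit together, the following is a minimal sketch of a distributed Connect worker properties file. The bootstrap address, topic names, and plugin path are placeholders to adapt to your environment, and the numeric values simply restate the defaults described above; this is not a complete or recommended configuration.
# Minimal distributed worker configuration sketch (placeholder values)
bootstrap.servers=my-cluster-kafka-bootstrap:9092
group.id=connect-cluster
key.converter=org.apache.kafka.connect.json.JsonConverter
value.converter=org.apache.kafka.connect.json.JsonConverter
# REST API endpoint (listeners property described above)
listeners=HTTP://0.0.0.0:8083
# Internal topics for configs, offsets, and status, with the partition and
# replication settings described above
config.storage.topic=connect-cluster-configs
config.storage.replication.factor=3
offset.storage.topic=connect-cluster-offsets
offset.storage.partitions=25
offset.storage.replication.factor=3
status.storage.topic=connect-cluster-status
status.storage.partitions=5
status.storage.replication.factor=3
# Offset commit behaviour
offset.flush.interval.ms=60000
offset.flush.timeout.ms=5000
# Plugin discovery; config provider variables must not be used here
plugin.path=/opt/kafka/plugins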
null
https://docs.redhat.com/en/documentation/red_hat_streams_for_apache_kafka/2.5/html/kafka_configuration_properties/kafka-connect-configuration-properties-str
Chapter 8. Backing OpenShift Container Platform applications with OpenShift Data Foundation
Chapter 8. Backing OpenShift Container Platform applications with OpenShift Data Foundation You cannot directly install OpenShift Data Foundation during the OpenShift Container Platform installation. However, you can install OpenShift Data Foundation on an existing OpenShift Container Platform by using the Operator Hub and then configure the OpenShift Container Platform applications to be backed by OpenShift Data Foundation. Prerequisites OpenShift Container Platform is installed and you have administrative access to the OpenShift Web Console. OpenShift Data Foundation is installed and running in the openshift-storage namespace. Procedure In the OpenShift Web Console, perform one of the following: Click Workloads > Deployments. In the Deployments page, you can do one of the following: Select any existing deployment and click the Add Storage option from the Action menu (...). Create a new deployment and then add storage. Click Create Deployment to create a new deployment. Edit the YAML based on your requirement to create a deployment. Click Create. Select Add Storage from the Actions drop-down menu on the top right of the page. Click Workloads > Deployment Configs. In the Deployment Configs page, you can do one of the following: Select any existing deployment and click the Add Storage option from the Action menu (...). Create a new deployment and then add storage. Click Create Deployment Config to create a new deployment. Edit the YAML based on your requirement to create a deployment. Click Create. Select Add Storage from the Actions drop-down menu on the top right of the page. In the Add Storage page, you can choose one of the following options: Click the Use existing claim option and select a suitable PVC from the drop-down list. Click the Create new claim option. Select the appropriate CephFS or RBD storage class from the Storage Class drop-down list. Provide a name for the Persistent Volume Claim. Select ReadWriteOnce (RWO) or ReadWriteMany (RWX) access mode. Note ReadOnlyMany (ROX) is deactivated as it is not supported. Select the size of the desired storage capacity. Note You can expand the block PVs but cannot reduce the storage capacity after the creation of the Persistent Volume Claim. Specify the mount path and subpath (if required) for the mount path volume inside the container. Click Save. Verification steps Depending on your configuration, perform one of the following: Click Workloads > Deployments. Click Workloads > Deployment Configs. Set the Project as required. Click the deployment for which you added storage to display the deployment details. Scroll down to Volumes and verify that your deployment has a Type that matches the Persistent Volume Claim that you assigned. Click the Persistent Volume Claim name and verify the storage class name in the Persistent Volume Claim Overview page.
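Behind the Add Storage dialog, the result is a Persistent Volume Claim bound to an OpenShift Data Foundation storage class and a volume mount in the workload. The following is a minimal sketch of the kind of objects involved; the claim name, namespace, mount path, and the storage class name (ocs-storagecluster-ceph-rbd is shown as a typical RBD class name) are placeholders, so verify the class names and the exact fields available in your cluster.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: my-app-pvc            # hypothetical claim name
  namespace: my-app           # hypothetical namespace
spec:
  accessModes:
    - ReadWriteOnce           # RWO; CephFS classes also support ReadWriteMany (RWX)
  resources:
    requests:
      storage: 10Gi
  storageClassName: ocs-storagecluster-ceph-rbd   # typical ODF RBD class; check your cluster
---
# Fragment of the Deployment spec that the dialog effectively produces: a
# volume backed by the claim and a mount path inside the container.
spec:
  template:
    spec:
      containers:
        - name: my-app
          volumeMounts:
            - name: my-app-storage
              mountPath: /var/lib/data
      volumes:
        - name: my-app-storage
          persistentVolumeClaim:
            claimName: my-app-pvc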
null
https://docs.redhat.com/en/documentation/red_hat_openshift_data_foundation/4.14/html/deploying_and_managing_openshift_data_foundation_using_red_hat_openstack_platform/backing-openshift-container-platform-applications-with-openshift-data-foundation_osp
Chapter 5. Configuring the Network Observability Operator
Chapter 5. Configuring the Network Observability Operator You can update the FlowCollector API resource to configure the Network Observability Operator and its managed components. The FlowCollector is explicitly created during installation. Since this resource operates cluster-wide, only a single FlowCollector is allowed, and it must be named cluster . For more information, see the FlowCollector API reference . 5.1. View the FlowCollector resource You can view and edit YAML directly in the OpenShift Container Platform web console. Procedure In the web console, navigate to Operators Installed Operators . Under the Provided APIs heading for the NetObserv Operator , select Flow Collector . Select cluster then select the YAML tab. There, you can modify the FlowCollector resource to configure the Network Observability operator. The following example shows a sample FlowCollector resource for OpenShift Container Platform Network Observability operator: Sample FlowCollector resource apiVersion: flows.netobserv.io/v1beta2 kind: FlowCollector metadata: name: cluster spec: namespace: netobserv deploymentModel: Direct agent: type: eBPF 1 ebpf: sampling: 50 2 logLevel: info privileged: false resources: requests: memory: 50Mi cpu: 100m limits: memory: 800Mi processor: 3 logLevel: info resources: requests: memory: 100Mi cpu: 100m limits: memory: 800Mi logTypes: Flows advanced: conversationEndTimeout: 10s conversationHeartbeatInterval: 30s loki: 4 mode: LokiStack 5 consolePlugin: register: true logLevel: info portNaming: enable: true portNames: "3100": loki quickFilters: 6 - name: Applications filter: src_namespace!: 'openshift-,netobserv' dst_namespace!: 'openshift-,netobserv' default: true - name: Infrastructure filter: src_namespace: 'openshift-,netobserv' dst_namespace: 'openshift-,netobserv' - name: Pods network filter: src_kind: 'Pod' dst_kind: 'Pod' default: true - name: Services network filter: dst_kind: 'Service' 1 The Agent specification, spec.agent.type , must be EBPF . eBPF is the only OpenShift Container Platform supported option. 2 You can set the Sampling specification, spec.agent.ebpf.sampling , to manage resources. Lower sampling values might consume a large amount of computational, memory and storage resources. You can mitigate this by specifying a sampling ratio value. A value of 100 means 1 flow every 100 is sampled. A value of 0 or 1 means all flows are captured. The lower the value, the increase in returned flows and the accuracy of derived metrics. By default, eBPF sampling is set to a value of 50, so 1 flow every 50 is sampled. Note that more sampled flows also means more storage needed. It is recommend to start with default values and refine empirically, to determine which setting your cluster can manage. 3 The Processor specification spec.processor. can be set to enable conversation tracking. When enabled, conversation events are queryable in the web console. The spec.processor.logTypes value is Flows . The spec.processor.advanced values are Conversations , EndedConversations , or ALL . Storage requirements are highest for All and lowest for EndedConversations . 4 The Loki specification, spec.loki , specifies the Loki client. The default values match the Loki install paths mentioned in the Installing the Loki Operator section. If you used another installation method for Loki, specify the appropriate client information for your install. 
5 The LokiStack mode automatically sets a few configurations: querierUrl , ingesterUrl and statusUrl , tenantID , and corresponding TLS configuration. Cluster roles and a cluster role binding are created for reading and writing logs to Loki. And authToken is set to Forward . You can set these manually using the Manual mode. 6 The spec.quickFilters specification defines filters that show up in the web console. The Application filter keys, src_namespace and dst_namespace , are negated ( ! ), so the Application filter shows all traffic that does not originate from, or have a destination to, any openshift- or netobserv namespaces. For more information, see Configuring quick filters below. Additional resources FlowCollector API reference Working with conversation tracking 5.2. Configuring the Flow Collector resource with Kafka You can configure the FlowCollector resource to use Kafka for high-throughput and low-latency data feeds. A Kafka instance needs to be running, and a Kafka topic dedicated to OpenShift Container Platform Network Observability must be created in that instance. For more information, see Kafka documentation with AMQ Streams . Prerequisites Kafka is installed. Red Hat supports Kafka with AMQ Streams Operator. Procedure In the web console, navigate to Operators Installed Operators . Under the Provided APIs heading for the Network Observability Operator, select Flow Collector . Select the cluster and then click the YAML tab. Modify the FlowCollector resource for OpenShift Container Platform Network Observability Operator to use Kafka, as shown in the following sample YAML: Sample Kafka configuration in FlowCollector resource apiVersion: flows.netobserv.io/v1beta2 kind: FlowCollector metadata: name: cluster spec: deploymentModel: Kafka 1 kafka: address: "kafka-cluster-kafka-bootstrap.netobserv" 2 topic: network-flows 3 tls: enable: false 4 1 Set spec.deploymentModel to Kafka instead of Direct to enable the Kafka deployment model. 2 spec.kafka.address refers to the Kafka bootstrap server address. You can specify a port if needed, for instance kafka-cluster-kafka-bootstrap.netobserv:9093 for using TLS on port 9093. 3 spec.kafka.topic should match the name of a topic created in Kafka. 4 spec.kafka.tls can be used to encrypt all communications to and from Kafka with TLS or mTLS. When enabled, the Kafka CA certificate must be available as a ConfigMap or a Secret, both in the namespace where the flowlogs-pipeline processor component is deployed (default: netobserv ) and where the eBPF agents are deployed (default: netobserv-privileged ). It must be referenced with spec.kafka.tls.caCert . When using mTLS, client secrets must be available in these namespaces as well (they can be generated for instance using the AMQ Streams User Operator) and referenced with spec.kafka.tls.userCert . 5.3. Export enriched network flow data You can send network flows to Kafka, IPFIX, the Red Hat build of OpenTelemetry, or all three at the same time. For Kafka or IPFIX, any processor or storage that supports those inputs, such as Splunk, Elasticsearch, or Fluentd, can consume the enriched network flow data. For OpenTelemetry, network flow data and metrics can be exported to a compatible OpenTelemetry endpoint, such as Red Hat build of OpenTelemetry, Jaeger, or Prometheus. Prerequisites Your Kafka, IPFIX, or OpenTelemetry collector endpoints are available from Network Observability flowlogs-pipeline pods. Procedure In the web console, navigate to Operators Installed Operators . 
Under the Provided APIs heading for the NetObserv Operator , select Flow Collector . Select cluster and then select the YAML tab. Edit the FlowCollector to configure spec.exporters as follows: apiVersion: flows.netobserv.io/v1beta2 kind: FlowCollector metadata: name: cluster spec: exporters: - type: Kafka 1 kafka: address: "kafka-cluster-kafka-bootstrap.netobserv" topic: netobserv-flows-export 2 tls: enable: false 3 - type: IPFIX 4 ipfix: targetHost: "ipfix-collector.ipfix.svc.cluster.local" targetPort: 4739 transport: tcp or udp 5 - type: OpenTelemetry 6 openTelemetry: targetHost: my-otelcol-collector-headless.otlp.svc targetPort: 4317 type: grpc 7 logs: 8 enable: true metrics: 9 enable: true prefix: netobserv pushTimeInterval: 20s 10 expiryTime: 2m # fieldsMapping: 11 # input: SrcAddr # output: source.address 1 4 6 You can export flows to IPFIX, OpenTelemetry, and Kafka individually or concurrently. 2 The Network Observability Operator exports all flows to the configured Kafka topic. 3 You can encrypt all communications to and from Kafka with SSL/TLS or mTLS. When enabled, the Kafka CA certificate must be available as a ConfigMap or a Secret, both in the namespace where the flowlogs-pipeline processor component is deployed (default: netobserv). It must be referenced with spec.exporters.tls.caCert . When using mTLS, client secrets must be available in these namespaces as well (they can be generated for instance using the AMQ Streams User Operator) and referenced with spec.exporters.tls.userCert . 5 You have the option to specify transport. The default value is tcp but you can also specify udp . 7 The protocol of OpenTelemetry connection. The available options are http and grpc . 8 OpenTelemetry configuration for exporting logs, which are the same as the logs created for Loki. 9 OpenTelemetry configuration for exporting metrics, which are the same as the metrics created for Prometheus. These configurations are specified in the spec.processor.metrics.includeList parameter of the FlowCollector custom resource, along with any custom metrics you defined using the FlowMetrics custom resource. 10 The time interval that metrics are sent to the OpenTelemetry collector. 11 Optional :Network Observability network flows formats get automatically renamed to an OpenTelemetry compliant format. The fieldsMapping specification gives you the ability to customize the OpenTelemetry format output. For example in the YAML sample, SrcAddr is the Network Observability input field, and it is being renamed source.address in OpenTelemetry output. You can see both Network Observability and OpenTelemetry formats in the "Network flows format reference". After configuration, network flows data can be sent to an available output in a JSON format. For more information, see "Network flows format reference". Additional resources Network flows format reference . 5.4. Updating the Flow Collector resource As an alternative to editing YAML in the OpenShift Container Platform web console, you can configure specifications, such as eBPF sampling, by patching the flowcollector custom resource (CR): Procedure Run the following command to patch the flowcollector CR and update the spec.agent.ebpf.sampling value: USD oc patch flowcollector cluster --type=json -p "[{"op": "replace", "path": "/spec/agent/ebpf/sampling", "value": <new value>}] -n netobserv" 5.5. Configuring quick filters You can modify the filters in the FlowCollector resource. Exact matches are possible using double-quotes around values. 
Otherwise, partial matches are used for textual values. The bang (!) character, placed at the end of a key, means negation. See the sample FlowCollector resource for more context about modifying the YAML. Note The filter matching types "all of" or "any of" is a UI setting that the users can modify from the query options. It is not part of this resource configuration. Here is a list of all available filter keys: Table 5.1. Filter keys Universal* Source Destination Description namespace src_namespace dst_namespace Filter traffic related to a specific namespace. name src_name dst_name Filter traffic related to a given leaf resource name, such as a specific pod, service, or node (for host-network traffic). kind src_kind dst_kind Filter traffic related to a given resource kind. The resource kinds include the leaf resource (Pod, Service or Node), or the owner resource (Deployment and StatefulSet). owner_name src_owner_name dst_owner_name Filter traffic related to a given resource owner; that is, a workload or a set of pods. For example, it can be a Deployment name, a StatefulSet name, etc. resource src_resource dst_resource Filter traffic related to a specific resource that is denoted by its canonical name, that identifies it uniquely. The canonical notation is kind.namespace.name for namespaced kinds, or node.name for nodes. For example, Deployment.my-namespace.my-web-server . address src_address dst_address Filter traffic related to an IP address. IPv4 and IPv6 are supported. CIDR ranges are also supported. mac src_mac dst_mac Filter traffic related to a MAC address. port src_port dst_port Filter traffic related to a specific port. host_address src_host_address dst_host_address Filter traffic related to the host IP address where the pods are running. protocol N/A N/A Filter traffic related to a protocol, such as TCP or UDP. Universal keys filter for any of source or destination. For example, filtering name: 'my-pod' means all traffic from my-pod and all traffic to my-pod , regardless of the matching type used, whether Match all or Match any . 5.6. Resource management and performance considerations The amount of resources required by Network Observability depends on the size of your cluster and your requirements for the cluster to ingest and store observability data. To manage resources and set performance criteria for your cluster, consider configuring the following settings. Configuring these settings might meet your optimal setup and observability needs. The following settings can help you manage resources and performance from the outset: eBPF Sampling You can set the Sampling specification, spec.agent.ebpf.sampling , to manage resources. Smaller sampling values might consume a large amount of computational, memory and storage resources. You can mitigate this by specifying a sampling ratio value. A value of 100 means 1 flow every 100 is sampled. A value of 0 or 1 means all flows are captured. Smaller values result in an increase in returned flows and the accuracy of derived metrics. By default, eBPF sampling is set to a value of 50, so 1 flow every 50 is sampled. Note that more sampled flows also means more storage needed. Consider starting with the default values and refine empirically, in order to determine which setting your cluster can manage. eBPF features The more features that are enabled, the more CPU and memory are impacted. See "Observing the network traffic" for a complete list of these features. 
Without Loki You can reduce the amount of resources that Network Observability requires by not using Loki and instead relying on Prometheus. For example, when Network Observability is configured without Loki, the total savings of memory usage are in the 20-65% range and CPU utilization is lower by 10-30%, depending upon the sampling value. See "Network Observability without Loki" for more information. Restricting or excluding interfaces Reduce the overall observed traffic by setting the values for spec.agent.ebpf.interfaces and spec.agent.ebpf.excludeInterfaces . By default, the agent fetches all the interfaces in the system, except the ones listed in excludeInterfaces and lo (local interface). Note that the interface names might vary according to the Container Network Interface (CNI) used. Performance fine-tuning The following settings can be used to fine-tune performance after the Network Observability has been running for a while: Resource requirements and limits : Adapt the resource requirements and limits to the load and memory usage you expect on your cluster by using the spec.agent.ebpf.resources and spec.processor.resources specifications. The default limits of 800MB might be sufficient for most medium-sized clusters. Cache max flows timeout : Control how often flows are reported by the agents by using the eBPF agent's spec.agent.ebpf.cacheMaxFlows and spec.agent.ebpf.cacheActiveTimeout specifications. A larger value results in less traffic being generated by the agents, which correlates with a lower CPU load. However, a larger value leads to a slightly higher memory consumption, and might generate more latency in the flow collection. 5.6.1. Resource considerations The following table outlines examples of resource considerations for clusters with certain workload sizes. Important The examples outlined in the table demonstrate scenarios that are tailored to specific workloads. Consider each example only as a baseline from which adjustments can be made to accommodate your workload needs. Table 5.2. Resource recommendations Extra small (10 nodes) Small (25 nodes) Large (250 nodes) [2] Worker Node vCPU and memory 4 vCPUs| 16GiB mem [1] 16 vCPUs| 64GiB mem [1] 16 vCPUs| 64GiB Mem [1] LokiStack size 1x.extra-small 1x.small 1x.medium Network Observability controller memory limit 400Mi (default) 400Mi (default) 400Mi (default) eBPF sampling rate 50 (default) 50 (default) 50 (default) eBPF memory limit 800Mi (default) 800Mi (default) 1600Mi cacheMaxSize 50,000 100,000 (default) 100,000 (default) FLP memory limit 800Mi (default) 800Mi (default) 800Mi (default) FLP Kafka partitions - 48 48 Kafka consumer replicas - 6 18 Kafka brokers - 3 (default) 3 (default) Tested with AWS M6i instances. In addition to this worker and its controller, 3 infra nodes (size M6i.12xlarge ) and 1 workload node (size M6i.8xlarge ) were tested. 5.6.2. Total average memory and CPU usage The following table outlines averages of total resource usage for clusters with a sampling value of 1 and 50 for two different tests: Test 1 and Test 2 . The tests differ in the following ways: Test 1 takes into account high ingress traffic volume in addition to the total number of namespace, pods and services in an OpenShift Container Platform cluster, places load on the eBPF agent, and represents use cases with a high number of workloads for a given cluster size. For example, Test 1 consists of 76 Namespaces, 5153 Pods, and 2305 Services with a network traffic scale of ~350 MB/s. 
Test 2 takes into account high ingress traffic volume in addition to the total number of namespace, pods and services in an OpenShift Container Platform cluster and represents use cases with a high number of workloads for a given cluster size. For example, Test 2 consists of 553 Namespaces, 6998 Pods, and 2508 Services with a network traffic scale of ~950 MB/s. Since different types of cluster use cases are exemplified in the different tests, the numbers in this table do not scale linearly when compared side-by-side. Instead, they are intended to be used as a benchmark for evaluating your personal cluster usage. The examples outlined in the table demonstrate scenarios that are tailored to specific workloads. Consider each example only as a baseline from which adjustments can be made to accommodate your workload needs. Note Metrics exported to Prometheus can impact the resource usage. Cardinality values for the metrics can help determine how much resources are impacted. For more information, see "Network Flows format" in the Additional resources section. Table 5.3. Total average resource usage Sampling value Resources used Test 1 (25 nodes) Test 2 (250 nodes) Sampling = 50 Total NetObserv CPU Usage 1.35 5.39 Total NetObserv RSS (Memory) Usage 16 GB 63 GB Sampling = 1 Total NetObserv CPU Usage 1.82 11.99 Total NetObserv RSS (Memory) Usage 22 GB 87 GB Summary: This table shows average total resource usage of Network Observability, which includes Agents, FLP, Kafka, and Loki with all features enabled. For details about what features are enabled, see the features covered in "Observing the network traffic", which comprises all the features that are enabled for this testing. Additional resources Observing the network traffic from the traffic flows view Network Observability without Loki Network Flows format reference
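To make the performance fine-tuning guidance above concrete, the following FlowCollector excerpt adjusts the eBPF agent cache and resource limits discussed in "Performance fine-tuning". The values are illustrative only, drawn from the defaults and the 250-node example in the tables, and are not recommendations for your cluster.
apiVersion: flows.netobserv.io/v1beta2
kind: FlowCollector
metadata:
  name: cluster
spec:
  agent:
    type: eBPF
    ebpf:
      sampling: 50                 # default; lower values capture more flows and use more resources
      cacheMaxFlows: 100000        # larger values reduce how often agents report, lowering CPU load
      cacheActiveTimeout: 5s       # larger values also reduce reporting frequency but add latency
      resources:
        limits:
          memory: 800Mi            # increase for very large clusters, as in the 250-node example
  processor:
    resources:
      limits:
        memory: 800Mi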
[ "apiVersion: flows.netobserv.io/v1beta2 kind: FlowCollector metadata: name: cluster spec: namespace: netobserv deploymentModel: Direct agent: type: eBPF 1 ebpf: sampling: 50 2 logLevel: info privileged: false resources: requests: memory: 50Mi cpu: 100m limits: memory: 800Mi processor: 3 logLevel: info resources: requests: memory: 100Mi cpu: 100m limits: memory: 800Mi logTypes: Flows advanced: conversationEndTimeout: 10s conversationHeartbeatInterval: 30s loki: 4 mode: LokiStack 5 consolePlugin: register: true logLevel: info portNaming: enable: true portNames: \"3100\": loki quickFilters: 6 - name: Applications filter: src_namespace!: 'openshift-,netobserv' dst_namespace!: 'openshift-,netobserv' default: true - name: Infrastructure filter: src_namespace: 'openshift-,netobserv' dst_namespace: 'openshift-,netobserv' - name: Pods network filter: src_kind: 'Pod' dst_kind: 'Pod' default: true - name: Services network filter: dst_kind: 'Service'", "apiVersion: flows.netobserv.io/v1beta2 kind: FlowCollector metadata: name: cluster spec: deploymentModel: Kafka 1 kafka: address: \"kafka-cluster-kafka-bootstrap.netobserv\" 2 topic: network-flows 3 tls: enable: false 4", "apiVersion: flows.netobserv.io/v1beta2 kind: FlowCollector metadata: name: cluster spec: exporters: - type: Kafka 1 kafka: address: \"kafka-cluster-kafka-bootstrap.netobserv\" topic: netobserv-flows-export 2 tls: enable: false 3 - type: IPFIX 4 ipfix: targetHost: \"ipfix-collector.ipfix.svc.cluster.local\" targetPort: 4739 transport: tcp or udp 5 - type: OpenTelemetry 6 openTelemetry: targetHost: my-otelcol-collector-headless.otlp.svc targetPort: 4317 type: grpc 7 logs: 8 enable: true metrics: 9 enable: true prefix: netobserv pushTimeInterval: 20s 10 expiryTime: 2m # fieldsMapping: 11 # input: SrcAddr # output: source.address", "oc patch flowcollector cluster --type=json -p \"[{\"op\": \"replace\", \"path\": \"/spec/agent/ebpf/sampling\", \"value\": <new value>}] -n netobserv\"" ]
https://docs.redhat.com/en/documentation/openshift_container_platform/4.15/html/network_observability/configuring-network-observability-operators
Chapter 5. Configuring Network Connection Settings
Chapter 5. Configuring Network Connection Settings This chapter describes various configurations of the network connection settings and shows how to configure them by using NetworkManager. 5.1. Configuring 802.3 Link Settings You can configure the 802.3 link settings of an Ethernet connection by modifying the following configuration parameters: 802-3-ethernet.auto-negotiate 802-3-ethernet.speed 802-3-ethernet.duplex You can configure the 802.3 link settings in three main modes: Ignore link negotiation Enforce auto-negotiation activation Manually set the speed and duplex link settings Ignoring link negotiation In this case, NetworkManager ignores the link configuration for an Ethernet connection, keeping the existing configuration on the device. To ignore link negotiation, set the following parameters: Important If the auto-negotiate parameter is set to no, but the speed and duplex values are not set, that does not mean that auto-negotiation is disabled. Enforcing auto-negotiation activation In this case, NetworkManager enforces auto-negotiation on a device. To enforce auto-negotiation activation, set the following options: Manually setting the link speed and duplex In this case, you can manually configure the speed and duplex settings on the link. To manually set the speed and duplex link settings, set the aforementioned parameters as follows: Important Make sure to set both the speed and the duplex values, otherwise NetworkManager does not update the link configuration. As a system administrator, you can configure 802.3 link settings using one of the following options: the nmcli tool the nm-connection-editor utility Configuring 802.3 Link Settings with the nmcli Tool Procedure Create a new Ethernet connection for the enp1s0 device. Set the 802.3 link setting to a configuration of your choice. For details, see Section 5.1, "Configuring 802.3 Link Settings". For example, to manually set the speed option to 100 Mbit/s and duplex to full: Configuring 802.3 Link Settings with nm-connection-editor Procedure Enter nm-connection-editor in a terminal. Select the Ethernet connection you want to edit and click the gear wheel icon to move to the editing dialog. See Section 3.4.3, "Common Configuration Options Using nm-connection-editor" for more information. Select the link negotiation of your choice. Ignore: link configuration is skipped (default). Automatic: link auto-negotiation is enforced on the device. Manual: the Speed and Duplex options can be specified to enforce the link negotiation. Figure 5.1. Configure 802.3 link settings using nm-connection-editor
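As a further illustration of the auto-negotiation mode described above, an existing profile could be modified with nmcli roughly as in the following sketch. The connection name MyEthernet is an assumption, and clearing the duplex property with an empty value is used here to correspond to the NULL setting in the parameter listing.
# Enforce auto-negotiation on an existing profile (connection name is hypothetical)
nmcli connection modify MyEthernet 802-3-ethernet.auto-negotiate yes 802-3-ethernet.speed 0 802-3-ethernet.duplex ""
# Re-activate the profile so that the new link settings take effect
nmcli connection up MyEthernet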
[ "802-3-ethernet.auto-negotiate = no 802-3-ethernet.speed = 0 802-3-ethernet.duplex = NULL", "802-3-ethernet.auto-negotiate = yes 802-3-ethernet.speed = 0 802-3-ethernet.duplex = NULL", "802-3-ethernet.auto-negotiate = no 802-3-ethernet.speed = [speed in Mbit/s] 802-3-ethernet.duplex = [half |full]", "nmcli connection add con-name MyEthernet type ethernet ifname enp1s0 802-3-ethernet.auto-negotiate no 802-3-ethernet.speed 100 802-3-ethernet.duplex full" ]
https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/networking_guide/ch-Configuring_Network_Connection_Settings
1.6. Red Hat JBoss Data Grid Cache Architecture
1.6. Red Hat JBoss Data Grid Cache Architecture Figure 1.1. Red Hat JBoss Data Grid Cache Architecture Red Hat JBoss Data Grid's cache architecture diagram depicts the individual elements and their interaction with each other in each JBoss Data Grid usage mode (Library and Remote Client-Server). For clarity, each cache architecture diagram is separated into two parts: Elements that a user cannot directly interact with are depicted within a dark grey box in the diagram. In Remote Client-Server mode, this includes the Persistent Store, Cache, Cache Manager, L1 Cache, and Server Module. In Library mode, users cannot directly interact with the Persistent Store and L1 Cache. Elements that a user can directly interact with are depicted in a light grey box in the diagram. In Remote Client-Server mode, this includes the Application and the Cache Client. In Library mode, users can interact with the Cache and Cache Manager, as well as the Application. Cache Architecture Elements JBoss Data Grid's cache architecture includes the following elements: The Persistent Store is an optional component. It can permanently store the cached entries for restoration after a data grid shutdown. The Level 1 Cache (or L1 Cache) stores remote cache entries after they are initially accessed, preventing unnecessary remote fetch operations for each subsequent use of the same entries. The Cache Manager controls the life cycle of Cache instances and can store and retrieve them when required. The Cache is the main component for storage and retrieval of the key-value entries. Library and Remote Client-Server Mode Architecture In Library mode, the Application (user code) can interact with the Cache and Cache Manager components directly. In this case, the Application resides in the same Java Virtual Machine (JVM) and can call Cache and Cache Manager Java API methods directly. In Remote Client-Server mode, the Application does not directly interact with the cache. Additionally, the Application usually resides in a different JVM, on a different physical host, or does not need to be a Java application. In this case, the Application uses a Cache Client that communicates with a remote JBoss Data Grid Server over the network using one of the supported protocols, such as Memcached, Hot Rod, or REST. The appropriate server module handles the communication on the server side. When a request is sent to the server remotely, the server module translates the protocol back to the concrete operations performed on the cache component to store and retrieve data.
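To make the Library mode interaction concrete, the following minimal Java sketch creates a Cache Manager in the same JVM, obtains a Cache from it, and stores a key-value entry. It assumes the JBoss Data Grid Library mode (Infinispan) JARs are on the classpath and relies on the default cache configuration; it is an illustration, not a complete application.
import org.infinispan.Cache;
import org.infinispan.manager.DefaultCacheManager;

public class LibraryModeExample {
    public static void main(String[] args) {
        // The Cache Manager controls the life cycle of Cache instances.
        DefaultCacheManager cacheManager = new DefaultCacheManager();
        try {
            // The Cache is the main component for storing and retrieving key-value entries.
            Cache<String, String> cache = cacheManager.getCache();
            cache.put("key", "value");
            System.out.println(cache.get("key"));   // prints "value"
        } finally {
            // Stop the Cache Manager to release resources when done.
            cacheManager.stop();
        }
    }
}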
null
https://docs.redhat.com/en/documentation/red_hat_data_grid/6.6/html/getting_started_guide/red_hat_jboss_data_grid_cache_architecture
Chapter 1. Introducing RHEL on public cloud platforms
Chapter 1. Introducing RHEL on public cloud platforms Public cloud platforms provide computing resources as a service. Instead of using on-premises hardware, you can run your IT workloads, including Red Hat Enterprise Linux (RHEL) systems, as public cloud instances. 1.1. Benefits of using RHEL in a public cloud RHEL as a cloud instance located on a public cloud platform has the following benefits over RHEL on-premises physical systems or virtual machines (VMs): Flexible and fine-grained allocation of resources A cloud instance of RHEL runs as a VM on a cloud platform, which typically means a cluster of remote servers maintained by the provider of the cloud service. Therefore, allocating hardware resources to the instance, such as a specific type of CPU or storage, happens on the software level and is easily customizable. In comparison to a local RHEL system, you are also not limited by the capabilities of your physical host. Instead, you can choose from a variety of features, based on selection offered by the cloud provider. Space and cost efficiency You do not need to own any on-premises servers to host your cloud workloads. This avoids the space, power, and maintenance requirements associated with physical hardware. Instead, on public cloud platforms, you pay the cloud provider directly for using a cloud instance. The cost is typically based on the hardware allocated to the instance and the time you spend using it. Therefore, you can optimize your costs based on your requirements. Software-controlled configurations The entire configuration of a cloud instance is saved as data on the cloud platform, and is controlled by software. Therefore, you can easily create, remove, clone, or migrate the instance. A cloud instance is also operated remotely in a cloud provider console and is connected to remote storage by default. In addition, you can back up the current state of a cloud instance as a snapshot at any time. Afterwards, you can load the snapshot to restore the instance to the saved state. Separation from the host and software compatibility Similarly to a local VM, the RHEL guest operating system on a cloud instance runs on a virtualized kernel. This kernel is separate from the host operating system and from the client system that you use to connect to the instance. Therefore, any operating system can be installed on the cloud instance. This means that on a RHEL public cloud instance, you can run RHEL-specific applications that cannot be used on your local operating system. In addition, even if the operating system of the instance becomes unstable or is compromised, your client system is not affected in any way. Additional resources What is public cloud? What is a hyperscaler? Types of cloud computing Public cloud use cases for RHEL Obtaining RHEL for public cloud deployments 1.2. Public cloud use cases for RHEL Deploying on a public cloud provides many benefits, but might not be the most efficient solution in every scenario. If you are evaluating whether to migrate your RHEL deployments to the public cloud, consider whether your use case will benefit from the advantages of the public cloud. Beneficial use cases Deploying public cloud instances is very effective for flexibly increasing and decreasing the active computing power of your deployments, also known as scaling up and scaling down . Therefore, using RHEL on public cloud is recommended in the following scenarios: Clusters with high peak workloads and low general performance requirements. 
Scaling up and down based on your demands can be highly efficient in terms of resource costs. Quickly setting up or expanding your clusters. This avoids high upfront costs of setting up local servers. Cloud instances are not affected by what happens in your local environment. Therefore, you can use them for backup and disaster recovery. Potentially problematic use cases You are running an existing environment that cannot be adjusted. Customizing a cloud instance to fit the specific needs of an existing deployment may not be cost-effective in comparison with your current host platform. You are operating with a hard limit on your budget. Maintaining your deployment in a local data center typically provides less flexibility but more control over the maximum resource costs than the public cloud does. Next steps Obtaining RHEL for public cloud deployments Additional resources Should I migrate my application to the cloud? Here's how to decide. 1.3. Frequent concerns when migrating to a public cloud Moving your RHEL workloads from a local environment to a public cloud platform might raise concerns about the changes involved. The following are the most commonly asked questions. Will my RHEL work differently as a cloud instance than as a local virtual machine? In most respects, RHEL instances on a public cloud platform work the same as RHEL virtual machines on a local host, such as an on-premises server. Notable exceptions include: Instead of private orchestration interfaces, public cloud instances use provider-specific console interfaces for managing your cloud resources. Certain features, such as nested virtualization, may not work correctly. If a specific feature is critical for your deployment, check the feature's compatibility in advance with your chosen public cloud provider. Will my data stay safe in a public cloud as opposed to a local server? The data in your RHEL cloud instances is in your ownership, and your public cloud provider does not have any access to it. In addition, major cloud providers support data encryption in transit, which improves the security of data when migrating your virtual machines to the public cloud. The general security of your RHEL public cloud instances is managed as follows: Your public cloud provider is responsible for the security of the cloud hypervisor. Red Hat provides the security features of the RHEL guest operating systems in your instances. You manage the specific security settings and practices in your cloud infrastructure. What effect does my geographic region have on the functionality of RHEL public cloud instances? You can use RHEL instances on a public cloud platform regardless of your geographical location. Therefore, you can run your instances in the same region as your on-premises server. However, hosting your instances in a physically distant region might cause high latency when operating them. In addition, depending on the public cloud provider, certain regions may provide additional features or be more cost-efficient. Before creating your RHEL instances, review the properties of the hosting regions available for your chosen cloud provider.
The cloud providers currently certified for running RHEL instances are: Amazon Web Services (AWS) Google Cloud Platform (GCP) Microsoft Azure Note This document specifically talks about deploying RHEL on GCP. Create a RHEL cloud instance on your chosen cloud platform. For more information, see Methods for creating RHEL cloud instances . To keep your RHEL deployment up-to-date, use Red Hat Update Infrastructure (RHUI). Additional resources RHUI documentation Red Hat Open Hybrid Cloud 1.5. Methods for creating RHEL cloud instances To deploy a RHEL instance on a public cloud platform, you can use one of the following methods: Create a system image of RHEL and import it to the cloud platform. To create the system image, you can use the RHEL image builder or you can build the image manually. This method uses your existing RHEL subscription, and is also referred to as bring your own subscription (BYOS). You pre-pay a yearly subscription, and you can use your Red Hat customer discount. Your customer service is provided by Red Hat. For creating multiple images effectively, you can use the cloud-init tool. Purchase a RHEL instance directly from the cloud provider marketplace. You post-pay an hourly rate for using the service. Therefore, this method is also referred to as pay as you go (PAYG). Your customer service is provided by the cloud platform provider. Note For detailed instructions on using various methods to deploy RHEL instances on Google Cloud Platform, see the following chapters in this document. Additional resources What is a golden image? Configuring and managing cloud-init for RHEL 8
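As a brief illustration of the cloud-init approach mentioned above, a user-data file similar to the following sketch can customize a RHEL instance at first boot. The user name, SSH key, and package list are placeholders only; adapt them to your environment.
#cloud-config
# Minimal first-boot customization sketch; adjust the user, key, and packages.
users:
  - name: clouduser                              # hypothetical user name
    groups: wheel
    sudo: ALL=(ALL) NOPASSWD:ALL
    ssh_authorized_keys:
      - ssh-ed25519 AAAAC3Nza...example          # replace with your public key
packages:
  - vim-enhanced
timezone: UTC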
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/8/html/deploying_rhel_8_on_google_cloud_platform/introducing-rhel-on-public-cloud-platforms_cloud-content-gcp
Chapter 5. Creating a Puppet config group
Chapter 5. Creating a Puppet config group A Puppet config group is a named list of Puppet classes that allows you to combine their capabilities and assign them to hosts with a single click. This is equivalent to the concept of profiles in pure Puppet. Procedure In the Satellite web UI, navigate to Configure > Puppet ENC > Config Groups. Click Create Config Group. Select the classes that you want to add to the config group. Choose a meaningful Name for the Puppet config group. Add the selected Puppet classes to the Included Classes field. Click Submit to save the changes.
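For comparison, a config group that bundles several classes plays the same role as a profile class in pure Puppet, as in the following illustrative sketch; the class names are examples only, not classes shipped with Satellite.
# Roughly equivalent "profile" in pure Puppet: one class that pulls in the
# same set of classes the config group would assign (illustrative names).
class profile::webserver {
  include apache
  include postgresql::client
  include myorg::firewall
}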
null
https://docs.redhat.com/en/documentation/red_hat_satellite/6.16/html/managing_configurations_by_using_puppet_integration/creating-a-puppet-config-group_managing-configurations-puppet
probe::socket.create
probe::socket.create
Name
probe::socket.create - Creation of a socket
Synopsis
socket.create
Values
type: Socket type value
name: Name of this probe
protocol: Protocol value
family: Protocol family value
requester: Requested by user process or the kernel (1 = kernel, 0 = user)
Context
The requester (see the requester variable)
Description
Fires at the beginning of creating a socket.
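For example, a short SystemTap script can print the documented values each time the probe fires; run it with stap as root and stop it with Ctrl+C.
# socket_create.stp - trace socket creation using probe::socket.create
# Run as root: stap socket_create.stp
probe socket.create
{
  printf("%s family=%d type=%d protocol=%d requester=%d\n",
         name, family, type, protocol, requester)
}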
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/systemtap_tapset_reference/api-socket-create
Chapter 3. Updating GitOps ZTP
Chapter 3. Updating GitOps ZTP You can update the GitOps Zero Touch Provisioning (ZTP) infrastructure independently from the hub cluster, Red Hat Advanced Cluster Management (RHACM), and the managed OpenShift Container Platform clusters. Note You can update the Red Hat OpenShift GitOps Operator when new versions become available. When updating the GitOps ZTP plugin, review the updated files in the reference configuration and ensure that the changes meet your requirements. Important Using PolicyGenTemplate CRs to manage and deploy policies to managed clusters will be deprecated in an upcoming OpenShift Container Platform release. Equivalent and improved functionality is available using Red Hat Advanced Cluster Management (RHACM) and PolicyGenerator CRs. For more information about PolicyGenerator resources, see the RHACM Policy Generator documentation. Additional resources Configuring managed cluster policies by using PolicyGenerator resources Comparing RHACM PolicyGenerator and PolicyGenTemplate resource patching 3.1. Overview of the GitOps ZTP update process You can update GitOps Zero Touch Provisioning (ZTP) for a fully operational hub cluster running an earlier version of the GitOps ZTP infrastructure. The update process avoids impact on managed clusters. Note Any changes to policy settings, including adding recommended content, results in updated policies that must be rolled out to the managed clusters and reconciled. At a high level, the strategy for updating the GitOps ZTP infrastructure is as follows: Label all existing clusters with the ztp-done label. Stop the ArgoCD applications. Install the new GitOps ZTP tools. Update required content and optional changes in the Git repository. Update and restart the application configuration. 3.2. Preparing for the upgrade Use the following procedure to prepare your site for the GitOps Zero Touch Provisioning (ZTP) upgrade. Procedure Get the latest version of the GitOps ZTP container that has the custom resources (CRs) used to configure Red Hat OpenShift GitOps for use with GitOps ZTP. Extract the argocd/deployment directory by using the following commands: USD mkdir -p ./update USD podman run --log-driver=none --rm registry.redhat.io/openshift4/ztp-site-generate-rhel8:v4.18 extract /home/ztp --tar | tar x -C ./update The /update directory contains the following subdirectories: update/extra-manifest : contains the source CR files that the SiteConfig CR uses to generate the extra manifest configMap . update/source-crs : contains the source CR files that the PolicyGenerator or PolicyGentemplate CR uses to generate the Red Hat Advanced Cluster Management (RHACM) policies. update/argocd/deployment : contains patches and YAML files to apply on the hub cluster for use in the step of this procedure. update/argocd/example : contains example SiteConfig and PolicyGenerator or PolicyGentemplate files that represent the recommended configuration. Update the clusters-app.yaml and policies-app.yaml files to reflect the name of your applications and the URL, branch, and path for your Git repository. If the upgrade includes changes that results in obsolete policies, the obsolete policies should be removed prior to performing the upgrade. Diff the changes between the configuration and deployment source CRs in the /update folder and Git repo where you manage your fleet site CRs. Apply and push the required changes to your site repository. 
Important When you update GitOps ZTP to the latest version, you must apply the changes from the update/argocd/deployment directory to your site repository. Do not use older versions of the argocd/deployment/ files. 3.3. Labeling the existing clusters To ensure that existing clusters remain untouched by the tool updates, label all existing managed clusters with the ztp-done label. Note This procedure only applies when updating clusters that were not provisioned with Topology Aware Lifecycle Manager (TALM). Clusters that you provision with TALM are automatically labeled with ztp-done . Procedure Find a label selector that lists the managed clusters that were deployed with GitOps Zero Touch Provisioning (ZTP), such as local-cluster!=true : USD oc get managedcluster -l 'local-cluster!=true' Ensure that the resulting list contains all the managed clusters that were deployed with GitOps ZTP, and then use that selector to add the ztp-done label: USD oc label managedcluster -l 'local-cluster!=true' ztp-done= 3.4. Stopping the existing GitOps ZTP applications Removing the existing applications ensures that any changes to existing content in the Git repository are not rolled out until the new version of the tools is available. Use the application files from the deployment directory. If you used custom names for the applications, update the names in these files first. Procedure Perform a non-cascaded delete on the clusters application to leave all generated resources in place: USD oc delete -f update/argocd/deployment/clusters-app.yaml Perform a cascaded delete on the policies application to remove all policies: USD oc patch -f policies-app.yaml -p '{"metadata": {"finalizers": ["resources-finalizer.argocd.argoproj.io"]}}' --type merge USD oc delete -f update/argocd/deployment/policies-app.yaml 3.5. Required changes to the Git repository When upgrading the ztp-site-generate container from an earlier release of GitOps Zero Touch Provisioning (ZTP) to 4.10 or later, there are additional requirements for the contents of the Git repository. Existing content in the repository must be updated to reflect these changes. Note The following procedure assumes you are using PolicyGenerator resources instead of PolicyGentemplate resources for cluster policies management. Make required changes to PolicyGenerator files: All PolicyGenerator files must be created in a Namespace prefixed with ztp . This ensures that the GitOps ZTP application is able to manage the policy CRs generated by GitOps ZTP without conflicting with the way Red Hat Advanced Cluster Management (RHACM) manages the policies internally. Add the kustomization.yaml file to the repository: All SiteConfig and PolicyGenerator CRs must be included in a kustomization.yaml file under their respective directory trees. For example: ├── acmpolicygenerator │ ├── site1-ns.yaml │ ├── site1.yaml │ ├── site2-ns.yaml │ ├── site2.yaml │ ├── common-ns.yaml │ ├── common-ranGen.yaml │ ├── group-du-sno-ranGen-ns.yaml │ ├── group-du-sno-ranGen.yaml │ └── kustomization.yaml └── siteconfig ├── site1.yaml ├── site2.yaml └── kustomization.yaml Note The files listed in the generator sections must contain either SiteConfig or {policy-gen-cr} CRs only. If your existing YAML files contain other CRs, for example, Namespace , these other CRs must be pulled out into separate files and listed in the resources section. The PolicyGenerator kustomization file must contain all PolicyGenerator YAML files in the generator section and Namespace CRs in the resources section. 
For example: apiVersion: kustomize.config.k8s.io/v1beta1 kind: Kustomization generators: - acm-common-ranGen.yaml - acm-group-du-sno-ranGen.yaml - site1.yaml - site2.yaml resources: - common-ns.yaml - acm-group-du-sno-ranGen-ns.yaml - site1-ns.yaml - site2-ns.yaml The SiteConfig kustomization file must contain all SiteConfig YAML files in the generator section and any other CRs in the resources section: apiVersion: kustomize.config.k8s.io/v1beta1 kind: Kustomization generators: - site1.yaml - site2.yaml Remove the pre-sync.yaml and post-sync.yaml files. In OpenShift Container Platform 4.10 and later, the pre-sync.yaml and post-sync.yaml files are no longer required. The update/deployment/kustomization.yaml CR manages the policies deployment on the hub cluster. Note There is a set of pre-sync.yaml and post-sync.yaml files under both the SiteConfig and PolicyGenerator trees. Review and incorporate recommended changes Each release may include additional recommended changes to the configuration applied to deployed clusters. Typically, these changes result in lower CPU use by the OpenShift platform, additional features, or improved tuning of the platform. Review the reference SiteConfig and PolicyGenerator CRs applicable to the types of cluster in your network. These examples can be found in the argocd/example directory extracted from the GitOps ZTP container. 3.6. Installing the new GitOps ZTP applications Using the extracted argocd/deployment directory, and after ensuring that the applications point to your site Git repository, apply the full contents of the deployment directory. Applying the full contents of the directory ensures that all necessary resources for the applications are correctly configured. Procedure To install the GitOps ZTP plugin, patch the ArgoCD instance in the hub cluster with the relevant multicluster engine (MCE) subscription image. Customize the patch file that you previously extracted into the out/argocd/deployment/ directory for your environment. Select the multicluster-operators-subscription image that matches your RHACM version. For RHACM 2.8 and 2.9, use the registry.redhat.io/rhacm2/multicluster-operators-subscription-rhel8:v<rhacm_version> image. For RHACM 2.10 and later, use the registry.redhat.io/rhacm2/multicluster-operators-subscription-rhel9:v<rhacm_version> image. Important The version of the multicluster-operators-subscription image must match the RHACM version. Beginning with the MCE 2.10 release, RHEL 9 is the base image for multicluster-operators-subscription images. Click [Expand for Operator list] in the "Platform Aligned Operators" table in OpenShift Operator Life Cycles to view the complete supported Operators matrix for OpenShift Container Platform. 
Modify the out/argocd/deployment/argocd-openshift-gitops-patch.json file with the multicluster-operators-subscription image that matches your RHACM version: { "args": [ "-c", "mkdir -p /.config/kustomize/plugin/policy.open-cluster-management.io/v1/policygenerator && cp /policy-generator/PolicyGenerator-not-fips-compliant /.config/kustomize/plugin/policy.open-cluster-management.io/v1/policygenerator/PolicyGenerator" 1 ], "command": [ "/bin/bash" ], "image": "registry.redhat.io/rhacm2/multicluster-operators-subscription-rhel9:v2.10", 2 3 "name": "policy-generator-install", "imagePullPolicy": "Always", "volumeMounts": [ { "mountPath": "/.config", "name": "kustomize" } ] } 1 Optional: For RHEL 9 images, copy the required universal executable in the /policy-generator/PolicyGenerator-not-fips-compliant folder for the ArgoCD version. 2 Match the multicluster-operators-subscription image to the RHACM version. 3 In disconnected environments, replace the URL for the multicluster-operators-subscription image with the disconnected registry equivalent for your environment. Patch the ArgoCD instance. Run the following command: USD oc patch argocd openshift-gitops \ -n openshift-gitops --type=merge \ --patch-file out/argocd/deployment/argocd-openshift-gitops-patch.json In RHACM 2.7 and later, the multicluster engine enables the cluster-proxy-addon feature by default. Apply the following patch to disable the cluster-proxy-addon feature and remove the relevant hub cluster and managed pods that are responsible for this add-on. Run the following command: USD oc patch multiclusterengines.multicluster.openshift.io multiclusterengine --type=merge --patch-file out/argocd/deployment/disable-cluster-proxy-addon.json Apply the pipeline configuration to your hub cluster by running the following command: USD oc apply -k out/argocd/deployment 3.7. Rolling out the GitOps ZTP configuration changes If any configuration changes were included in the upgrade due to implementing recommended changes, the upgrade process results in a set of policy CRs on the hub cluster in the Non-Compliant state. With the GitOps Zero Touch Provisioning (ZTP) version 4.10 and later ztp-site-generate container, these policies are set to inform mode and are not pushed to the managed clusters without an additional step by the user. This ensures that potentially disruptive changes to the clusters can be managed in terms of when the changes are made, for example, during a maintenance window, and how many clusters are updated concurrently. To roll out the changes, create one or more ClusterGroupUpgrade CRs as detailed in the TALM documentation. The CR must contain the list of Non-Compliant policies that you want to push out to the managed clusters as well as a list or selector of which clusters should be included in the update. Additional resources For information about the Topology Aware Lifecycle Manager (TALM), see About the Topology Aware Lifecycle Manager configuration . For information about creating ClusterGroupUpgrade CRs, see About the auto-created ClusterGroupUpgrade CR for GitOps ZTP .
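The following is a minimal sketch of a ClusterGroupUpgrade CR that rolls the updated policies out to two clusters. The namespace, cluster names, policy names, and timing values are placeholders; replace them with the Non-Compliant policies and the clusters in your environment, and consult the TALM documentation for the authoritative CR schema:

# Create and apply a ClusterGroupUpgrade CR to remediate the selected policies
cat <<EOF | oc apply -f -
apiVersion: ran.openshift.io/v1alpha1
kind: ClusterGroupUpgrade
metadata:
  name: ztp-upgrade-rollout
  namespace: ztp-install
spec:
  clusters:
  - spoke1
  - spoke2
  enable: true
  managedPolicies:
  - ztp-common.common-config-policy
  - ztp-common.common-subscriptions-policy
  remediationStrategy:
    maxConcurrency: 2
    timeout: 240
EOF

Setting maxConcurrency controls how many clusters are updated at the same time, which lets you stage the rollout within a maintenance window.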
[ "mkdir -p ./update", "podman run --log-driver=none --rm registry.redhat.io/openshift4/ztp-site-generate-rhel8:v4.18 extract /home/ztp --tar | tar x -C ./update", "oc get managedcluster -l 'local-cluster!=true'", "oc label managedcluster -l 'local-cluster!=true' ztp-done=", "oc delete -f update/argocd/deployment/clusters-app.yaml", "oc patch -f policies-app.yaml -p '{\"metadata\": {\"finalizers\": [\"resources-finalizer.argocd.argoproj.io\"]}}' --type merge", "oc delete -f update/argocd/deployment/policies-app.yaml", "├── acmpolicygenerator │ ├── site1-ns.yaml │ ├── site1.yaml │ ├── site2-ns.yaml │ ├── site2.yaml │ ├── common-ns.yaml │ ├── common-ranGen.yaml │ ├── group-du-sno-ranGen-ns.yaml │ ├── group-du-sno-ranGen.yaml │ └── kustomization.yaml └── siteconfig ├── site1.yaml ├── site2.yaml └── kustomization.yaml", "apiVersion: kustomize.config.k8s.io/v1beta1 kind: Kustomization generators: - acm-common-ranGen.yaml - acm-group-du-sno-ranGen.yaml - site1.yaml - site2.yaml resources: - common-ns.yaml - acm-group-du-sno-ranGen-ns.yaml - site1-ns.yaml - site2-ns.yaml", "apiVersion: kustomize.config.k8s.io/v1beta1 kind: Kustomization generators: - site1.yaml - site2.yaml", "{ \"args\": [ \"-c\", \"mkdir -p /.config/kustomize/plugin/policy.open-cluster-management.io/v1/policygenerator && cp /policy-generator/PolicyGenerator-not-fips-compliant /.config/kustomize/plugin/policy.open-cluster-management.io/v1/policygenerator/PolicyGenerator\" 1 ], \"command\": [ \"/bin/bash\" ], \"image\": \"registry.redhat.io/rhacm2/multicluster-operators-subscription-rhel9:v2.10\", 2 3 \"name\": \"policy-generator-install\", \"imagePullPolicy\": \"Always\", \"volumeMounts\": [ { \"mountPath\": \"/.config\", \"name\": \"kustomize\" } ] }", "oc patch argocd openshift-gitops -n openshift-gitops --type=merge --patch-file out/argocd/deployment/argocd-openshift-gitops-patch.json", "oc patch multiclusterengines.multicluster.openshift.io multiclusterengine --type=merge --patch-file out/argocd/deployment/disable-cluster-proxy-addon.json", "oc apply -k out/argocd/deployment" ]
https://docs.redhat.com/en/documentation/openshift_container_platform/4.18/html/edge_computing/ztp-updating-gitops
Chapter 3. Installing applications in GNOME
Chapter 3. Installing applications in GNOME This section describes various approaches that you can use to install a new application in GNOME 3. Prerequisites Administrator privileges. 3.1. The GNOME Software application GNOME Software is a utility that enables you to install and update applications, software components, and GNOME Shell extensions in a graphical interface. GNOME Software provides a catalog of graphical applications, which are the applications that include a *.desktop file. The available applications are grouped into multiple categories according to their purpose. GNOME Software uses the PackageKit and Flatpak technologies as its back ends. 3.2. Installing an application using GNOME Software This procedure installs a graphical application using the GNOME Software installer. Procedure Launch the GNOME Software application. Find the application to be installed in the available categories: Audio & Video Communication & News Productivity Graphics & Photography Add-ons Add-ons include, for example, GNOME Shell extensions, codecs, or fonts. Developer Tools Utilities Click the selected application. Click the Install button. 3.3. Installing an application to open a file type This procedure installs an application that can open a given file type. Prerequisites You can access a file of the required file type in your file system. Procedure Try opening a file that is associated with an application that is currently not installed on your system. GNOME automatically identifies the suitable application that can open the file, and offers to download the application. 3.4. Installing an RPM package in GNOME This procedure installs an RPM software package that you manually downloaded as a file. For a command-line alternative, see the example at the end of this chapter. Procedure Download the required RPM package. In the Files application, open the directory that stores the downloaded RPM package. Note By default, downloaded files are stored in the /home/ user /Downloads/ directory. Double-click the icon of the RPM package to install it. 3.5. Installing an application from the application search in GNOME This procedure installs a graphical application that you find in the GNOME application search. Procedure Open the Activities Overview screen. Start typing the name of the required application in the search entry. GNOME automatically finds the application in a repository, and displays the application's icon. Click the application's icon to open GNOME Software . Click the icon of the application again. Click Install to finish the installation in GNOME Software . 3.6. Additional resources For installing software on the command line, see Installing software with yum .
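As a command-line alternative to the Files application workflow in Section 3.4, you can install a locally downloaded RPM package with yum; the package path below is a placeholder for your downloaded file:

# Install a downloaded RPM package, resolving its dependencies from the enabled repositories
sudo yum install /home/user/Downloads/package-name.rpm

yum pulls in any dependencies of the package from the enabled repositories, which is what the graphical double-click installation also does through PackageKit.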
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/8/html/using_the_desktop_environment_in_rhel_8/installing-applications-in-gnome_using-the-desktop-environment-in-rhel-8
Chapter 7. Configuring the systems and running tests using RHCert CLI Tool
Chapter 7. Configuring the systems and running tests using RHCert CLI Tool To complete the certification process using CLI, you must prepare the host under test (HUT) and test server, run the tests, and retrieve the test results. 7.1. Using the test plan to prepare the host under test for testing Running the provision command performs a number of operations, such as setting up passwordless SSH communication with the test server, installing the required packages on your system based on the certification type, and creating a final test plan to run, which is a list of common tests taken from both the test plan provided by Red Hat and tests generated on discovering the system requirements. For instance, required hardware or software packages will be installed if the test plan is designed for certifying a hardware or a software product. Prerequisites You have the hostname or the IP address of the test server. Procedure Run the provision command in either of the following ways. The test plan is automatically downloaded to your system. If you have already downloaded the test plan: Replace <path_to_test_plan_document> with the test plan file saved on your system. Follow the on-screen instructions. If you have not downloaded the test plan: Follow the on-screen instructions and enter your Certification ID when prompted. When prompted, provide the hostname or the IP address of the test server to set up passwordless SSH. You are prompted only the first time you add a new system. 7.2. Using the test plan to prepare the test server for testing Running the provision command enables and starts the rhcertd service, which configures services specified in the test suite on the test server, such as iperf for network testing, and an nfs mount point used in kdump testing. Prerequisites You have the hostname or IP address of the host under test. Procedure Run the provision command, assigning the "test server" role to the system you are adding. This is required only for provisioning the test server. Replace <path_to_test_plan_document> with the test plan file saved on your system. 7.3. Running the certification tests using CLI Procedure Run the following command: When prompted, choose whether to run each test by typing yes or no . You can also run particular tests from the list by typing select . Note After a test reboot, rhcert runs in the background to verify the image. Use tail -f /var/log/rhcert/RedHatCertDaemon.log to see the current progress and status of the verification. 7.4. Submitting the test results file Procedure Log in to authenticate your device. Note Logging in is mandatory to submit the test results file. Open the generated URL in a new browser window or tab. Enter the login and password and click Log in . Click Grant access . The message "Device log in successful" is displayed. Return to the terminal and enter yes to the Please confirm once you grant access prompt. Submit the result file. When prompted, enter your Certification ID.
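The following is a condensed sketch of the full CLI flow described above, run first on the host under test and then on the test server. The test plan path is a placeholder, and the exact prompts vary with your certification type:

# On the host under test (HUT): provision using a downloaded test plan
rhcert-provision /root/test_plan.xml

# On the test server: provision with the test-server role
rhcert-provision --role test-server /root/test_plan.xml

# On the HUT: run the tests, then authenticate and submit the results
rhcert-run
rhcert-cli login
rhcert-submit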
[ "rhcert-provision <path_to_test_plan_document>", "rhcert-provision", "rhcert-provision --role test-server <path_to_test_plan_document>", "rhcert-run", "rhcert-cli login", "rhcert-submit" ]
https://docs.redhat.com/en/documentation/red_hat_hardware_certification/2025/html/red_hat_hardware_certification_test_suite_user_guide/assembly_configuring-the-hosts-and-running-tests-by-using-CLI_hw-test-suite-configure-hosts-run-tests-use-Cockpit
Chapter 6. Ceph Object Storage Daemon (OSD) configuration
Chapter 6. Ceph Object Storage Daemon (OSD) configuration As a storage administrator, you can configure the Ceph Object Storage Daemon (OSD) to be redundant and optimized based on the intended workload. Prerequisites Installation of the Red Hat Ceph Storage software. 6.1. Ceph OSD configuration All Ceph clusters have a configuration, which defines: Cluster identity Authentication settings Ceph daemon membership in the cluster Network configuration Host names and addresses Paths to keyrings Paths to OSD log files Other runtime options A deployment tool, such as cephadm , will typically create an initial Ceph configuration file for you. However, you can create one yourself if you prefer to bootstrap a cluster without using a deployment tool. For your convenience, each daemon has a series of default values. Many are set by the ceph/src/common/config_opts.h script. You can override these settings with a Ceph configuration file or at runtime by using the monitor tell command or connecting directly to a daemon socket on a Ceph node. Important Red Hat does not recommend changing the default paths, as it makes it more difficult to troubleshoot Ceph later. Additional Resources For more information about cephadm and the Ceph orchestrator, see the Red Hat Ceph Storage Operations Guide . 6.2. Scrubbing the OSD In addition to making multiple copies of objects, Ceph ensures data integrity by scrubbing placement groups. Ceph scrubbing is analogous to the fsck command on the object storage layer. For each placement group, Ceph generates a catalog of all objects and compares each primary object and its replicas to ensure that no objects are missing or mismatched. Light scrubbing (daily) checks the object size and attributes. Deep scrubbing (weekly) reads the data and uses checksums to ensure data integrity. Scrubbing is important for maintaining data integrity, but it can reduce performance. Adjust the following settings to increase or decrease scrubbing operations. Additional resources See Ceph scrubbing options in the appendix of the Red Hat Ceph Storage Configuration Guide for more details. 6.3. Backfilling an OSD When you add Ceph OSDs to a cluster or remove them from the cluster, the CRUSH algorithm rebalances the cluster by moving placement groups to or from Ceph OSDs to restore the balance. The process of migrating placement groups and the objects they contain can reduce the cluster operational performance considerably. To maintain operational performance, Ceph performs this migration with the 'backfill' process, which allows Ceph to set backfill operations to a lower priority than requests to read or write data. 6.4. OSD recovery When the cluster starts or when a Ceph OSD terminates unexpectedly and restarts, the OSD begins peering with other Ceph OSDs before a write operation can occur. If a Ceph OSD crashes and comes back online, usually it will be out of sync with other Ceph OSDs containing more recent versions of objects in the placement groups. When this happens, the Ceph OSD goes into recovery mode and seeks to get the latest copy of the data and bring its map back up to date. Depending upon how long the Ceph OSD was down, the OSD's objects and placement groups may be significantly out of date. Also, if a failure domain went down, for example, a rack, more than one Ceph OSD might come back online at the same time. This can make the recovery process time consuming and resource intensive. 
To maintain operational performance, Ceph performs recovery with limitations on the number of recovery requests, threads, and object chunk sizes which allows Ceph to perform well in a degraded state. Additional resources See all the Red Hat Ceph Storage Ceph OSD configuration options in OSD object daemon storage configuration options for specific option descriptions and usage.
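As a hedged illustration of how the scrubbing, backfill, and recovery settings discussed above are typically adjusted at runtime, the following commands use the centralized configuration database; the values shown are examples only and should be chosen to suit your workload:

# Restrict light scrubbing to off-peak hours
ceph config set osd osd_scrub_begin_hour 22
ceph config set osd osd_scrub_end_hour 6

# Reduce the impact of backfill and recovery on client I/O
ceph config set osd osd_max_backfills 1
ceph config set osd osd_recovery_max_active 1

# Confirm the current value of an option
ceph config get osd osd_max_backfills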
null
https://docs.redhat.com/en/documentation/red_hat_ceph_storage/6/html/configuration_guide/ceph-object-storage-daemon-configuration
Upgrading OpenShift AI Self-Managed
Upgrading OpenShift AI Self-Managed Red Hat OpenShift AI Self-Managed 2.18 Upgrade OpenShift AI on OpenShift
null
https://docs.redhat.com/en/documentation/red_hat_openshift_ai_self-managed/2.18/html/upgrading_openshift_ai_self-managed/index
9.6. Analyzing and Monitoring Network Connectivity
9.6. Analyzing and Monitoring Network Connectivity 9.6.1. Introducing Skydive Skydive can be used to monitor logical networks, including Open Virtual Networks (OVN) that have been defined as an External Network Provider . Skydive provides a live view of your network topology, dependencies, and flows, generates reports, and performs configuration audits. You can use the data presented by Skydive to: Detect packet loss Check that your deployment is working correctly, by capturing a cluster's network topology, including bridges and interfaces Review whether the expected MTU settings are correctly applied Capture network traffic between virtual machines or between virtual machines and hosts For more information about Skydive's feature set, see http://skydive.network . Note Skydive is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs), might not be functionally complete, and Red Hat does not recommend to use them for production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. For more information on Red Hat Technology Preview features support scope, see https://access.redhat.com/support/offerings/techpreview/ . 9.6.2. Installing Skydive Procedure Install skydive-ansible on the Manager machine: Copy /usr/share/ovirt-engine/playbooks/install-skydive.inventory.sample to the current directory and rename it to inventory . Modify the inventory/01_hosts file as follows (see below for full contents): Update skydive_os_auth_url with the Manager's FQDN. This is used by the OVN, which uses the same FQDN as the Manager. Update ovn_provider_username with the username used for the OVN provider. The default is defined in /etc/ovirt-provider-ovn/ovirt-provider-ovn.conf . Update ovn_provider_password . Under [agents:children] <host_group> define the hosts, clusters, or data center on which you are installing the Skydive agents. You can view a list of valid groups by running: Note There is no need to list each host explicitly. To install the agent on all hosts in the cluster, add ovirt_cluster_Default . Alternatively, to install the agent on all hosts in the data center, add ovirt_datacenter_Default . Sample Inventory File Run the playbook: Verify that Skydive recognizes the virtual machine's port by going to http:// MANAGERS_FQDN :8082, selecting a virtual machine, and checking the following fields in the Metadata section of the Capture tab: Manager: Neutron NetworkName: network_name IPV4: IP_address , if a subnet is used See Section 9.6.3, "Using Skydive to Test Network Connection" to view an example of how you can use Skydive to capture your network's activity. 9.6.3. Using Skydive to Test Network Connection This example tests the connection between two hosts that have NICs with IPv4 addresses. The NICs are connected to a logical network that is tagged as VLAN 4. For information on assigning an IP address to a logical network, see Section 9.4.2, "Editing Host Network Interfaces and Assigning Logical Networks to Hosts" . Procedure Install Skydive. Open Skydive from http:// MANAGERS_FQDN :8082 . Select network_4 on rhv-host1 in the network map. Click Create in the Capture tab and click Start . Repeat the steps for network_4 on rhv-host0 . Click the Generate tab. Select eth0 on rhv-host0 as the Source and eth0 on rhv-host1 as the Destination . Select ICMPv4/Echo Request from the Type drop-down list. 
Click Inject to inject a packet. Open the Flows tab. The results of the ping are displayed in a table. If the ping was successful, a row containing ICMPv4 and the source and destination IP addresses is displayed. When you move your cursor over that row, network_4 is highlighted with a yellow circle on the network map. For more information on using Skydive, see the Skydive documentation .
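As a quick check after running the playbook, you can verify that the analyzer and agent services are running and that the analyzer is answering on its default port. The service names below assume the defaults used by the skydive packages, so adjust them if your deployment differs:

# On the Manager machine (analyzer)
systemctl status skydive-analyzer

# On each host where the agent was installed
systemctl status skydive-agent

# Confirm that the analyzer web endpoint responds
curl -s -o /dev/null -w '%{http_code}\n' http://MANAGERS_FQDN:8082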
[ "yum --disablerepo=\"*\" --enablerepo=\"rhel-7-server-rpms,rhel-7-server-extras-rpms,rhel-7-server-rh-common-rpms,rhel-7-server-openstack-14-rpms\" install skydive-ansible", "/usr/share/ovirt-engine-metrics/bin/ovirt-engine-hosts-ansible-inventory | python -m json.tool", "[agents] [analyzers] [skydive:children] analyzers agents [skydive:vars] skydive_listen_ip=0.0.0.0 skydive_deployment_mode=package skydive_extra_config={'agent.topology.probes': ['ovsdb', 'neutron'], 'agent.topology.neutron.ssl_insecure': true} skydive_fabric_default_interface=ovirtmgmt skydive_os_auth_url=https:// MANAGERS_FQDN :35357/v2.0 skydive_os_service_username= ovn_provider_username skydive_os_service_password= ovn_provider_password skydive_os_service_tenant_name=service skydive_os_service_domain_name=Default skydive_os_service_region_name=RegionOne [agents:vars] ansible_ssh_private_key_file=/etc/pki/ovirt-engine/keys/engine_id_rsa [agents:children] host_group [analyzers] localhost ansible_connection=local", "ansible-playbook -i inventory /usr/share/ovirt-engine/playbooks/install-skydive.yml /usr/share/skydive-ansible/playbook.yml.sample" ]
https://docs.redhat.com/en/documentation/red_hat_virtualization/4.3/html/administration_guide/sect-Testing
Chapter 7. Security
Chapter 7. Security 7.1. Securing connections with SSL/TLS Red Hat build of Rhea uses SSL/TLS to encrypt communication between clients and servers. To connect to a remote server with SSL/TLS, set the transport connection option to tls . Example: Enabling SSL/TLS var opts = { host: "example.com", port: 5671, transport: "tls" }; container.connect(opts); Note By default, the client will reject connections to servers with untrusted certificates. This is sometimes the case in test environments. To bypass certificate authorization, set the rejectUnauthorized connection option to false . Be aware that this compromises the security of your connection. 7.2. Connecting with a user and password Red Hat build of Rhea can authenticate connections with a user and password. To specify the credentials used for authentication, set the username and password connection options. Example: Connecting with a user and password var opts = { host: "example.com", username: "alice" , password: "secret" }; container.connect(opts); 7.3. Configuring SASL authentication Red Hat build of Rhea uses the SASL protocol to perform authentication. SASL can use a number of different authentication mechanisms . When two network peers connect, they exchange their allowed mechanisms, and the strongest mechanism allowed by both is selected. Red Hat build of Rhea enables SASL mechanisms based on the presence of user and password information. If the user and password are both specified, PLAIN is used. If only a user is specified, ANONYMOUS is used. If neither is specified, SASL is disabled.
[ "var opts = { host: \"example.com\", port: 5671, transport: \"tls\" }; container.connect(opts);", "var opts = { host: \"example.com\", username: \"alice\" , password: \"secret\" }; container.connect(opts);" ]
https://docs.redhat.com/en/documentation/red_hat_build_of_rhea/3.0/html/using_rhea/security
Chapter 1. RHACS Cloud Service service description
Chapter 1. RHACS Cloud Service service description 1.1. Introduction to RHACS Red Hat Advanced Cluster Security for Kubernetes (RHACS) is an enterprise-ready, Kubernetes-native container security solution that helps you build, deploy, and run cloud-native applications more securely. Red Hat Advanced Cluster Security Cloud Service (RHACS Cloud Service) provides Kubernetes-native security as a service. With RHACS Cloud Service, Red Hat maintains, upgrades, and manages your Central services. Central services include the user interface (UI), data storage, RHACS application programming interface (API), and image scanning capabilities. You deploy your Central service through the Red Hat Hybrid Cloud Console. When you create a new ACS instance, Red Hat creates your individual control plane for RHACS. RHACS Cloud Service allows you to secure self-managed clusters that communicate with a Central instance. The clusters you secure, called Secured Clusters, are managed by you, and not by Red Hat. Secured Cluster services include optional vulnerability scanning services, admission control services, and data collection services used for runtime monitoring and compliance. You install Secured Cluster services on any OpenShift or Kubernetes cluster you want to secure. 1.2. Architecture RHACS Cloud Service is hosted on Amazon Web Services (AWS) over two regions, eu-west-1 and us-east-1, and uses the network access points provided by the cloud provider. Each tenant from RHACS Cloud Service uses highly-available egress proxies and is spread over 3 availability zones. For more information about RHACS Cloud Service system architecture and components, see Red Hat Advanced Cluster Security Cloud Service (RHACS Cloud Service) architecture . 1.3. Billing Customers can purchase a RHACS Cloud Service subscription on the Amazon Web Services (AWS) marketplace. The service cost is charged hourly per secured core, or vCPU of a node belonging to a secured cluster. Example 1.1. Subscription cost example If you have established a connection to two secured clusters, each with 5 identical nodes with 8 vCPUs (such as Amazon EC2 m7g.2xlarge), the total number of secured cores is 80 (2 x 5 x 8 = 80). 1.4. Security and compliance All RHACS Cloud Service data in the Central instance is encrypted in transit and at rest. The data is stored in secure storage with full replication and high availability together with regularly-scheduled backups. RHACS Cloud Service is available through cloud data centers that ensure optimal performance and the ability to meet data residency requirements. 1.4.1. Information security guidelines, roles, and responsibilities Red Hat's information security guidelines, aligned with the NIST Cybersecurity Framework , are approved by executive management. Red Hat maintains a dedicated team of globally-distributed certified information security professionals. See the following resources: FIRST: RH-ISIRT team TF-CSIRT: RH-ISIRT team Red Hat has strict internal policies and practices to protect our customers and their businesses. These policies and practices are confidential. In addition, we comply with all applicable laws and regulations, including those related to data privacy. Red Hat's information security roles and responsibilities are not managed by third parties. Red Hat maintains an ISO 27001 certification for our corporate information security management system (ISMS), which governs how all of our people work, corporate endpoint devices, and authentication and authorization practices. 
We have taken a standardized approach to this through the implementation of the Red Hat Enterprise Security Standard (ESS) to all infrastructure, products, services and technology that Red Hat employs. A copy of the ESS is available upon request. RHACS Cloud Service runs on an instance of OpenShift Dedicated hosted on Amazon Web Services (AWS). OpenShift Dedicated is compliant with ISO 27001, ISO 27017, ISO 27018, PCI DSS, SOC 2 Type 2, and HIPAA. Strong processes and security controls are aligned with industry standards to manage information security. RHACS Cloud Service follows the same security principles, guidelines, processes and controls defined for OpenShift Dedicated. These certifications demonstrate how our services platform, associated operations, and management practices align with core security requirements. We meet many of these requirements by following solid Secure Software Development Framework (SSDF) practices as defined by NIST, including build pipeline security. Implementation of SSDF controls are implemented via our Secure Software Management Lifecycle (SSML) for all products and services. Red Hat's proven and experienced global site reliability engineering (SRE) team is available 24x7 and proactively manages the cluster life cycle, infrastructure configuration, scaling, maintenance, security patching, and incident response as it relates to the hosted components of RHACS Cloud Service. The Red Hat SRE team is responsible for managing HA, uptime, backups, restore, and security for the RHACS Cloud Service control plane. RHACS Cloud Service comes with a 99.95% availability SLA and 24x7 RH SRE support by phone or chat. You are responsible for use of the product, including implementation of policies, vulnerability management, and deployment of secured cluster components within your OpenShift Container Platform environments. The Red Hat SRE team manages the control plane that contains tenant data in line with the compliance frameworks noted previously, including: All Red Hat SRE access the data plane clusters through the backplane which enables audited access to the cluster Red Hat SRE only deploys images from the Red Hat registry. All content posted to the Red Hat registry goes through rigorous checks. These images are the same images available to self-managed customers. Each tenant has their own individual mTLS CA, which encrypts data in-transit, enabling multi-tenant isolation. Additional isolation is provided via SELinux controls namespaces and network policies. Each tenant has their own instance of the RDS database. All Red Hat SREs and developers go through rigorous Secure Development Lifecycle training. For more information, see the following resources: Red Hat Site Reliability Engineering (SRE) services Red Hat OpenShift Dedicated An Overview of Red Hat's Secure Development Lifecycle (SDL) practices 1.4.2. Vulnerability management program Red Hat scans for vulnerabilities in our products during the build process and our dedicated Product Security team tracks and assesses newly-discovered vulnerabilities. Red Hat Information Security regularly scans running environments for vulnerabilities. Qualified critical and important Security Advisories (RHSAs) and urgent and selected high priority Bug Fix Advisories (RHBAs) are released as they become available. All other available fix and qualified patches are released via periodic updates. All RHACS Cloud Service software impacted by critical or important severity flaws are updated as soon as the fix is available. 
For more information about remediation of critical or high-priority issues, see Understanding Red Hat's Product Security Incident Response Plan . 1.4.3. Security exams and audits RHACS Cloud Service does not currently hold any external security certifications or attestations. The Red Hat Information Risk and Security Team has achieved ISO 27001:2013 certification for our Information Security Management System (ISMS). 1.4.4. Systems interoperability security RHACS Cloud Service supports integrations with registries, CI systems, notification systems, workflow systems like ServiceNow and Jira, and Security information and event management (SIEM) platforms. For more information about supported integrations, see the Integrating documentation. Custom integrations can be implemented using the API or generic webhooks. RHACS Cloud Service uses certificate-based architecture (mTLS) for both authentication and end-to-end encryption of all inflight traffic between the customer's site and Red Hat. It does not require a VPN. IP allowlists are not supported. Data transfer is encrypted using mTLS. File transfer, including Secure FTP, is not supported. 1.4.5. Malicious code prevention RHACS Cloud Service is deployed on Red Hat Enterprise Linux CoreOS (RHCOS). The user space in RHCOS is read-only. In addition, all RHACS Cloud Service instances are monitored in runtime by RHACS. Red Hat uses a commercially-available, enterprise-grade anti-virus solution for Windows and Mac platforms, which is centrally managed and logged. Anti-virus solutions on Linux-based platforms are not part of Red Hat's strategy, as they can introduce additional vulnerabilities. Instead, we harden and rely on the built-in tooling (for example, SELinux) to protect the platform. Red Hat uses SentinelOne and osquery for individual endpoint security, with updates made as they are available from the vendor. All third-party JavaScript libraries are downloaded and included in build images which are scanned for vulnerabilities before being published. 1.4.6. Systems development lifecycle security Red Hat follows secure development lifecycle practices. Red Hat Product Security practices are aligned with the Open Web Application Security Project (OWASP) and ISO12207:2017 wherever it is feasible. Red Hat covers OWASP project recommendations along with other secure software development practices to increase the general security posture of our products. OWASP project analysis is included in Red Hat's automated scanning, security testing, and threat models, as the OWASP project is built based on selected CWE weaknesses. Red Hat monitors weaknesses in our products to address issues before they are exploited and become vulnerabilities. For more information, see the following resources: Red Hat Software Development Life Cycle practices Security by design: Security principles and threat modeling Applications are scanned regularly and the container scan results of the product are available publicly. For example, on the Red Hat Ecosystem Catalog site, you can select a component image such as rhacs-main and click the Security tab to see the health index and the status of security updates. As part of Red Hat's policy, a support policy and maintenance plan is issued for any third-party components we depend on that go to end-of-life. 1.4.7. Software Bill of Materials Red Hat has published software bill of materials (SBOMs) files for core Red Hat offerings. 
An SBOM is a machine-readable, comprehensive inventory (manifest) of software components and dependencies with license and provenance information. SBOM files help establish reviews for procurement and audits of what is in a set of software applications and libraries. Combined with Vulnerability Exploitability eXchange (VEX), SBOMs help an organization address its vulnerability risk assessment process. Together they provide information on where a potential risk might exist (where the vulnerable artifact is included and the correlation between this artifact and components or the product), and its current status to known vulnerabilities or exploits. Red Hat, together with other vendors, is working to define the specific requirements for publishing useful SBOMs that can be correlated with Common Security Advisory Framework (CSAF)-VEX files, and inform consumers and partners about how to use this data. For now, SBOM files published by Red Hat, including SBOMs for RHACS Cloud Service, are considered to be beta versions for customer testing and are available at https://access.redhat.com/security/data/sbom/beta/spdx/ . For more detail on Red Hat's Security data, see The future of Red Hat security data . 1.4.8. Data centers and providers The following third-party providers are used by Red Hat in providing subscription support services: Flexential hosts the Raleigh Data Center, which is the primary data center used to support the Red Hat Customer Portal databases. Digital Realty hosts the Phoenix Data Center, which is the secondary backup data center supporting the Red Hat Customer Portal databases. Salesforce provides the engine behind the customer ticketing system. AWS is used to augment data center infrastructure capacity, some of which is used to support the Red Hat Customer Portal application. Akamai is used to host the Web Application Firewall and provide DDoS protection. Iron Mountain is used to handle the destruction of sensitive material. 1.5. Access control User accounts are managed with role-based access control (RBAC). See Managing RBAC in Red Hat Advanced Cluster Security for Kubernetes for more information. Red Hat site reliability engineers (SREs) have access to Central instances. Access is controlled with OpenShift RBAC. Credentials are instantly revoked upon termination. 1.5.1. Authentication provider When you create a Central instance using Red Hat Hybrid Cloud Console , authentication for the cluster administrator is configured as part of the process. Customers must manage all access to the Central instance as part of their integrated solution. For more information about the available authentication methods, see Understanding authentication providers . The default identity provider in RHACS Cloud Service is Red Hat Single Sign-On (SSO). Authorization rules are set up to provide administrator access to the user who created the RHACS Cloud Service and to users who are marked as organization administrators in Red Hat SSO. The admin login is disabled for RHACS Cloud Service by default and can only be enabled temporarily by SREs. For more information about authentication using Red Hat SSO, see Default access to the ACS Console . 1.5.2. Password management Red Hat's password policy requires the use of a complex password. 
Passwords must contain at least 14 characters and at least three of the following character classes: Base 10 digits (0 to 9) Upper case characters (A to Z) Lower case characters (a to z) Punctuation, spaces, and other characters Most systems require two-factor authentication. Red Hat follows best password practices according to NIST guidelines . 1.5.3. Remote access Access for remote support and troubleshooting is strictly controlled through implementation of the following guidelines: Strong two-factor authentication for VPN access A segregated network with management and administrative networks requiring additional authentication through a bastion host All access and management is performed over encrypted sessions Our customer support team offers Bomgar as a remote access solution for troubleshooting. Bomgar sessions are optional, must be initiated by the customer, and can be monitored and controlled. To prevent information leakage, logs are shipped to SRE through our security information and event management (SIEM) application, Splunk. 1.5.4. Regulatory compliance For the latest regulatory compliance information, see Understanding process and security for OpenShift Dedicated . 1.6. Data protection Red Hat provides data protection by using various methods, such as logging, access control, and encryption. 1.6.1. Data storage media protection To protect our data and client data from risk of theft or destruction, Red Hat employs the following methods: access logging automated account termination procedures application of the principle of least privilege Data is encrypted in transit and at rest using strong data encryption following NIST guidelines and Federal Information Processing Standards (FIPS) where possible and practical. This includes backup systems. RHACS Cloud Service encrypts data at rest within the Amazon Relational Database Service (RDS) database by using AWS-managed Key Management Services (KMS) keys. All data between the application and the database, together with data exchange between the systems, are encrypted in transit. 1.6.1.1. Data retention and destruction Records, including those containing personal data, are retained as required by law. Records not required by law or a reasonable business need are securely removed. Secure data destruction requirements are included in operating procedures, using military grade tools. In addition, staff have access to secure document destruction facilities. 1.6.1.2. Encryption Red Hat uses AWS managed keys which are rotated by AWS each year. For information on the use of keys, see AWS KMS key management . For more information about RDS, see Amazon RDS Security . 1.6.1.3. Multi-tenancy RHACS Cloud Service isolates tenants by namespace on OpenShift Container Platform. SELinux provides additional isolation. Each customer has a unique RDS instance. 1.6.1.4. Data ownership Customer data is stored in an encrypted RDS database not available on the public internet. Only Site Reliability Engineers (SREs) have access to it, and the access is audited. Every RHACS Cloud Service system comes integrated with Red Hat external SSO. Authorization rules are set up to provide administrator access to the user created the Cloud Service and to users who are marked as organization administrators in Red Hat SSO. The admin login is disabled for RHACS Cloud Service by default and can only be temporarily enabled by SREs. Red Hat collects information about the number of secured clusters connected to RHACS Cloud Service and the usage of features. 
Metadata generated by the application and stored in the RDS database is owned by the customer. Red Hat only accesses data for troubleshooting purposes and with customer permission. Red Hat access requires audited privilege escalation. Upon contract termination, Red Hat can perform a secure disk wipe upon request. However, we are unable to physically destroy media (cloud providers such as AWS do not provide this option). To secure data in case of a breach, you can perform the following actions: Disconnect all secured clusters from RHACS Cloud Service immediately using the cluster management page. Immediately disable access to the RHACS Cloud Service by using the Access Control page. Immediately delete your RHACS instance, which also deletes the RDS instance. Any AWS RDS (data store) specific access modifications would be implemented by the RHACS Cloud Service SRE engineers. 1.7. Metrics and Logging 1.7.1. Service metrics Service metrics are internal only. Red Hat provides and maintains the service at the agreed upon level. Service metrics are accessible only to authorized Red Hat personnel. For more information, see PRODUCT APPENDIX 4 RED HAT ONLINE SERVICES . 1.7.2. Customer metrics Core usage capacity metrics are available either through Subscription Watch or the Subscriptions page . 1.7.3. Service logging System logs for all components of the Red Hat Advanced Cluster Security Cloud Service (RHACS Cloud Service) are internal and available only to Red Hat personnel. Red Hat does not provide user access to component logs. For more information, see PRODUCT APPENDIX 4 RED HAT ONLINE SERVICES . 1.8. Updates and Upgrades Red Hat makes a commercially reasonable effort to notify customers prior to updates and upgrades that impact service. The decision regarding the need for a Service update to the Central instance and its timing is the sole responsibility of Red Hat. Customers have no control over when a Central service update occurs. For more information, see PRODUCT APPENDIX 4 RED HAT ONLINE SERVICES . Upgrades to the version of Red Hat Advanced Cluster Security Cloud Service (RHACS Cloud Service) are considered part of the service update. Upgrades are transparent to the customer and no connection to any update site is required. Customers are responsible for timely RHACS Secured Cluster services upgrades that are required to maintain compatibility with RHACS Cloud Service. Red Hat recommends enabling automatic upgrades for Secured Clusters that are connected to RHACS Cloud Service. See the Red Hat Advanced Cluster Security for Kubernetes Support Matrix for more information about upgrade versions. 1.9. Availability Availability and disaster avoidance are extremely important aspects of any security platform. Red Hat Advanced Cluster Security Cloud Service (RHACS Cloud Service) provides numerous protections against failures at multiple levels. To account for possible cloud provider failures, Red Hat established multiple availability zones. 1.9.1. Backup and disaster recovery The RHACS Cloud Service Disaster Recovery strategy includes backups of database and any customization. This also applies to customer data stored in the Central database. Recovery time varies based on the number of appliances and database sizes; however, because the appliances can be clustered and distributed, the RTO can be reduced upfront with proper architecture planning. 
All snapshots are created using the appropriate cloud provider snapshot APIs, encrypted and then uploaded to secure object storage, which for Amazon Web Services (AWS) is an S3 bucket. Red Hat does not commit to a Recovery Point Objective (RPO) or Recovery Time Objective (RTO). For more information, see PRODUCT APPENDIX 4 RED HAT ONLINE SERVICES . Site Reliability Engineering performs backups only as a precautionary measure. They are stored in the same region as the cluster. Customers should deploy multiple availability zone Secured Clusters with workloads that follow Kubernetes best practices to ensure high availability within a region. Disaster recovery plans are exercised annually at a minimum. A Business Continuity Management standard and guideline is in place so that the BC lifecycle is consistently followed throughout the organization. This policy includes a requirement for testing at least annually, or with major change of functional plans. Review sessions are required to be conducted after any plan exercise or activation, and plan updates are made as needed. Red Hat has generator backup systems. Our IT production systems are hosted in a Tier 3 data center facility that has recurring testing to ensure redundancy is operational. They are audited yearly to validate compliance. 1.10. Getting support for RHACS Cloud Service If you experience difficulty with a procedure described in this documentation, or with RHACS Cloud Service in general, visit the Red Hat Customer Portal . From the Customer Portal, you can perform the following actions: Search or browse through the Red Hat Knowledgebase of articles and solutions relating to Red Hat products. Submit a support case to Red Hat Support. Access other product documentation. To identify issues with your cluster, you can use Insights in RHACS Cloud Service. Insights provides details about issues and, if available, information on how to solve a problem. 1.11. Service removal You can delete RHACS Cloud Service using the default delete operations from the Red Hat Hybrid Cloud Console . Deleting the RHACS Cloud Service Central instance automatically removes all RHACS components. Deleting is not reversible. 1.12. Pricing For information about subscription fees, see PRODUCT APPENDIX 4 RED HAT ONLINE SERVICES . 1.13. Service Level Agreement For more information about the Service Level Agreements (SLAs) offered for Red Hat Advanced Cluster Security Cloud Service (RHACS Cloud Service), see PRODUCT APPENDIX 4 RED HAT ONLINE SERVICES .
null
https://docs.redhat.com/en/documentation/red_hat_advanced_cluster_security_for_kubernetes/4.5/html/rhacs_cloud_service/rhacs-cloud-service-service-description
Chapter 3. Uninstalling OpenShift AI
Chapter 3. Uninstalling OpenShift AI You can use Red Hat OpenShift Cluster Manager to safely uninstall Red Hat OpenShift AI from your OpenShift cluster. Prerequisites Credentials for OpenShift Cluster Manager ( https://console.redhat.com/openshift/ ). Administrator access to the OpenShift cluster. For AWS clusters, you have backed up the EBS volume containing your Persistent Volume Claims (PVCs). See Amazon Web Services documentation: Create Amazon EBS snapshots for more information. For GCP clusters, you have backed up the persistent disk containing your Persistent Volume Claims (PVCs). See Google Cloud documentation: Create and manage disk snapshots for more information. Procedure Log in to OpenShift Cluster Manager ( https://console.redhat.com/openshift/ ). Click Clusters . The Clusters page opens. Click the name of the cluster that hosts the instance OpenShift AI to uninstall. The Details page for the cluster opens. Click the Add-ons tab and locate the Red Hat OpenShift Data Science tile. Click Uninstall . This process takes approximately 30 minutes to complete. Do not manually delete any resources while uninstalling OpenShift AI, as this can interfere with the uninstall process. OpenShift AI is uninstalled and any persistent volume claims (PVCs) associated with your OpenShift AI instance are deleted. However, any user groups for OpenShift AI that you previously created remain on your cluster. Verification In OpenShift Cluster Manager, on the Add-ons tab for the cluster, confirm that the OpenShift Data Science tile does not show the Installed state. In your OpenShift cluster, click Home Projects and confirm that the following project namespaces are not visible: redhat-ods-applications redhat-ods-monitoring redhat-ods-operator Additional resources Amazon Web Services documentation: Create Amazon EBS snapshots Google Cloud documentation: Create and manage disk snapshots Deleting users and user resources
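If you prefer to verify the removal from the command line instead of the OpenShift console, you can check that the OpenShift AI namespaces listed above no longer exist. This is a minimal sketch and assumes you are logged in to the cluster with cluster-admin rights:

# Each command should report NotFound after the uninstall completes
oc get namespace redhat-ods-applications
oc get namespace redhat-ods-monitoring
oc get namespace redhat-ods-operator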
null
https://docs.redhat.com/en/documentation/red_hat_openshift_ai_cloud_service/1/html/uninstalling_openshift_ai_cloud_service/uninstalling-openshift-ai_install
Appendix A. BNF for SQL Grammar
Appendix A. BNF for SQL Grammar A.1. Main Entry Points callable statement ddl statement procedure body definition directly executable statement
null
https://docs.redhat.com/en/documentation/red_hat_jboss_data_virtualization/6.4/html/development_guide_volume_3_reference_material/appe-bnf_for_sql_grammar
Chapter 7. Related Information
Chapter 7. Related Information Linux System Roles upstream project Red Hat Enterprise Linux (RHEL) System Roles (Red Hat Knowledge Base Article) Installing SAP HANA or SAP S/4HANA with the RHEL System Roles for SAP (Red Hat Knowledge Base Article) RHEL System Roles for SAP v.1 RHEL System Roles for SAP v.2
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux_for_sap_solutions/9/html/red_hat_enterprise_linux_system_roles_for_sap/ref_related-information_rhel-system-roles-for-sap-9
Managing configurations by using Ansible integration
Managing configurations by using Ansible integration Red Hat Satellite 6.16 Configure Ansible integration in Satellite and use Ansible roles and playbooks to configure your hosts Red Hat Satellite Documentation Team [email protected]
null
https://docs.redhat.com/en/documentation/red_hat_satellite/6.16/html/managing_configurations_by_using_ansible_integration/index
11.2.2. Specific ifcfg Options for Linux on System z
11.2.2. Specific ifcfg Options for Linux on System z SUBCHANNELS= <read_device_bus_id> , <write_device_bus_id> , <data_device_bus_id> where <read_device_bus_id> , <write_device_bus_id> , and <data_device_bus_id> are the three device bus IDs representing a network device. PORTNAME= myname where myname is the Open Systems Adapter (OSA) portname or LAN Channel Station (LCS) portnumber. CTCPROT= answer where answer is one of the following: 0 - Compatibility mode, TCP/IP for Virtual Machines (used with non-Linux peers other than IBM S/390 and IBM System z operating systems). This is the default mode. 1 - Extended mode, used for Linux-to-Linux peers. 3 - Compatibility mode for S/390 and IBM System z operating systems. This directive is used in conjunction with the NETTYPE directive. It specifies the CTC protocol for NETTYPE='ctc'. The default is 0. OPTIONS=' answer ' where answer is a quoted string of any valid sysfs attributes and their values. The Red Hat Enterprise Linux installer currently uses this to configure the layer mode (layer2) and the relative port number (portno) of QETH devices. For example:
[ "OPTIONS='layer2=1 portno=0'" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/deployment_guide/ch11s02s02
Chapter 4. Using Quality of Service (QoS) policies to manage data traffic
Chapter 4. Using Quality of Service (QoS) policies to manage data traffic You can offer varying service levels for VM instances by using quality of service (QoS) policies to apply rate limits to egress and ingress traffic on Red Hat OpenStack Platform (RHOSP) networks. You can apply QoS policies to individual ports, or apply QoS policies to a project network, where ports with no specific policy attached inherit the policy. Note Internal network owned ports, such as DHCP and internal router ports, are excluded from network policy application. You can apply, modify, or remove QoS policies dynamically. However, for guaranteed minimum bandwidth QoS policies, you can only apply modifications when there are no instances that use any of the ports the policy is assigned to. 4.1. Controlling minimum bandwidth by using QoS policies For the Red Hat OpenStack Services on OpenShift (RHOSO) Networking service (neutron), a guaranteed minimum bandwidth QoS rule can be enforced in two distinct contexts: Networking service back-end enforcement and resource allocation scheduling enforcement. The network back end, ML2/OVN or ML2/SR-IOV, attempts to guarantee that each port on which the rule is applied has no less than the specified network bandwidth. When you use resource allocation scheduling bandwidth enforcement, the Compute service (nova) only places VM instances on hosts that support the minimum bandwidth. You can apply QoS minimum bandwidth rules using Networking service back-end enforcement, resource allocation scheduling enforcement, or both. The following table identifies the Modular Layer 2 (ML2) mechanism drivers that support minimum bandwidth QoS policies: Table 4.1. ML2 mechanism drivers that support minimum bandwidth QoS ML2 mechanism driver Agent VNIC types ML2/OVN (Not applicable) normal ML2/SR-IOV sriovnicswitch direct Additional resources Section 4.1.1, "Using Networking service back-end enforcement to enforce minimum bandwidth" Section 4.1.2, "Scheduling instances by using minimum bandwidth QoS policies" 4.1.1. Using Networking service back-end enforcement to enforce minimum bandwidth You can guarantee a minimum bandwidth for network traffic for ports by applying Red Hat OpenStack Services on OpenShift (RHOSO) quality of service (QoS) policies to the ports. These ports must be backed by a flat or VLAN physical network. Note Currently, the Modular Layer 2 plug-in with the Open Virtual Network mechanism driver (ML2/OVN) does not support minimum bandwidth QoS rules. Prerequisites Your administrator has enabled the Networking service with the qos service plug-in. (The plug-in is loaded by default.) The administrator has created a project for you and has provided you with a clouds.yaml file for you to access the cloud. The python-openstackclient package resides on your workstation. Do not mix ports with and without bandwidth guarantees on the same physical interface, because this might cause denial of necessary resources (starvation) to the ports without a guarantee. Tip Create host aggregates to separate ports with bandwidth guarantees from those ports without bandwidth guarantees. Procedure Confirm that the system OS_CLOUD variable is set for your cloud: USD echo USDOS_CLOUD my_cloud Reset the variable if necessary: USD export OS_CLOUD=my_other_cloud As an alternative, you can specify the cloud name by adding the --os-cloud <cloud_name> option each time you run an openstack command. 
Confirm that the qos service plug-in is loaded in the Networking service: If the qos service plug-in is not loaded, then you receive a ResourceNotFound error, and you must load the qos services plug-in before you can continue. For more information, see your RHOSO administrator. Identify the ID of the project you want to create the QoS policy for: Sample output +----------------------------------+----------+ | ID | Name | +----------------------------------+----------+ | 4b0b98f8c6c040f38ba4f7146e8680f5 | auditors | | 519e6344f82e4c079c8e2eabb690023b | services | | 80bf5732752a41128e612fe615c886c6 | demo | | 98a2f53c20ce4d50a40dac4a38016c69 | admin | +----------------------------------+----------+ Using the project ID from the step, create a QoS policy for the project. Example In this example, a QoS policy named guaranteed_min_bw is created for the admin project: USD openstack network qos policy create --share \ --project 98a2f53c20ce4d50a40dac4a38016c69 guaranteed_min_bw Configure the rules for the policy. Example In this example, QoS rules for ingress and egress with a minimum bandwidth of 40000000 kbps are created for the policy named guaranteed_min_bw : USD openstack network qos rule create \ --type minimum-bandwidth --min-kbps 40000000 \ --ingress guaranteed_min_bw USD openstack network qos rule create \ --type minimum-bandwidth --min-kbps 40000000 \ --egress guaranteed_min_bw Configure a port to apply the policy to. Example In this example, the guaranteed_min_bw policy is applied to port ID, 56x9aiw1-2v74-144x-c2q8-ed8w423a6s12 : USD openstack port set --qos-policy guaranteed_min_bw \ 56x9aiw1-2v74-144x-c2q8-ed8w423a6s12 Verification ML2/SR-IOV Using root access, log in to the Compute node, and show the details of the virtual functions that are held in the physical function. Example Sample output Additional resources network qos policy create in the Command line interface reference network qos rule create in the Command line interface reference port set in the Command line interface reference 4.1.2. Scheduling instances by using minimum bandwidth QoS policies You can apply a minimum bandwidth QoS policy to a port to guarantee that the host on which its Red Hat OpenStack Services on OpenShift (RHOSO) VM instance is spawned has a minimum network bandwidth. Prerequisites Your administrator has enabled the Networking service with the qos and the placement service plug-ins. The qos service plug-in is loaded by default. The administrator has created a project for you and has provided you with a clouds.yaml file for you to access the cloud. The python-openstackclient package resides on your workstation. The Networking service must support the following API extensions: agent-resources-synced port-resource-request qos-bw-minimum-ingress You must use the ML2/OVN or ML2/SR-IOV mechanism drivers. You can only modify a minimum bandwidth QoS policy when there are no instances using any of the ports the policy is assigned to. The Networking service cannot update the Placement API usage information if a port is bound. The Placement service must support microversion 1.29. The Compute service (nova) must support microversion 2.72. Procedure Confirm that the system OS_CLOUD variable is set for your cloud: USD echo USDOS_CLOUD my_cloud Reset the variable if necessary: USD export OS_CLOUD=my_other_cloud As an alternative, you can specify the cloud name by adding the --os-cloud <cloud_name> option each time you run an openstack command. 
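As a quick way to confirm the API extensions listed in the prerequisites above, you can filter the Networking service extension list. This sketch assumes the standard column ( -c ) and format ( -f ) options of the openstack client; if all three aliases are printed, the required extensions are present:

$ openstack extension list --network -c Alias -f value | grep -E 'agent-resources-synced|port-resource-request|qos-bw-minimum-ingress'
agent-resources-synced
port-resource-request
qos-bw-minimum-ingress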
Confirm that the qos service plug-in is loaded in the Networking service: USD openstack network qos policy list If the qos service plug-in is not loaded, then you receive a ResourceNotFound error, and you must load the qos services plug-in before you can continue. For more information, see your RHOSO administrator. Identify the ID of the project you want to create the QoS policy for: USD openstack project list Sample output +----------------------------------+----------+ | ID | Name | +----------------------------------+----------+ | 4b0b98f8c6c040f38ba4f7146e8680f5 | auditors | | 519e6344f82e4c079c8e2eabb690023b | services | | 80bf5732752a41128e612fe615c886c6 | demo | | 98a2f53c20ce4d50a40dac4a38016c69 | admin | +----------------------------------+----------+ Using the project ID from the step, create a QoS policy for the project. Example In this example, a QoS policy named guaranteed_min_bw is created for the admin project: USD openstack network qos policy create --share \ --project 98a2f53c20ce4d50a40dac4a38016c69 guaranteed_min_bw Configure the rules for the policy. Example In this example, QoS rules for ingress and egress with a minimum bandwidth of 40000000 kbps are created for the policy named guaranteed_min_bw : USD openstack network qos rule create \ --type minimum-bandwidth --min-kbps 40000000 \ --ingress guaranteed_min_bw USD openstack network qos rule create \ --type minimum-bandwidth --min-kbps 40000000 \ --egress guaranteed_min_bw Configure a port to apply the policy to. Example In this example, the guaranteed_min_bw policy is applied to port ID, 56x9aiw1-2v74-144x-c2q8-ed8w423a6s12 : USD openstack port set --qos-policy guaranteed_min_bw \ 56x9aiw1-2v74-144x-c2q8-ed8w423a6s12 Additional resources network qos policy create in the Command line interface reference network qos rule create in the Command line interface reference port set in the Command line interface reference 4.2. Limiting network traffic by using QoS policies You can create a Red Hat OpenStack Services on OpenShift (RHOSO) Networking service (neutron) quality of service (QoS) policy that limits the bandwidth on your RHOSP networks, ports, floating IPs, or gateway IPs (technology preview) and drops any traffic that exceeds the specified rate. Prerequisites Your administrator has enabled the Networking service with the qos service plug-in. (The plug-in is loaded by default.) The administrator has created a project for you and has provided you with a clouds.yaml file for you to access the cloud. The python-openstackclient package resides on your workstation. Procedure Confirm that the system OS_CLOUD variable is set for your cloud: USD echo USDOS_CLOUD my_cloud Reset the variable if necessary: USD export OS_CLOUD=my_other_cloud As an alternative, you can specify the cloud name by adding the --os-cloud <cloud_name> option each time you run an openstack command. Confirm that the qos service plug-in is loaded in the Networking service: USD openstack network qos policy list If the qos service plug-in is not loaded, then you receive a ResourceNotFound error, and you must load the qos services plug-in before you can continue. For more information, see your RHOSO administrator. 
Identify the ID of the project you want to create the QoS policy for: USD openstack project list Sample output +----------------------------------+----------+ | ID | Name | +----------------------------------+----------+ | 4b0b98f8c6c040f38ba4f7146e8680f5 | auditors | | 519e6344f82e4c079c8e2eabb690023b | services | | 80bf5732752a41128e612fe615c886c6 | demo | | 98a2f53c20ce4d50a40dac4a38016c69 | admin | +----------------------------------+----------+ Using the project ID from the step, create a QoS policy for the project. Example In this example, a QoS policy named bw-limiter is created for the admin project: USD openstack network qos policy create --share --project 98a2f53c20ce4d50a40dac4a38016c69 bw-limiter Configure the rules for the policy. Note You can add more than one rule to a policy, as long as the type or direction of each rule is different. For example, You can specify two bandwidth-limit rules, one with egress and one with ingress direction. Example In this example, QoS ingress and egress rules are created for the policy named bw-limiter with a bandwidth limit of 50000 kbps and a maximum burst size of 50000 kbps: USD openstack network qos rule create --type bandwidth-limit \ --max-kbps 50000 --max-burst-kbits 50000 --ingress bw-limiter USD openstack network qos rule create --type bandwidth-limit \ --max-kbps 50000 --max-burst-kbits 50000 --egress bw-limiter You can create a port with a policy attached to it, or attach a policy to a pre-existing port. Example - create a port with a policy attached In this example, the policy bw-limiter is associated with port port2 : USD openstack port create --qos-policy bw-limiter --network private port2 Sample output +-----------------------+--------------------------------------------------+ | Field | Value | +-----------------------+--------------------------------------------------+ | admin_state_up | UP | | allowed_address_pairs | | | binding_host_id | | | binding_profile | | | binding_vif_details | | | binding_vif_type | unbound | | binding_vnic_type | normal | | created_at | 2024-09-19T19:20:24Z | | data_plane_status | None | | description | | | device_id | | | device_owner | | | dns_assignment | None | | dns_name | None | | extra_dhcp_opts | | | fixed_ips | ip_address='192.0.2.210', subnet_id='292f8c-...' | | id | f51562ee-da8d-42de-9578-f6f5cb248226 | | ip_address | None | | mac_address | fa:16:3e:d9:f2:ba | | name | port2 | | network_id | 55dc2f70-0f92-4002-b343-ca34277b0234 | | option_name | None | | option_value | None | | port_security_enabled | False | | project_id | 98a2f53c20ce4d50a40dac4a38016c69 | | qos_policy_id | 8491547e-add1-4c6c-a50e-42121237256c | | revision_number | 6 | | security_group_ids | 0531cc1a-19d1-4cc7-ada5-49f8b08245be | | status | DOWN | | subnet_id | None | | tags | [] | | trunk_details | None | | updated_at | 2024-09-19T19:23:00Z | +-----------------------+--------------------------------------------------+ Example - attach a policy to a pre-existing port In this example, the policy bw-limiter is associated with port1 : USD openstack port set --qos-policy bw-limiter port1 Additional resources network qos rule create in the Command line interface reference network qos rule set in the Command line interface reference network qos rule delete in the Command line interface reference network qos rule list in the Command line interface reference 4.3. 
Prioritizing network traffic by using DSCP marking QoS policies You can use differentiated services code point (DSCP) to implement quality of service (QoS) policies on your Red Hat OpenStack Services on OpenShift (RHOSO) network by embedding relevant values in the IP headers. The RHOSP Networking service (neutron) QoS policies can use DSCP marking to manage only egress traffic on neutron ports and networks. Prerequisites Your administrator has enabled the Networking service with the qos service plug-in. (The plug-in is loaded by default.) The administrator has created a project for you and has provided you with a clouds.yaml file for you to access the cloud. The python-openstackclient package resides on your workstation. Procedure Confirm that the system OS_CLOUD variable is set for your cloud: USD echo USDOS_CLOUD my_cloud Reset the variable if necessary: USD export OS_CLOUD=my_other_cloud As an alternative, you can specify the cloud name by adding the --os-cloud <cloud_name> option each time you run an openstack command. Confirm that the qos service plug-in is loaded in the Networking service: USD openstack network qos policy list If the qos service plug-in is not loaded, then you receive a ResourceNotFound error, and you must configure the Networking service before you can continue. For more information, see your RHOSO administrator. Identify the ID of the project you want to create the QoS policy for: USD openstack project list Sample output +----------------------------------+----------+ | ID | Name | +----------------------------------+----------+ | 4b0b98f8c6c040f38ba4f7146e8680f5 | auditors | | 519e6344f82e4c079c8e2eabb690023b | services | | 80bf5732752a41128e612fe615c886c6 | demo | | 98a2f53c20ce4d50a40dac4a38016c69 | admin | +----------------------------------+----------+ Using the project ID from the step, create a QoS policy for the project. Example In this example, a QoS policy named qos-web-servers is created for the admin project: openstack network qos policy create --project 98a2f53c20ce4d50a40dac4a38016c69 qos-web-servers Create a DSCP rule and apply it to a policy. Example In this example, a DSCP rule is created using DSCP mark 18 and is applied to the qos-web-servers policy: openstack network qos rule create --type dscp-marking --dscp-mark 18 qos-web-servers Sample output Created a new dscp_marking_rule: +-----------+--------------------------------------+ | Field | Value | +-----------+--------------------------------------+ | dscp_mark | 18 | | id | d7f976ec-7fab-4e60-af70-f59bf88198e6 | +-----------+--------------------------------------+ You can change the DSCP value assigned to a rule. Example In this example, the DSCP mark value is changed to 22 for the rule, d7f976ec-7fab-4e60-af70-f59bf88198e6 , in the qos-web-servers policy: USD openstack network qos rule set --dscp-mark 22 qos-web-servers d7f976ec-7fab-4e60-af70-f59bf88198e6 You can delete a DSCP rule. Example In this example, the DSCP rule, d7f976ec-7fab-4e60-af70-f59bf88198e6 , in the qos-web-servers policy is deleted: USD openstack network qos rule delete qos-web-servers d7f976ec-7fab-4e60-af70-f59bf88198e6 Verification Confirm that the DSCP rule is applied to the QoS policy. 
Example In this example, the DSCP rule, d7f976ec-7fab-4e60-af70-f59bf88198e6 , is applied to the QoS policy, qos-web-servers : $ openstack network qos rule list qos-web-servers Sample output +-----------+--------------------------------------+ | dscp_mark | id | +-----------+--------------------------------------+ | 18 | d7f976ec-7fab-4e60-af70-f59bf88198e6 | +-----------+--------------------------------------+ Additional resources network qos rule create in the Command line interface reference network qos rule set in the Command line interface reference network qos rule delete in the Command line interface reference network qos rule list in the Command line interface reference 4.4. Applying QoS policies to projects by using Networking service RBAC With the Red Hat OpenStack Platform (RHOSP) Networking service (neutron), you can add role-based access control (RBAC) policies for quality of service (QoS) policies. As a result, you can apply QoS policies to individual projects. Prerequisites You must have one or more QoS policies available. Procedure Create an RHOSP Networking service RBAC policy associated with a specific QoS policy, and assign it to a specific project: Example For example, you might have a QoS policy that allows for lower-priority network traffic, named bw-limiter . Using an RHOSP Networking service RBAC policy, you can apply the QoS policy to a specific project: Additional resources network rbac create in the Command line interface reference Section 4.1.1, "Using Networking service back-end enforcement to enforce minimum bandwidth" Section 4.1.2, "Scheduling instances by using minimum bandwidth QoS policies" Section 4.2, "Limiting network traffic by using QoS policies" Section 4.3, "Prioritizing network traffic by using DSCP marking QoS policies"
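To confirm that the RBAC policy created in Section 4.4 exists and targets the intended project, you can list and inspect RBAC entries for QoS policies. This verification sketch assumes the --type filter of the network rbac list command; replace <rbac_id> with the ID returned by the list command:

$ openstack network rbac list --type qos_policy
$ openstack network rbac show <rbac_id>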
[ "dnf list installed python-openstackclient", "echo USDOS_CLOUD my_cloud", "export OS_CLOUD=my_other_cloud", "openstack network qos policy list", "openstack project list", "+----------------------------------+----------+ | ID | Name | +----------------------------------+----------+ | 4b0b98f8c6c040f38ba4f7146e8680f5 | auditors | | 519e6344f82e4c079c8e2eabb690023b | services | | 80bf5732752a41128e612fe615c886c6 | demo | | 98a2f53c20ce4d50a40dac4a38016c69 | admin | +----------------------------------+----------+", "openstack network qos policy create --share --project 98a2f53c20ce4d50a40dac4a38016c69 guaranteed_min_bw", "openstack network qos rule create --type minimum-bandwidth --min-kbps 40000000 --ingress guaranteed_min_bw openstack network qos rule create --type minimum-bandwidth --min-kbps 40000000 --egress guaranteed_min_bw", "openstack port set --qos-policy guaranteed_min_bw 56x9aiw1-2v74-144x-c2q8-ed8w423a6s12", "ip -details link show enp4s0f1", "50: enp4s0f1: <BROADCAST,MULTICAST,SLAVE,UP,LOWER_UP> mtu 9000 qdisc mq master mx-bond state UP mode DEFAULT group default qlen 1000 link/ether 98:03:9b:9d:73:74 brd ff:ff:ff:ff:ff:ff permaddr 98:03:9b:9d:73:75 promiscuity 0 minmtu 68 maxmtu 9978 bond_slave state BACKUP mii_status UP link_failure_count 0 perm_hwaddr 98:03:9b:9d:73:75 queue_id 0 addrgenmode eui64 numtxqueues 320 numrxqueues 40 gso_max_size 65536 gso_max_segs 65535 portname p1 switchid 74739d00039b0398 vf 0 link/ether 00:00:00:00:00:00 brd ff:ff:ff:ff:ff:ff, spoof checking off, link-state disable, trust off, query_rss off vf 1 link/ether 00:00:00:00:00:00 brd ff:ff:ff:ff:ff:ff, spoof checking off, link-state disable, trust off, query_rss off vf 2 link/ether 00:00:00:00:00:00 brd ff:ff:ff:ff:ff:ff, spoof checking off, link-state disable, trust off, query_rss off vf 3 link/ether 00:00:00:00:00:00 brd ff:ff:ff:ff:ff:ff, spoof checking off, link-state disable, trust off, query_rss off vf 4 link/ether 00:00:00:00:00:00 brd ff:ff:ff:ff:ff:ff, spoof checking off, link-state disable, trust off, query_rss off vf 5 link/ether 00:00:00:00:00:00 brd ff:ff:ff:ff:ff:ff, spoof checking off, link-state disable, trust off, query_rss off vf 6 link/ether 00:00:00:00:00:00 brd ff:ff:ff:ff:ff:ff, spoof checking off, link-state disable, trust off, query_rss off vf 7 link/ether 00:00:00:00:00:00 brd ff:ff:ff:ff:ff:ff, spoof checking off, link-state disable, trust off, query_rss off vf 8 link/ether fa:16:3e:2a:d2:7f brd ff:ff:ff:ff:ff:ff, tx rate 999 (Mbps), max_tx_rate 999Mbps, spoof checking off, link-state disable, trust off, query_rss off vf 9 link/ether 00:00:00:00:00:00 brd ff:ff:ff:ff:ff:ff, spoof checking off, link-state disable, trust off, query_rss off", "dnf list installed python-openstackclient", "echo USDOS_CLOUD my_cloud", "export OS_CLOUD=my_other_cloud", "openstack network qos policy list", "openstack project list", "+----------------------------------+----------+ | ID | Name | +----------------------------------+----------+ | 4b0b98f8c6c040f38ba4f7146e8680f5 | auditors | | 519e6344f82e4c079c8e2eabb690023b | services | | 80bf5732752a41128e612fe615c886c6 | demo | | 98a2f53c20ce4d50a40dac4a38016c69 | admin | +----------------------------------+----------+", "openstack network qos policy create --share --project 98a2f53c20ce4d50a40dac4a38016c69 guaranteed_min_bw", "openstack network qos rule create --type minimum-bandwidth --min-kbps 40000000 --ingress guaranteed_min_bw openstack network qos rule create --type minimum-bandwidth --min-kbps 40000000 --egress guaranteed_min_bw", "openstack 
port set --qos-policy guaranteed_min_bw 56x9aiw1-2v74-144x-c2q8-ed8w423a6s12", "dnf list installed python-openstackclient", "echo USDOS_CLOUD my_cloud", "export OS_CLOUD=my_other_cloud", "openstack network qos policy list", "openstack project list", "+----------------------------------+----------+ | ID | Name | +----------------------------------+----------+ | 4b0b98f8c6c040f38ba4f7146e8680f5 | auditors | | 519e6344f82e4c079c8e2eabb690023b | services | | 80bf5732752a41128e612fe615c886c6 | demo | | 98a2f53c20ce4d50a40dac4a38016c69 | admin | +----------------------------------+----------+", "openstack network qos policy create --share --project 98a2f53c20ce4d50a40dac4a38016c69 bw-limiter", "openstack network qos rule create --type bandwidth-limit --max-kbps 50000 --max-burst-kbits 50000 --ingress bw-limiter openstack network qos rule create --type bandwidth-limit --max-kbps 50000 --max-burst-kbits 50000 --egress bw-limiter", "openstack port create --qos-policy bw-limiter --network private port2", "+-----------------------+--------------------------------------------------+ | Field | Value | +-----------------------+--------------------------------------------------+ | admin_state_up | UP | | allowed_address_pairs | | | binding_host_id | | | binding_profile | | | binding_vif_details | | | binding_vif_type | unbound | | binding_vnic_type | normal | | created_at | 2024-09-19T19:20:24Z | | data_plane_status | None | | description | | | device_id | | | device_owner | | | dns_assignment | None | | dns_name | None | | extra_dhcp_opts | | | fixed_ips | ip_address='192.0.2.210', subnet_id='292f8c-...' | | id | f51562ee-da8d-42de-9578-f6f5cb248226 | | ip_address | None | | mac_address | fa:16:3e:d9:f2:ba | | name | port2 | | network_id | 55dc2f70-0f92-4002-b343-ca34277b0234 | | option_name | None | | option_value | None | | port_security_enabled | False | | project_id | 98a2f53c20ce4d50a40dac4a38016c69 | | qos_policy_id | 8491547e-add1-4c6c-a50e-42121237256c | | revision_number | 6 | | security_group_ids | 0531cc1a-19d1-4cc7-ada5-49f8b08245be | | status | DOWN | | subnet_id | None | | tags | [] | | trunk_details | None | | updated_at | 2024-09-19T19:23:00Z | +-----------------------+--------------------------------------------------+", "openstack port set --qos-policy bw-limiter port1", "dnf list installed python-openstackclient", "echo USDOS_CLOUD my_cloud", "export OS_CLOUD=my_other_cloud", "openstack network qos policy list", "openstack project list", "+----------------------------------+----------+ | ID | Name | +----------------------------------+----------+ | 4b0b98f8c6c040f38ba4f7146e8680f5 | auditors | | 519e6344f82e4c079c8e2eabb690023b | services | | 80bf5732752a41128e612fe615c886c6 | demo | | 98a2f53c20ce4d50a40dac4a38016c69 | admin | +----------------------------------+----------+", "openstack network qos policy create --project 98a2f53c20ce4d50a40dac4a38016c69 qos-web-servers", "openstack network qos rule create --type dscp-marking --dscp-mark 18 qos-web-servers", "Created a new dscp_marking_rule: +-----------+--------------------------------------+ | Field | Value | +-----------+--------------------------------------+ | dscp_mark | 18 | | id | d7f976ec-7fab-4e60-af70-f59bf88198e6 | +-----------+--------------------------------------+", "openstack network qos rule set --dscp-mark 22 qos-web-servers d7f976ec-7fab-4e60-af70-f59bf88198e6", "openstack network qos rule delete qos-web-servers d7f976ec-7fab-4e60-af70-f59bf88198e6", "openstack network qos rule list qos-web-servers", 
"+-----------+--------------------------------------+ | dscp_mark | id | +-----------+--------------------------------------+ | 18 | d7f976ec-7fab-4e60-af70-f59bf88198e6 | +-----------+--------------------------------------+", "openstack network rbac create --type qos_policy --target-project <project_name | project_ID> --action access_as_shared <QoS_policy_name | QoS_policy_ID>", "openstack network rbac create --type qos_policy --target-project 80bf5732752a41128e612fe615c886c6 --action access_as_shared bw-limiter" ]
https://docs.redhat.com/en/documentation/red_hat_openstack_services_on_openshift/18.0/html/managing_networking_resources/config-qos-policies_rhoso-mngnet
Chapter 7. Creating an OAuth application in GitHub
Chapter 7. Creating an OAuth application in GitHub The following sections describe how to authorize Red Hat Quay to integrate with GitHub by creating an OAuth application. This allows Red Hat Quay to access GitHub repositories on behalf of a user. OAuth integration with GitHub is primarily used for features like automated builds, where Red Hat Quay can monitor specific GitHub repositories for changes such as commits or pull requests, and trigger container image builds when those changes are made. 7.1. Create new GitHub application Use the following procedure to create an OAuth application in GitHub. Procedure Log in to GitHub Enterprise . In the navigation pane, select your username → Your organizations . In the navigation pane, select Applications → Developer Settings . In the navigation pane, click OAuth Apps → New OAuth App . The OAuth application registration page opens. Enter a name for the application in the Application name textbox. In the Homepage URL textbox, enter your Red Hat Quay URL. Note If you are using public GitHub, the Homepage URL entered must be accessible by your users. It can still be an internal URL. In the Authorization callback URL textbox, enter https://<RED_HAT_QUAY_URL>/oauth2/github/callback . Click Register application to save your settings. When the new application's summary is shown, record the Client ID and the Client Secret shown for the new application.
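The recorded Client ID and Client Secret are later entered into the Red Hat Quay configuration, either with the configuration tool or directly in config.yaml. The excerpt below is only an illustrative sketch: the GITHUB_TRIGGER_CONFIG stanza, the endpoint URL, and the placeholder values are assumptions, and the authoritative field names are listed in the Red Hat Quay configuration guide for your version:

$ cat config.yaml
...
GITHUB_TRIGGER_CONFIG:
    GITHUB_ENDPOINT: https://github.example.com/
    CLIENT_ID: <client_id_recorded_above>
    CLIENT_SECRET: <client_secret_recorded_above>
...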
null
https://docs.redhat.com/en/documentation/red_hat_quay/3.13/html/builders_and_image_automation/github-app
Chapter 10. Managing Services with systemd
Chapter 10. Managing Services with systemd 10.1. Introduction to systemd Systemd is a system and service manager for Linux operating systems. It is designed to be backwards compatible with SysV init scripts, and provides a number of features such as parallel startup of system services at boot time, on-demand activation of daemons, or dependency-based service control logic. In Red Hat Enterprise Linux 7, systemd replaces Upstart as the default init system. Systemd introduces the concept of systemd units . These units are represented by unit configuration files located in one of the directories listed in Table 10.2, "Systemd Unit Files Locations" , and encapsulate information about system services, listening sockets, and other objects that are relevant to the init system. For a complete list of available systemd unit types, see Table 10.1, "Available systemd Unit Types" . Table 10.1. Available systemd Unit Types Unit Type File Extension Description Service unit .service A system service. Target unit .target A group of systemd units. Automount unit .automount A file system automount point. Device unit .device A device file recognized by the kernel. Mount unit .mount A file system mount point. Path unit .path A file or directory in a file system. Scope unit .scope An externally created process. Slice unit .slice A group of hierarchically organized units that manage system processes. Snapshot unit .snapshot A saved state of the systemd manager. Socket unit .socket An inter-process communication socket. Swap unit .swap A swap device or a swap file. Timer unit .timer A systemd timer. Table 10.2. Systemd Unit Files Locations Directory Description /usr/lib/systemd/system/ Systemd unit files distributed with installed RPM packages. /run/systemd/system/ Systemd unit files created at run time. This directory takes precedence over the directory with installed service unit files. /etc/systemd/system/ Systemd unit files created by systemctl enable as well as unit files added for extending a service. This directory takes precedence over the directory with runtime unit files. Overriding the Default systemd Configuration Using system.conf The default configuration of systemd is defined during the compilation and it can be found in systemd configuration file at /etc/systemd/system.conf . Use this file if you want to deviate from those defaults and override selected default values for systemd units globally. For example, to override the default value of the timeout limit, which is set to 90 seconds, use the DefaultTimeoutStartSec parameter to input the required value in seconds. See also Example 10.21, "Changing the timeout limit" . 10.1.1. Main Features In Red Hat Enterprise Linux 7, the systemd system and service manager provides the following main features: Socket-based activation - At boot time, systemd creates listening sockets for all system services that support this type of activation, and passes the sockets to these services as soon as they are started. This not only allows systemd to start services in parallel, but also makes it possible to restart a service without losing any message sent to it while it is unavailable: the corresponding socket remains accessible and all messages are queued. Systemd uses socket units for socket-based activation. Bus-based activation - System services that use D-Bus for inter-process communication can be started on-demand the first time a client application attempts to communicate with them. Systemd uses D-Bus service files for bus-based activation. 
Device-based activation - System services that support device-based activation can be started on-demand when a particular type of hardware is plugged in or becomes available. Systemd uses device units for device-based activation. Path-based activation - System services that support path-based activation can be started on-demand when a particular file or directory changes its state. Systemd uses path units for path-based activation. Mount and automount point management - Systemd monitors and manages mount and automount points. Systemd uses mount units for mount points and automount units for automount points. Aggressive parallelization - Because of the use of socket-based activation, systemd can start system services in parallel as soon as all listening sockets are in place. In combination with system services that support on-demand activation, parallel activation significantly reduces the time required to boot the system. Transactional unit activation logic - Before activating or deactivating a unit, systemd calculates its dependencies, creates a temporary transaction, and verifies that this transaction is consistent. If a transaction is inconsistent, systemd automatically attempts to correct it and remove non-essential jobs from it before reporting an error. Backwards compatibility with SysV init - Systemd supports SysV init scripts as described in the Linux Standard Base Core Specification , which eases the upgrade path to systemd service units. 10.1.2. Compatibility Changes The systemd system and service manager is designed to be mostly compatible with SysV init and Upstart. The following are the most notable compatibility changes with regards to the major release of the Red Hat Enterprise Linux system: Systemd has only limited support for runlevels. It provides a number of target units that can be directly mapped to these runlevels and for compatibility reasons, it is also distributed with the earlier runlevel command. Not all systemd targets can be directly mapped to runlevels, however, and as a consequence, this command might return N to indicate an unknown runlevel. It is recommended that you avoid using the runlevel command if possible. For more information about systemd targets and their comparison with runlevels, see Section 10.3, "Working with systemd Targets" . The systemctl utility does not support custom commands. In addition to standard commands such as start , stop , and status , authors of SysV init scripts could implement support for any number of arbitrary commands in order to provide additional functionality. For example, the init script for iptables in Red Hat Enterprise Linux 6 could be executed with the panic command, which immediately enabled panic mode and reconfigured the system to start dropping all incoming and outgoing packets. This is not supported in systemd and the systemctl only accepts documented commands. For more information about the systemctl utility and its comparison with the earlier service utility, see Section 10.2, "Managing System Services" . The systemctl utility does not communicate with services that have not been started by systemd. When systemd starts a system service, it stores the ID of its main process in order to keep track of it. The systemctl utility then uses this PID to query and manage the service. Consequently, if a user starts a particular daemon directly on the command line, systemctl is unable to determine its current status or stop it. Systemd stops only running services. 
Previously, when the shutdown sequence was initiated, Red Hat Enterprise Linux 6 and earlier releases of the system used symbolic links located in the /etc/rc0.d/ directory to stop all available system services regardless of their status. With systemd, only running services are stopped on shutdown. System services are unable to read from the standard input stream. When systemd starts a service, it connects its standard input to /dev/null to prevent any interaction with the user. System services do not inherit any context (such as the HOME and PATH environment variables) from the invoking user and their session. Each service runs in a clean execution context. When loading a SysV init script, systemd reads dependency information encoded in the Linux Standard Base (LSB) header and interprets it at run time. All operations on service units are subject to a default timeout of 5 minutes to prevent a malfunctioning service from freezing the system. This value is hardcoded for services that are generated from initscripts and cannot be changed. However, individual configuration files can be used to specify a longer timeout value per service, see Example 10.21, "Changing the timeout limit" For a detailed list of compatibility changes introduced with systemd, see the Migration Planning Guide for Red Hat Enterprise Linux 7. 10.2. Managing System Services Note To expand your expertise, you might also be interested in the Red Hat System Administration II (RH134) training course. versions of Red Hat Enterprise Linux, which were distributed with SysV init or Upstart, used init scripts located in the /etc/rc.d/init.d/ directory. These init scripts were typically written in Bash, and allowed the system administrator to control the state of services and daemons in their system. In Red Hat Enterprise Linux 7, these init scripts have been replaced with service units . Service units end with the .service file extension and serve a similar purpose as init scripts. To view, start, stop, restart, enable, or disable system services, use the systemctl command as described in Table 10.3, "Comparison of the service Utility with systemctl" , Table 10.4, "Comparison of the chkconfig Utility with systemctl" , and further in this section. The service and chkconfig commands are still available in the system and work as expected, but are only included for compatibility reasons and should be avoided. Table 10.3. Comparison of the service Utility with systemctl service systemctl Description service name start systemctl start name .service Starts a service. service name stop systemctl stop name .service Stops a service. service name restart systemctl restart name .service Restarts a service. service name condrestart systemctl try-restart name .service Restarts a service only if it is running. service name reload systemctl reload name .service Reloads configuration. service name status systemctl status name .service systemctl is-active name .service Checks if a service is running. service --status-all systemctl list-units --type service --all Displays the status of all services. Table 10.4. Comparison of the chkconfig Utility with systemctl chkconfig systemctl Description chkconfig name on systemctl enable name .service Enables a service. chkconfig name off systemctl disable name .service Disables a service. chkconfig --list name systemctl status name .service systemctl is-enabled name .service Checks if a service is enabled. 
chkconfig --list systemctl list-unit-files --type service Lists all services and checks if they are enabled. chkconfig --list systemctl list-dependencies --after Lists services that are ordered to start before the specified unit. chkconfig --list systemctl list-dependencies --before Lists services that are ordered to start after the specified unit. Specifying Service Units For clarity, all command examples in the rest of this section use full unit names with the .service file extension, for example: However, the file extension can be omitted, in which case the systemctl utility assumes the argument is a service unit. The following command is equivalent to the one above: Additionally, some units have alias names. Those names can have shorter names than units, which can be used instead of the actual unit names. To find all aliases that can be used for a particular unit, use: Behavior of systemctl in a chroot Environment If you change the root directory using the chroot command, most systemctl commands refuse to perform any action. The reason for this is that the systemd process and the user that used the chroot command do not have the same view of the filesystem. This happens, for example, when systemctl is invoked from a kickstart file. The exception to this are unit file commands such as the systemctl enable and systemctl disable commands. These commands do not need a running system and do not affect running processes, but they do affect unit files. Therefore, you can run these commands even in chroot environment. For example, to enable the httpd service on a system under the /srv/website1/ directory: 10.2.1. Listing Services To list all currently loaded service units, type the following at a shell prompt: For each service unit file, this command displays its full name ( UNIT ) followed by a note whether the unit file has been loaded ( LOAD ), its high-level ( ACTIVE ) and low-level ( SUB ) unit file activation state, and a short description ( DESCRIPTION ). By default, the systemctl list-units command displays only active units. If you want to list all loaded units regardless of their state, run this command with the --all or -a command line option: You can also list all available service units to see if they are enabled. To do so, type: For each service unit, this command displays its full name ( UNIT FILE ) followed by information whether the service unit is enabled or not ( STATE ). For information on how to determine the status of individual service units, see Section 10.2.2, "Displaying Service Status" . Example 10.1. Listing Services To list all currently loaded service units, run the following command: To list all installed service unit files to determine if they are enabled, type: 10.2.2. Displaying Service Status To display detailed information about a service unit that corresponds to a system service, type the following at a shell prompt: Replace name with the name of the service unit you want to inspect (for example, gdm ). This command displays the name of the selected service unit followed by its short description, one or more fields described in Table 10.5, "Available Service Unit Information" , and if it is executed by the root user, also the most recent log entries. Table 10.5. Available Service Unit Information Field Description Loaded Information whether the service unit has been loaded, the absolute path to the unit file, and a note whether the unit is enabled. Active Information whether the service unit is running followed by a time stamp. 
Main PID The PID of the corresponding system service followed by its name. Status Additional information about the corresponding system service. Process Additional information about related processes. CGroup Additional information about related Control Groups (cgroups). To only verify that a particular service unit is running, run the following command: Similarly, to determine whether a particular service unit is enabled, type: Note that both systemctl is-active and systemctl is-enabled return an exit status of 0 if the specified service unit is running or enabled. For information on how to list all currently loaded service units, see Section 10.2.1, "Listing Services" . Example 10.2. Displaying Service Status The service unit for the GNOME Display Manager is named gdm.service . To determine the current status of this service unit, type the following at a shell prompt: Example 10.3. Displaying Services Ordered to Start Before a Service To determine what services are ordered to start before the specified service, type the following at a shell prompt: Example 10.4. Displaying Services Ordered to Start After a Service To determine what services are ordered to start after the specified service, type the following at a shell prompt: 10.2.3. Starting a Service To start a service unit that corresponds to a system service, type the following at a shell prompt as root : Replace name with the name of the service unit you want to start (for example, gdm ). This command starts the selected service unit in the current session. For information on how to enable a service unit to be started at boot time, see Section 10.2.6, "Enabling a Service" . For information on how to determine the status of a certain service unit, see Section 10.2.2, "Displaying Service Status" . Example 10.5. Starting a Service The service unit for the Apache HTTP Server is named httpd.service . To activate this service unit and start the httpd daemon in the current session, run the following command as root : 10.2.4. Stopping a Service To stop a service unit that corresponds to a system service, type the following at a shell prompt as root : Replace name with the name of the service unit you want to stop (for example, bluetooth ). This command stops the selected service unit in the current session. For information on how to disable a service unit and prevent it from being started at boot time, see Section 10.2.7, "Disabling a Service" . For information on how to determine the status of a certain service unit, see Section 10.2.2, "Displaying Service Status" . Example 10.6. Stopping a Service The service unit for the bluetoothd daemon is named bluetooth.service . To deactivate this service unit and stop the bluetoothd daemon in the current session, run the following command as root : 10.2.5. Restarting a Service To restart a service unit that corresponds to a system service, type the following at a shell prompt as root : Replace name with the name of the service unit you want to restart (for example, httpd ). This command stops the selected service unit in the current session and immediately starts it again. Importantly, if the selected service unit is not running, this command starts it too. To tell systemd to restart a service unit only if the corresponding service is already running, run the following command as root : Certain system services also allow you to reload their configuration without interrupting their execution. 
To do so, type as root : Note that system services that do not support this feature ignore this command altogether. For convenience, the systemctl command also supports the reload-or-restart and reload-or-try-restart commands that restart such services instead. For information on how to determine the status of a certain service unit, see Section 10.2.2, "Displaying Service Status" . Example 10.7. Restarting a Service In order to prevent users from encountering unnecessary error messages or partially rendered web pages, the Apache HTTP Server allows you to edit and reload its configuration without the need to restart it and interrupt actively processed requests. To do so, type the following at a shell prompt as root : 10.2.6. Enabling a Service To configure a service unit that corresponds to a system service to be automatically started at boot time, type the following at a shell prompt as root : Replace name with the name of the service unit you want to enable (for example, httpd ). This command reads the [Install] section of the selected service unit and creates appropriate symbolic links to the /usr/lib/systemd/system/ name .service file in the /etc/systemd/system/ directory and its subdirectories. This command does not, however, rewrite links that already exist. If you want to ensure that the symbolic links are re-created, use the following command as root : This command disables the selected service unit and immediately enables it again. For information on how to determine whether a certain service unit is enabled to start at boot time, see Section 10.2.2, "Displaying Service Status" . For information on how to start a service in the current session, see Section 10.2.3, "Starting a Service" . Example 10.8. Enabling a Service To configure the Apache HTTP Server to start automatically at boot time, run the following command as root : 10.2.7. Disabling a Service To prevent a service unit that corresponds to a system service from being automatically started at boot time, type the following at a shell prompt as root : Replace name with the name of the service unit you want to disable (for example, bluetooth ). This command reads the [Install] section of the selected service unit and removes appropriate symbolic links to the /usr/lib/systemd/system/ name .service file from the /etc/systemd/system/ directory and its subdirectories. In addition, you can mask any service unit to prevent it from being started manually or by another service. To do so, run the following command as root : This command replaces the /etc/systemd/system/ name .service file with a symbolic link to /dev/null , rendering the actual unit file inaccessible to systemd. To revert this action and unmask a service unit, type as root : For information on how to determine whether a certain service unit is enabled to start at boot time, see Section 10.2.2, "Displaying Service Status" . For information on how to stop a service in the current session, see Section 10.2.4, "Stopping a Service" . Example 10.9. Disabling a Service Example 10.6, "Stopping a Service" illustrates how to stop the bluetooth.service unit in the current session. To prevent this service unit from starting at boot time, type the following at a shell prompt as root : 10.2.8. Starting a Conflicting Service In systemd , positive and negative dependencies between services exist. Starting particular service may require starting one or more other services (positive dependency) or stopping one or more services (negative dependency). 
When you attempt to start a new service, systemd resolves all dependencies automatically. Note that this is done without explicit notification to the user. If you are already running a service, and you attempt to start another service with a negative dependency, the first service is automatically stopped. For example, if you are running the postfix service, and you try to start the sendmail service, systemd first automatically stops postfix , because these two services are conflicting and cannot run on the same port. 10.3. Working with systemd Targets versions of Red Hat Enterprise Linux, which were distributed with SysV init or Upstart, implemented a predefined set of runlevels that represented specific modes of operation. These runlevels were numbered from 0 to 6 and were defined by a selection of system services to be run when a particular runlevel was enabled by the system administrator. In Red Hat Enterprise Linux 7, the concept of runlevels has been replaced with systemd targets . Systemd targets are represented by target units . Target units end with the .target file extension and their only purpose is to group together other systemd units through a chain of dependencies. For example, the graphical.target unit, which is used to start a graphical session, starts system services such as the GNOME Display Manager ( gdm.service ) or Accounts Service ( accounts-daemon.service ) and also activates the multi-user.target unit. Similarly, the multi-user.target unit starts other essential system services such as NetworkManager ( NetworkManager.service ) or D-Bus ( dbus.service ) and activates another target unit named basic.target . Red Hat Enterprise Linux 7 is distributed with a number of predefined targets that are more or less similar to the standard set of runlevels from the releases of this system. For compatibility reasons, it also provides aliases for these targets that directly map them to SysV runlevels. Table 10.6, "Comparison of SysV Runlevels with systemd Targets" provides a complete list of SysV runlevels and their corresponding systemd targets. Table 10.6. Comparison of SysV Runlevels with systemd Targets Runlevel Target Units Description 0 runlevel0.target , poweroff.target Shut down and power off the system. 1 runlevel1.target , rescue.target Set up a rescue shell. 2 runlevel2.target , multi-user.target Set up a non-graphical multi-user system. 3 runlevel3.target , multi-user.target Set up a non-graphical multi-user system. 4 runlevel4.target , multi-user.target Set up a non-graphical multi-user system. 5 runlevel5.target , graphical.target Set up a graphical multi-user system. 6 runlevel6.target , reboot.target Shut down and reboot the system. To view, change, or configure systemd targets, use the systemctl utility as described in Table 10.7, "Comparison of SysV init Commands with systemctl" and in the sections below. The runlevel and telinit commands are still available in the system and work as expected, but are only included for compatibility reasons and should be avoided. Table 10.7. Comparison of SysV init Commands with systemctl Old Command New Command Description runlevel systemctl list-units --type target Lists currently loaded target units. telinit runlevel systemctl isolate name .target Changes the current target. 10.3.1. Viewing the Default Target To determine which target unit is used by default, run the following command: This command resolves the symbolic link located at /etc/systemd/system/default.target and displays the result. 
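The command referred to here is systemctl get-default . A typical invocation and its output follow; the target shown depends on the configuration of the particular system:

~]$ systemctl get-default
graphical.target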
For information on how to change the default target, see Section 10.3.3, "Changing the Default Target" . For information on how to list all currently loaded target units, see Section 10.3.2, "Viewing the Current Target" . Example 10.10. Viewing the Default Target To display the default target unit, type: 10.3.2. Viewing the Current Target To list all currently loaded target units, type the following command at a shell prompt: For each target unit, this commands displays its full name ( UNIT ) followed by a note whether the unit has been loaded ( LOAD ), its high-level ( ACTIVE ) and low-level ( SUB ) unit activation state, and a short description ( DESCRIPTION ). By default, the systemctl list-units command displays only active units. If you want to list all loaded units regardless of their state, run this command with the --all or -a command line option: See Section 10.3.1, "Viewing the Default Target" for information on how to display the default target. For information on how to change the current target, see Section 10.3.4, "Changing the Current Target" . Example 10.11. Viewing the Current Target To list all currently loaded target units, run the following command: 10.3.3. Changing the Default Target To configure the system to use a different target unit by default, type the following at a shell prompt as root : Replace name with the name of the target unit you want to use by default (for example, multi-user ). This command replaces the /etc/systemd/system/default.target file with a symbolic link to /usr/lib/systemd/system/ name .target , where name is the name of the target unit you want to use. For information on how to change the current target, see Section 10.3.4, "Changing the Current Target" . For information on how to list all currently loaded target units, see Section 10.3.2, "Viewing the Current Target" . Example 10.12. Changing the Default Target To configure the system to use the multi-user.target unit by default, run the following command as root : 10.3.4. Changing the Current Target To change to a different target unit in the current session, type the following at a shell prompt as root : Replace name with the name of the target unit you want to use (for example, multi-user ). This command starts the target unit named name and all dependent units, and immediately stops all others. For information on how to change the default target, see Section 10.3.3, "Changing the Default Target" . For information on how to list all currently loaded target units, see Section 10.3.2, "Viewing the Current Target" . Example 10.13. Changing the Current Target To turn off the graphical user interface and change to the multi-user.target unit in the current session, run the following command as root : 10.3.5. Changing to Rescue Mode Rescue mode provides a convenient single-user environment and allows you to repair your system in situations when it is unable to complete a regular booting process. In rescue mode, the system attempts to mount all local file systems and start some important system services, but it does not activate network interfaces or allow more users to be logged into the system at the same time. In Red Hat Enterprise Linux 7, rescue mode is equivalent to single user mode and requires the root password. To change the current target and enter rescue mode in the current session, type the following at a shell prompt as root : This command is similar to systemctl isolate rescue.target , but it also sends an informative message to all users that are currently logged into the system. 
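The command in question is systemctl rescue , which is invoked as root :

~]# systemctl rescue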
To prevent systemd from sending this message, run this command with the --no-wall command line option: For information on how to enter emergency mode, see Section 10.3.6, "Changing to Emergency Mode" . Example 10.14. Changing to Rescue Mode To enter rescue mode in the current session, run the following command as root : 10.3.6. Changing to Emergency Mode Emergency mode provides the most minimal environment possible and allows you to repair your system even in situations when the system is unable to enter rescue mode. In emergency mode, the system mounts the root file system only for reading, does not attempt to mount any other local file systems, does not activate network interfaces, and only starts a few essential services. In Red Hat Enterprise Linux 7, emergency mode requires the root password. To change the current target and enter emergency mode, type the following at a shell prompt as root : This command is similar to systemctl isolate emergency.target , but it also sends an informative message to all users that are currently logged into the system. To prevent systemd from sending this message, run this command with the --no-wall command line option: For information on how to enter rescue mode, see Section 10.3.5, "Changing to Rescue Mode" . Example 10.15. Changing to Emergency Mode To enter emergency mode without sending a message to all users that are currently logged into the system, run the following command as root : 10.4. Shutting Down, Suspending, and Hibernating the System In Red Hat Enterprise Linux 7, the systemctl utility replaces a number of power management commands used in versions of the Red Hat Enterprise Linux system. The commands listed in Table 10.8, "Comparison of Power Management Commands with systemctl" are still available in the system for compatibility reasons, but it is advised that you use systemctl when possible. Table 10.8. Comparison of Power Management Commands with systemctl Old Command New Command Description halt systemctl halt Halts the system. poweroff systemctl poweroff Powers off the system. reboot systemctl reboot Restarts the system. pm-suspend systemctl suspend Suspends the system. pm-hibernate systemctl hibernate Hibernates the system. pm-suspend-hybrid systemctl hybrid-sleep Hibernates and suspends the system. 10.4.1. Shutting Down the System The systemctl utility provides commands for shutting down the system, however the traditional shutdown command is also supported. Although the shutdown command will call the systemctl utility to perform the shutdown, it has an advantage in that it also supports a time argument. This is particularly useful for scheduled maintenance and to allow more time for users to react to the warning that a system shutdown has been scheduled. The option to cancel the shutdown can also be an advantage. Using systemctl Commands To shut down the system and power off the machine, type the following at a shell prompt as root : To shut down and halt the system without powering off the machine, run the following command as root : By default, running either of these commands causes systemd to send an informative message to all users that are currently logged into the system. To prevent systemd from sending this message, run the selected command with the --no-wall command line option, for example: Using the shutdown Command To shut down the system and power off the machine at a certain time, use a command in the following format as root : Where hh:mm is the time in 24 hour clock format. 
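For example, to power off the machine at 13:30 (an illustrative time), the command takes the following form:

~]# shutdown --poweroff 13:30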
The /run/nologin file is created 5 minutes before system shutdown to prevent new logins. When a time argument is used, an optional message, the wall message , can be appended to the command. To shut down and halt the system after a delay, without powering off the machine, use a command in the following format as root : Where +m is the delay time in minutes. The now keyword is an alias for +0 . A pending shutdown can be canceled by the root user as follows: See the shutdown(8) manual page for further command options. 10.4.2. Restarting the System To restart the system, run the following command as root : By default, this command causes systemd to send an informative message to all users that are currently logged into the system. To prevent systemd from sending this message, run this command with the --no-wall command line option: 10.4.3. Suspending the System To suspend the system, type the following at a shell prompt as root : This command saves the system state in RAM and with the exception of the RAM module, powers off most of the devices in the machine. When you turn the machine back on, the system then restores its state from RAM without having to boot again. Because the system state is saved in RAM and not on the hard disk, restoring the system from suspend mode is significantly faster than restoring it from hibernation, but as a consequence, a suspended system state is also vulnerable to power outages. For information on how to hibernate the system, see Section 10.4.4, "Hibernating the System" . 10.4.4. Hibernating the System To hibernate the system, type the following at a shell prompt as root : This command saves the system state on the hard disk drive and powers off the machine. When you turn the machine back on, the system then restores its state from the saved data without having to boot again. Because the system state is saved on the hard disk and not in RAM, the machine does not have to maintain electrical power to the RAM module, but as a consequence, restoring the system from hibernation is significantly slower than restoring it from suspend mode. To hibernate and suspend the system, run the following command as root : For information on how to suspend the system, see Section 10.4.3, "Suspending the System" . 10.5. Controlling systemd on a Remote Machine In addition to controlling the systemd system and service manager locally, the systemctl utility also allows you to interact with systemd running on a remote machine over the SSH protocol. Provided that the sshd service on the remote machine is running, you can connect to this machine by running the systemctl command with the --host or -H command line option: Replace user_name with the name of the remote user, host_name with the machine's host name, and command with any of the systemctl commands described above. Note that the remote machine must be configured to allow the selected user remote access over the SSH protocol. For more information on how to configure an SSH server, see Chapter 12, OpenSSH . Example 10.16. Remote Management To log in to a remote machine named server-01.example.com as the root user and determine the current status of the httpd.service unit, type the following at a shell prompt: 10.6. Creating and Modifying systemd Unit Files A unit file contains configuration directives that describe the unit and define its behavior. Several systemctl commands work with unit files in the background. To make finer adjustments, system administrator must edit or create unit files manually. 
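As a concrete illustration of the remote management described in Section 10.5 and Example 10.16, the status of the httpd.service unit on server-01.example.com can be queried as the root user with a command of the following form:

~]$ systemctl -H root@server-01.example.com status httpd.service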
Table 10.2, "Systemd Unit Files Locations" lists three main directories where unit files are stored on the system; the /etc/systemd/system/ directory is reserved for unit files created or customized by the system administrator. Unit file names take the following form: Here, unit_name stands for the name of the unit and type_extension identifies the unit type; see Table 10.1, "Available systemd Unit Types" for a complete list of unit types. For example, there is usually an sshd.service as well as an sshd.socket unit present on your system. Unit files can be supplemented with a directory for additional configuration files. For example, to add custom configuration options to sshd.service , create the sshd.service.d/custom.conf file and insert additional directives there. For more information on configuration directories, see Section 10.6.4, "Modifying Existing Unit Files" . Also, the sshd.service.wants/ and sshd.service.requires/ directories can be created. These directories contain symbolic links to unit files that are dependencies of the sshd service. The symbolic links are automatically created either during installation according to [Install] unit file options (see Table 10.11, "Important [Install] Section Options" ) or at runtime based on [Unit] options (see Table 10.9, "Important [Unit] Section Options" ). It is also possible to create these directories and symbolic links manually. Many unit file options can be set using the so-called unit specifiers - wildcard strings that are dynamically replaced with unit parameters when the unit file is loaded. This enables the creation of generic unit files that serve as templates for generating instantiated units. See Section 10.6.5, "Working with Instantiated Units" for details. 10.6.1. Understanding the Unit File Structure Unit files typically consist of three sections: [Unit] - contains generic options that are not dependent on the type of the unit. These options provide the unit description, specify the unit's behavior, and set dependencies on other units. For a list of the most frequently used [Unit] options, see Table 10.9, "Important [Unit] Section Options" . [ unit type ] - if a unit has type-specific directives, these are grouped under a section named after the unit type. For example, service unit files contain the [Service] section; see Table 10.10, "Important [Service] Section Options" for the most frequently used [Service] options. [Install] - contains information about unit installation used by the systemctl enable and disable commands; see Table 10.11, "Important [Install] Section Options" for a list of [Install] options. Table 10.9. Important [Unit] Section Options Option [a] Description Description A meaningful description of the unit. This text is displayed for example in the output of the systemctl status command. Documentation Provides a list of URIs referencing documentation for the unit. After [b] Defines the order in which units are started. The unit starts only after the units specified in After are active. Unlike Requires , After does not explicitly activate the specified units. The Before option has the opposite functionality to After . Requires Configures dependencies on other units. The units listed in Requires are activated together with the unit. If any of the required units fail to start, the unit is not activated. Wants Configures weaker dependencies than Requires . If any of the listed units does not start successfully, it has no impact on the unit activation.
This is the recommended way to establish custom unit dependencies. Conflicts Configures negative dependencies, an opposite to Requires . [a] For a complete list of options configurable in the [Unit] section, see the systemd.unit(5) manual page. [b] In most cases, it is sufficient to set only the ordering dependencies with the After and Before unit file options. If you also set a requirement dependency with Wants (recommended) or Requires , the ordering dependency still needs to be specified. That is because ordering and requirement dependencies work independently from each other. Table 10.10. Important [Service] Section Options Option [a] Description Type Configures the unit process startup type that affects the functionality of ExecStart and related options. One of: * simple - The default value. The process started with ExecStart is the main process of the service. * forking - The process started with ExecStart spawns a child process that becomes the main process of the service. The parent process exits when the startup is complete. * oneshot - This type is similar to simple , but the process exits before starting consequent units. * dbus - This type is similar to simple , but consequent units are started only after the main process gains a D-Bus name. * notify - This type is similar to simple , but consequent units are started only after a notification message is sent via the sd_notify() function. * idle - similar to simple , the actual execution of the service binary is delayed until all jobs are finished, which avoids mixing the status output with shell output of services. ExecStart Specifies commands or scripts to be executed when the unit is started. ExecStartPre and ExecStartPost specify custom commands to be executed before and after ExecStart . Type=oneshot enables specifying multiple custom commands that are then executed sequentially. ExecStop Specifies commands or scripts to be executed when the unit is stopped. ExecReload Specifies commands or scripts to be executed when the unit is reloaded. Restart With this option enabled, the service is restarted after its process exits, with the exception of a clean stop by the systemctl command. RemainAfterExit If set to True, the service is considered active even when all its processes exited. The default value is False. This option is especially useful if Type=oneshot is configured. [a] For a complete list of options configurable in the [Service] section, see the systemd.service(5) manual page. Table 10.11. Important [Install] Section Options Option [a] Description Alias Provides a space-separated list of additional names for the unit. Most systemctl commands, excluding systemctl enable , can use aliases instead of the actual unit name. RequiredBy A list of units that depend on the unit. When this unit is enabled, the units listed in RequiredBy gain a Require dependency on the unit. WantedBy A list of units that weakly depend on the unit. When this unit is enabled, the units listed in WantedBy gain a Want dependency on the unit. Also Specifies a list of units to be installed or uninstalled along with the unit. DefaultInstance Limited to instantiated units, this option specifies the default instance for which the unit is enabled. See Section 10.6.5, "Working with Instantiated Units" for details. [a] For a complete list of options configurable in the [Install] section, see the systemd.unit(5) manual page. A whole range of additional options can be used to fine-tune the unit configuration. Example 10.17, "postfix.service Unit File" shows an example of a service unit installed on the system.
Moreover, unit file options can be defined in a way that enables dynamic creation of units as described in Section 10.6.5, "Working with Instantiated Units" . Example 10.17. postfix.service Unit File What follows is the content of the /usr/lib/systemd/system/postfix.service unit file as currently provided by the postfix package: The [Unit] section describes the service, specifies the ordering dependencies, as well as conflicting units. In [Service], a sequence of custom scripts is specified to be executed during unit activation, on stop, and on reload. EnvironmentFile points to the location where environment variables for the service are defined, PIDFile specifies a stable PID for the main process of the service. Finally, the [Install] section lists units that depend on the service. 10.6.2. Creating Custom Unit Files There are several use cases for creating unit files from scratch: you could run a custom daemon, create a second instance of some existing service (as in Example 10.19, "Creating a second instance of the sshd service" ), or import a SysV init script (more in Section 10.6.3, "Converting SysV Init Scripts to Unit Files" ). On the other hand, if you intend just to modify or extend the behavior of an existing unit, use the instructions from Section 10.6.4, "Modifying Existing Unit Files" . The following procedure describes the general process of creating a custom service: Prepare the executable file with the custom service. This can be a custom-created script, or an executable delivered by a software provider. If required, prepare a PID file to hold a constant PID for the main process of the custom service. It is also possible to include environment files to store shell variables for the service. Make sure the source script is executable (by executing the chmod a+x ) and is not interactive. Create a unit file in the /etc/systemd/system/ directory and make sure it has correct file permissions. Execute as root : Replace name with a name of the service to be created. Note that file does not need to be executable. Open the name .service file created in the step, and add the service configuration options. There is a variety of options that can be used depending on the type of service you wish to create, see Section 10.6.1, "Understanding the Unit File Structure" . The following is an example unit configuration for a network-related service: Where: service_description is an informative description that is displayed in journal log files and in the output of the systemctl status command. the After setting ensures that the service is started only after the network is running. Add a space-separated list of other relevant services or targets. path_to_executable stands for the path to the actual service executable. Type=forking is used for daemons that make the fork system call. The main process of the service is created with the PID specified in path_to_pidfile . Find other startup types in Table 10.10, "Important [Service] Section Options" . WantedBy states the target or targets that the service should be started under. Think of these targets as of a replacement of the older concept of runlevels, see Section 10.3, "Working with systemd Targets" for details. Notify systemd that a new name .service file exists by executing the following command as root : Warning Always run the systemctl daemon-reload command after creating new unit files or modifying existing unit files. 
Otherwise, the systemctl start or systemctl enable commands could fail due to a mismatch between states of systemd and actual service unit files on disk. The name .service unit can now be managed as any other system service with commands described in Section 10.2, "Managing System Services" . Example 10.18. Creating the emacs.service File When using the Emacs text editor, it is often faster and more convenient to have it running in the background instead of starting a new instance of the program whenever editing a file. The following steps show how to create a unit file for Emacs, so that it can be handled like a service. Create a unit file in the /etc/systemd/system/ directory and make sure it has the correct file permissions. Execute as root : Add the following content to the file: With the above configuration, the /usr/bin/emacs executable is started in daemon mode on service start. The SSH_AUTH_SOCK environment variable is set using the "%t" unit specifier that stands for the runtime directory. The service also restarts the emacs process if it exits unexpectedly. Execute the following commands to reload the configuration and start the custom service: As the editor is now registered as a systemd service, you can use all standard systemctl commands. For example, run systemctl status emacs to display the editor's status or systemctl enable emacs to make the editor start automatically on system boot. Example 10.19. Creating a second instance of the sshd service System Administrators often need to configure and run multiple instances of a service. This is done by creating copies of the original service configuration files and modifying certain parameters to avoid conflicts with the primary instance of the service. The following procedure shows how to create a second instance of the sshd service: Create a copy of the sshd_config file that will be used by the second daemon: Edit the sshd-second_config file created in the step to assign a different port number and PID file to the second daemon: See the sshd_config (5) manual page for more information on Port and PidFile options. Make sure the port you choose is not in use by any other service. The PID file does not have to exist before running the service, it is generated automatically on service start. Create a copy of the systemd unit file for the sshd service: Alter the sshd-second.service created in the step as follows: Modify the Description option: Add sshd.service to services specified in the After option, so that the second instance starts only after the first one has already started: The first instance of sshd includes key generation, therefore remove the ExecStartPre=/usr/sbin/sshd-keygen line. Add the -f /etc/ssh/sshd-second_config parameter to the sshd command, so that the alternative configuration file is used: After the above modifications, the sshd-second.service should look as follows: If using SELinux, add the port for the second instance of sshd to SSH ports, otherwise the second instance of sshd will be rejected to bind to the port: Enable sshd-second.service, so that it starts automatically upon boot: Verify if the sshd-second.service is running by using the systemctl status command. Also, verify if the port is enabled correctly by connecting to the service: If the firewall is in use, make sure that it is configured appropriately in order to allow connections to the second instance of sshd. 
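For example, if the firewalld service is in use, the illustrative port 22220 chosen above could be opened as follows:
~]# firewall-cmd --permanent --add-port=22220/tcp
~]# firewall-cmd --reload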
To learn how to properly choose a target for ordering and dependencies of your custom unit files, see the following articles How to write a service unit file which enforces that particular services have to be started How to decide what dependencies a systemd service unit definition should have Additional information with some real-world examples of cases triggered by the ordering and dependencies in a unit file is available in the following article: Is there any useful information about writing unit files? If you want to set limits for services started by systemd , see the Red Hat Knowledgebase article How to set limits for services in RHEL 7 and systemd . These limits need to be set in the service's unit file. Note that systemd ignores limits set in the /etc/security/limits.conf and /etc/security/limits.d/*.conf configuration files. The limits defined in these files are set by PAM when starting a login session, but daemons started by systemd do not use PAM login sessions. 10.6.3. Converting SysV Init Scripts to Unit Files Before taking time to convert a SysV init script to a unit file, make sure that the conversion was not already done elsewhere. All core services installed on Red Hat Enterprise Linux 7 come with default unit files, and the same applies for many third-party software packages. Converting an init script to a unit file requires analyzing the script and extracting the necessary information from it. Based on this data you can create a unit file as described in Section 10.6.2, "Creating Custom Unit Files" . As init scripts can vary greatly depending on the type of the service, you might need to employ more configuration options for translation than outlined in this chapter. Note that some levels of customization that were available with init scripts are no longer supported by systemd units, see Section 10.1.2, "Compatibility Changes" . The majority of information needed for conversion is provided in the script's header. The following example shows the opening section of the init script used to start the postfix service on Red Hat Enterprise Linux 6: In the above example, only lines starting with # chkconfig and # description are mandatory, so you might not find the rest in different init files. The text enclosed between the # BEGIN INIT INFO and # END INIT INFO lines is called Linux Standard Base (LSB) header . If specified, LSB headers contain directives defining the service description, dependencies, and default runlevels. What follows is an overview of analytic tasks aiming to collect the data needed for a new unit file. The postfix init script is used as an example, see the resulting postfix unit file in Example 10.17, "postfix.service Unit File" . Finding the Service Description Find descriptive information about the script on the line starting with #description . Use this description together with the service name in the Description option in the [Unit] section of the unit file. The LSB header might contain similar data on the #Short-Description and #Description lines. Finding Service Dependencies The LSB header might contain several directives that form dependencies between services. Most of them are translatable to systemd unit options, see Table 10.12, "Dependency Options from the LSB Header" Table 10.12. Dependency Options from the LSB Header LSB Option Description Unit File Equivalent Provides Specifies the boot facility name of the service, that can be referenced in other init scripts (with the "USD" prefix). 
This is no longer needed as unit files refer to other units by their file names. - Required-Start Contains boot facility names of required services. This is translated as an ordering dependency, boot facility names are replaced with unit file names of corresponding services or targets they belong to. For example, in case of postfix , the Required-Start dependency on USDnetwork was translated to the After dependency on network.target. After , Before Should-Start Constitutes weaker dependencies than Required-Start. Failed Should-Start dependencies do not affect the service startup. After , Before Required-Stop , Should-Stop Constitute negative dependencies. Conflicts Finding Default Targets of the Service The line starting with #chkconfig contains three numerical values. The most important is the first number that represents the default runlevels in which the service is started. Use Table 10.6, "Comparison of SysV Runlevels with systemd Targets" to map these runlevels to equivalent systemd targets. Then list these targets in the WantedBy option in the [Install] section of the unit file. For example, postfix was previously started in runlevels 2, 3, 4, and 5, which translates to multi-user.target and graphical.target on Red Hat Enterprise Linux 7. Note that the graphical.target depends on multiuser.target, therefore it is not necessary to specify both, as in Example 10.17, "postfix.service Unit File" . You might find information on default and forbidden runlevels also at #Default-Start and #Default-Stop lines in the LSB header. The other two values specified on the #chkconfig line represent startup and shutdown priorities of the init script. These values are interpreted by systemd if it loads the init script, but there is no unit file equivalent. Finding Files Used by the Service Init scripts require loading a function library from a dedicated directory and allow importing configuration, environment, and PID files. Environment variables are specified on the line starting with #config in the init script header, which translates to the EnvironmentFile unit file option. The PID file specified on the #pidfile init script line is imported to the unit file with the PIDFile option. The key information that is not included in the init script header is the path to the service executable, and potentially some other files required by the service. In versions of Red Hat Enterprise Linux, init scripts used a Bash case statement to define the behavior of the service on default actions, such as start , stop , or restart , as well as custom-defined actions. The following excerpt from the postfix init script shows the block of code to be executed at service start. The extensibility of the init script allowed specifying two custom functions, conf_check() and make_aliasesdb() , that are called from the start() function block. On closer look, several external files and directories are mentioned in the above code: the main service executable /usr/sbin/postfix , the /etc/postfix/ and /var/spool/postfix/ configuration directories, as well as the /usr/sbin/postconf/ directory. Systemd supports only the predefined actions, but enables executing custom executables with ExecStart , ExecStartPre , ExecStartPost , ExecStop , and ExecReload options. In case of postfix on Red Hat Enterprise Linux 7, the /usr/sbin/postfix together with supporting scripts are executed on service start. Consult the postfix unit file at Example 10.17, "postfix.service Unit File" . 
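As a condensed illustration of this mapping, the custom start() logic above corresponds to the following [Service] directives in the resulting unit file (taken from the postfix example; the leading "-" tells systemd to ignore a non-zero exit status of the helper):
[Service]
Type=forking
PIDFile=/var/spool/postfix/pid/master.pid
ExecStartPre=-/usr/libexec/postfix/aliasesdb
ExecStartPre=-/usr/libexec/postfix/chroot-update
ExecStart=/usr/sbin/postfix start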
Converting complex init scripts requires understanding the purpose of every statement in the script. Some of the statements are specific to the operating system version, therefore you do not need to translate them. On the other hand, some adjustments might be needed in the new environment, both in unit file as well as in the service executable and supporting files. 10.6.4. Modifying Existing Unit Files Services installed on the system come with default unit files that are stored in the /usr/lib/systemd/system/ directory. System Administrators should not modify these files directly, therefore any customization must be confined to configuration files in the /etc/systemd/system/ directory. Depending on the extent of the required changes, pick one of the following approaches: Create a directory for supplementary configuration files at /etc/systemd/system/ unit .d/ . This method is recommended for most use cases. It enables extending the default configuration with additional functionality, while still referring to the original unit file. Changes to the default unit introduced with a package upgrade are therefore applied automatically. See the section called "Extending the Default Unit Configuration" for more information. Create a copy of the original unit file /usr/lib/systemd/system/ in /etc/systemd/system/ and make changes there. The copy overrides the original file, therefore changes introduced with the package update are not applied. This method is useful for making significant unit changes that should persist regardless of package updates. See the section called "Overriding the Default Unit Configuration" for details. In order to return to the default configuration of the unit, just delete custom-created configuration files in /etc/systemd/system/ . To apply changes to unit files without rebooting the system, execute: The daemon-reload option reloads all unit files and recreates the entire dependency tree, which is needed to immediately apply any change to a unit file. As an alternative, you can achieve the same result with the following command: Also, if the modified unit file belongs to a running service, this service must be restarted to accept new settings: Important To modify properties, such as dependencies or timeouts, of a service that is handled by a SysV initscript, do not modify the initscript itself. Instead, create a systemd drop-in configuration file for the service as described in the section called "Extending the Default Unit Configuration" and the section called "Overriding the Default Unit Configuration" . Then manage this service in the same way as a normal systemd service. For example, to extend the configuration of the network service, do not modify the /etc/rc.d/init.d/network initscript file. Instead, create new directory /etc/systemd/system/network.service.d/ and a systemd drop-in file /etc/systemd/system/network.service.d/ my_config .conf . Then, put the modified values into the drop-in file. Note: systemd knows the network service as network.service , which is why the created directory must be called network.service.d Extending the Default Unit Configuration To extend the default unit file with additional configuration options, first create a configuration directory in /etc/systemd/system/ . If extending a service unit, execute the following command as root : Replace name with the name of the service you want to extend. The above syntax applies to all unit types. Create a configuration file in the directory made in the step. 
Note that the file name must end with the .conf suffix. Type: Replace config_name with the name of the configuration file. This file adheres to the normal unit file structure; therefore, all directives must be specified under the appropriate sections, see Section 10.6.1, "Understanding the Unit File Structure" . For example, to add a custom dependency, create a configuration file with the following content: Where new_dependency stands for the unit to be marked as a dependency. Another example is a configuration file that restarts the service after its main process has exited, with a delay of 30 seconds: It is recommended to create small configuration files focused only on one task. Such files can be easily moved or linked to configuration directories of other services. To apply changes made to the unit, execute as root : Example 10.20. Extending the httpd.service Configuration To modify the httpd.service unit so that a custom shell script is automatically executed when starting the Apache service, perform the following steps. First, create a directory and a custom configuration file: Provided that the script you want to start automatically with Apache is located at /usr/local/bin/custom.sh , insert the following text into the custom_script.conf file: To apply the unit changes, execute: Note The configuration files from configuration directories in /etc/systemd/system/ take precedence over unit files in /usr/lib/systemd/system/ . Therefore, if the configuration files contain an option that can be specified only once, such as Description or ExecStart , the default value of this option is overridden. Note that in the output of the systemd-delta command, described in the section called "Monitoring Overridden Units" , such units are always marked as [EXTENDED], even though, in effect, certain options are overridden. Overriding the Default Unit Configuration To make changes that will persist after updating the package that provides the unit file, first copy the file to the /etc/systemd/system/ directory. To do so, execute the following command as root : Where name stands for the name of the service unit you wish to modify. The above syntax applies to all unit types. Open the copied file with a text editor, and make the desired changes. To apply the unit changes, execute as root : Example 10.21. Changing the timeout limit You can specify a timeout value per service to prevent a malfunctioning service from freezing the system. Otherwise, the timeout is set by default to 90 seconds for normal services and to 300 seconds for SysV-compatible services. For example, to extend the timeout limit for the httpd service: Copy the httpd unit file to the /etc/systemd/system/ directory: Open the /etc/systemd/system/httpd.service file and specify the TimeoutStartSec value in the [Service] section: Reload the systemd daemon: Optional. Verify the new timeout value: Note To change the timeout limit globally, set the DefaultTimeoutStartSec parameter in the /etc/systemd/system.conf file. See Section 10.1, "Introduction to systemd" . Monitoring Overridden Units To display an overview of overridden or modified unit files, use the following command: For example, the output of the above command can look as follows: Table 10.13, "systemd-delta Difference Types" lists override types that can appear in the output of systemd-delta . Note that if a file is overridden, systemd-delta by default displays a summary of changes similar to the output of the diff command. Table 10.13.
systemd-delta Difference Types Type Description [MASKED] Masked unit files, see Section 10.2.7, "Disabling a Service" for a description of unit masking. [EQUIVALENT] Unmodified copies that override the original files but do not differ in content, typically symbolic links. [REDIRECTED] Files that are redirected to another file. [OVERRIDDEN] Overridden and changed files. [EXTENDED] Files that are extended with .conf files in the /etc/systemd/system/ unit .d/ directory. [UNCHANGED] Unmodified files are displayed only when the --type=unchanged option is used. It is good practice to run systemd-delta after a system update to check if there are any updates to the default units that are currently overridden by custom configuration. It is also possible to limit the output only to a certain difference type. For example, to view just the overridden units, execute: 10.6.5. Working with Instantiated Units It is possible to instantiate multiple units from a single template configuration file at runtime. The "@" character is used to mark the template and to associate units with it. Instantiated units can be started from another unit file (using Requires or Wants options), or with the systemctl start command. Instantiated service units are named the following way: Where template_name stands for the name of the template configuration file. Replace instance_name with the name for the unit instance. Several instances can point to the same template file with configuration options common for all instances of the unit. The template unit name has the form of: For example, the following Wants setting in a unit file: first makes systemd search for the given service units. If no such units are found, the part between "@" and the type suffix is ignored and systemd searches for the getty@.service file, reads the configuration from it, and starts the services. Wildcard characters, called unit specifiers , can be used in any unit configuration file. Unit specifiers substitute certain unit parameters and are interpreted at runtime. Table 10.14, "Important Unit Specifiers" lists unit specifiers that are particularly useful for template units. Table 10.14. Important Unit Specifiers Unit Specifier Meaning Description %n Full unit name Stands for the full unit name including the type suffix. %N has the same meaning but also replaces the forbidden characters with ASCII codes. %p Prefix name Stands for a unit name with the type suffix removed. For instantiated units %p stands for the part of the unit name before the "@" character. %i Instance name Is the part of the instantiated unit name between the "@" character and the type suffix. %I has the same meaning but also replaces the forbidden characters with ASCII codes. %H Host name Stands for the hostname of the running system at the point in time the unit configuration is loaded. %t Runtime directory Represents the runtime directory, which is either /run for the root user, or the value of the XDG_RUNTIME_DIR variable for unprivileged users. For a complete list of unit specifiers, see the systemd.unit(5) manual page. For example, the getty@.service template contains the following directives: When getty@ttyA.service and getty@ttyB.service are instantiated from the above template, Description = is resolved as Getty on ttyA and Getty on ttyB . 10.7. Additional Considerations While Managing Services During normal operation, systemd maintains an association between a unit abstraction and the underlying processes active on the system.
From: man systemd The cgroup hierarchy is critical to systemd's view of process and service health. When a process forks itself, it inherits the cgroup of the creating process. With this being the case, all processes associated with a given unit can be verified by reading the contents of the applicable cgroup.procs file, such as: The output matches the CGroup information returned during a systemctl status unit operation: To directly view these groupings of processes system-wide, the systemd-cgls utility can be used: In order for systemd to function properly, the service must be started or stopped through the systemd system to maintain the correct process to unit grouping. Any operation that takes external action results in the necessary cgroup structure not being created. This happens because systemd is not aware of the special nature of the processes being started. As an example of the above constraint, stopping the httpd service and then issuing /usr/sbin/httpd directly results in the following: Note that the httpd process is now visible under the user-0.slice and a session-168.scope. This service is treated as a user started process, as opposed to a system service, that systemd should monitor and manage directly. Some failures that can occur due to this misalignment include, but are not limited to: Services are not properly shutdown during system shutdown or restart events. Unexpected signals are delivered during user logout such as SIGHUP and SIGTERM. Processes that fail are not automatically restarted despite having a Restart= directive Note Non-graceful application shutdown events can result in a large number of subsequent application failures, such as client-side failures, data loss, and on-disk corruption. 10.8. Additional Resources For more information on systemd and its usage on Red Hat Enterprise Linux 7, see the resources listed below. Installed Documentation systemctl (1) - The manual page for the systemctl command line utility provides a complete list of supported options and commands. systemd (1) - The manual page for the systemd system and service manager provides more information about its concepts and documents available command line options and environment variables, supported configuration files and directories, recognized signals, and available kernel options. systemd-delta (1) - The manual page for the systemd-delta utility that allows to find extended and overridden configuration files. systemd.unit (5) - The manual page named systemd.unit provides detailed information about systemd unit files and documents all available configuration options. systemd.service (5) - The manual page named systemd.service documents the format of service unit files. systemd.target (5) - The manual page named systemd.target documents the format of target unit files. systemd.kill (5) - The manual page named systemd.kill documents the configuration of the process killing procedure. Online Documentation Red Hat Enterprise Linux 7 Networking Guide - The Networking Guide for Red Hat Enterprise Linux 7 documents relevant information regarding the configuration and administration of network interfaces, networks, and network services in this system. It provides an introduction to the hostnamectl utility, explains how to use it to view and set host names on the command line, both locally and remotely, and provides important information about the selection of host names and domain names. 
Red Hat Enterprise Linux 7 Desktop Migration and Administration Guide - The Desktop Migration and Administration Guide for Red Hat Enterprise Linux 7 documents the migration planning, deployment, configuration, and administration of the GNOME 3 desktop on this system. It introduces the logind service, enumerates its most significant features, and explains how to use the loginctl utility to list active sessions and enable multi-seat support. Red Hat Enterprise Linux 7 SELinux User's and Administrator's Guide - The SELinux User's and Administrator's Guide for Red Hat Enterprise Linux 7 describes the basic principles of SELinux and documents in detail how to configure and use SELinux with various services such as the Apache HTTP Server, Postfix, PostgreSQL, or OpenShift. It explains how to configure SELinux access permissions for system services managed by systemd. Red Hat Enterprise Linux 7 Installation Guide - The Installation Guide for Red Hat Enterprise Linux 7 documents how to install the system on AMD64 and Intel 64 systems, 64-bit IBM Power Systems servers, and IBM Z. It also covers advanced installation methods such as Kickstart installations, PXE installations, and installations over the VNC protocol. In addition, it describes common post-installation tasks and explains how to troubleshoot installation problems, including detailed instructions on how to boot into rescue mode or recover the root password. Red Hat Enterprise Linux 7 Security Guide - The Security Guide for Red Hat Enterprise Linux 7 assists users and administrators in learning the processes and practices of securing their workstations and servers against local and remote intrusion, exploitation, and malicious activity. It also explains how to secure critical system services. systemd Home Page - The project home page provides more information about systemd. See Also Chapter 2, System Locale and Keyboard Configuration documents how to manage the system locale and keyboard layouts. It explains how to use the localectl utility to view the current locale, list available locales, and set the system locale on the command line, as well as to view the current keyboard layout, list available keymaps, and enable a particular keyboard layout on the command line. Chapter 3, Configuring the Date and Time documents how to manage the system date and time. It explains the difference between a real-time clock and system clock and describes how to use the timedatectl utility to display the current settings of the system clock, configure the date and time, change the time zone, and synchronize the system clock with a remote server. Chapter 6, Gaining Privileges documents how to gain administrative privileges by using the su and sudo commands. Chapter 12, OpenSSH describes how to configure an SSH server and how to use the ssh , scp , and sftp client utilities to access it. Chapter 23, Viewing and Managing Log Files provides an introduction to journald . It describes the journal, introduces the journald service, and documents how to use the journalctl utility to view log entries, enter live view mode, and filter log entries. In addition, this chapter describes how to give non-root users access to system logs and enable persistent storage for log files.
[ "DefaultTimeoutStartSec= required value", "~]# systemctl stop nfs-server.service", "~]# systemctl stop nfs-server", "~]# systemctl show nfs-server.service -p Names", "~]# chroot /srv/website1 ~]# systemctl enable httpd.service Created symlink /etc/systemd/system/multi-user.target.wants/httpd.service, pointing to /usr/lib/systemd/system/httpd.service.", "systemctl list-units --type service", "systemctl list-units --type service --all", "systemctl list-unit-files --type service", "~]USD systemctl list-units --type service UNIT LOAD ACTIVE SUB DESCRIPTION abrt-ccpp.service loaded active exited Install ABRT coredump hook abrt-oops.service loaded active running ABRT kernel log watcher abrt-vmcore.service loaded active exited Harvest vmcores for ABRT abrt-xorg.service loaded active running ABRT Xorg log watcher abrtd.service loaded active running ABRT Automated Bug Reporting Tool systemd-vconsole-setup.service loaded active exited Setup Virtual Console tog-pegasus.service loaded active running OpenPegasus CIM Server LOAD = Reflects whether the unit definition was properly loaded. ACTIVE = The high-level unit activation state, i.e. generalization of SUB. SUB = The low-level unit activation state, values depend on unit type. 46 loaded units listed. Pass --all to see loaded but inactive units, too. To show all installed unit files use 'systemctl list-unit-files'", "~]USD systemctl list-unit-files --type service UNIT FILE STATE abrt-ccpp.service enabled abrt-oops.service enabled abrt-vmcore.service enabled abrt-xorg.service enabled abrtd.service enabled wpa_supplicant.service disabled ypbind.service disabled 208 unit files listed.", "systemctl status name .service", "systemctl is-active name .service", "systemctl is-enabled name .service", "~]# systemctl status gdm.service gdm.service - GNOME Display Manager Loaded: loaded (/usr/lib/systemd/system/gdm.service; enabled) Active: active (running) since Thu 2013-10-17 17:31:23 CEST; 5min ago Main PID: 1029 (gdm) CGroup: /system.slice/gdm.service ├─1029 /usr/sbin/gdm ├─1037 /usr/libexec/gdm-simple-slave --display-id /org/gno └─1047 /usr/bin/Xorg :0 -background none -verbose -auth /r Oct 17 17:31:23 localhost systemd[1]: Started GNOME Display Manager.", "~]# systemctl list-dependencies --after gdm.service gdm.service ├─dbus.socket ├─[email protected] ├─livesys.service ├─plymouth-quit.service ├─system.slice ├─systemd-journald.socket ├─systemd-user-sessions.service └─basic.target [output truncated]", "~]# systemctl list-dependencies --before gdm.service gdm.service ├─dracut-shutdown.service ├─graphical.target │ ├─systemd-readahead-done.service │ ├─systemd-readahead-done.timer │ └─systemd-update-utmp-runlevel.service └─shutdown.target ├─systemd-reboot.service └─final.target └─systemd-reboot.service", "systemctl start name .service", "~]# systemctl start httpd.service", "systemctl stop name .service", "~]# systemctl stop bluetooth.service", "systemctl restart name .service", "systemctl try-restart name .service", "systemctl reload name .service", "~]# systemctl reload httpd.service", "systemctl enable name .service", "systemctl reenable name .service", "~]# systemctl enable httpd.service Created symlink from /etc/systemd/system/multi-user.target.wants/httpd.service to /usr/lib/systemd/system/httpd.service.", "systemctl disable name .service", "systemctl mask name .service", "systemctl unmask name .service", "~]# systemctl disable bluetooth.service Removed symlink /etc/systemd/system/bluetooth.target.wants/bluetooth.service. 
Removed symlink /etc/systemd/system/dbus-org.bluez.service.", "systemctl get-default", "~]USD systemctl get-default graphical.target", "systemctl list-units --type target", "systemctl list-units --type target --all", "~]USD systemctl list-units --type target UNIT LOAD ACTIVE SUB DESCRIPTION basic.target loaded active active Basic System cryptsetup.target loaded active active Encrypted Volumes getty.target loaded active active Login Prompts graphical.target loaded active active Graphical Interface local-fs-pre.target loaded active active Local File Systems (Pre) local-fs.target loaded active active Local File Systems multi-user.target loaded active active Multi-User System network.target loaded active active Network paths.target loaded active active Paths remote-fs.target loaded active active Remote File Systems sockets.target loaded active active Sockets sound.target loaded active active Sound Card spice-vdagentd.target loaded active active Agent daemon for Spice guests swap.target loaded active active Swap sysinit.target loaded active active System Initialization time-sync.target loaded active active System Time Synchronized timers.target loaded active active Timers LOAD = Reflects whether the unit definition was properly loaded. ACTIVE = The high-level unit activation state, i.e. generalization of SUB. SUB = The low-level unit activation state, values depend on unit type. 17 loaded units listed. Pass --all to see loaded but inactive units, too. To show all installed unit files use 'systemctl list-unit-files'.", "systemctl set-default name .target", "~]# systemctl set-default multi-user.target rm '/etc/systemd/system/default.target' ln -s '/usr/lib/systemd/system/multi-user.target' '/etc/systemd/system/default.target'", "systemctl isolate name .target", "~]# systemctl isolate multi-user.target", "systemctl rescue", "systemctl --no-wall rescue", "~]# systemctl rescue Broadcast message from root@localhost on pts/0 (Fri 2013-10-25 18:23:15 CEST): The system is going down to rescue mode NOW!", "systemctl emergency", "systemctl --no-wall emergency", "~]# systemctl --no-wall emergency", "systemctl poweroff", "systemctl halt", "systemctl --no-wall poweroff", "shutdown --poweroff hh:mm", "shutdown --halt +m", "shutdown -c", "systemctl reboot", "systemctl --no-wall reboot", "systemctl suspend", "systemctl hibernate", "systemctl hybrid-sleep", "systemctl --host user_name@host_name command", "~]USD systemctl -H [email protected] status httpd.service >>>>>>> systemd unit files -- update [email protected]'s password: httpd.service - The Apache HTTP Server Loaded: loaded (/usr/lib/systemd/system/httpd.service; enabled) Active: active (running) since Fri 2013-11-01 13:58:56 CET; 2h 48min ago Main PID: 649 Status: \"Total requests: 0; Current requests/sec: 0; Current traffic: 0 B/sec\" CGroup: /system.slice/httpd.service", "unit_name . 
type_extension", "[Unit] Description=Postfix Mail Transport Agent After=syslog.target network.target Conflicts=sendmail.service exim.service [Service] Type=forking PIDFile=/var/spool/postfix/pid/master.pid EnvironmentFile=-/etc/sysconfig/network ExecStartPre=-/usr/libexec/postfix/aliasesdb ExecStartPre=-/usr/libexec/postfix/chroot-update ExecStart=/usr/sbin/postfix start ExecReload=/usr/sbin/postfix reload ExecStop=/usr/sbin/postfix stop [Install] WantedBy=multi-user.target", "touch /etc/systemd/system/ name .service chmod 664 /etc/systemd/system/ name .service", "[Unit] Description= service_description After=network.target [Service] ExecStart= path_to_executable Type=forking PIDFile= path_to_pidfile [Install] WantedBy=default.target", "systemctl daemon-reload systemctl start name .service", "~]# touch /etc/systemd/system/emacs.service ~]# chmod 664 /etc/systemd/system/emacs.service", "[Unit] Description=Emacs: the extensible, self-documenting text editor [Service] Type=forking ExecStart=/usr/bin/emacs --daemon ExecStop=/usr/bin/emacsclient --eval \"(kill-emacs)\" Environment=SSH_AUTH_SOCK=%t/keyring/ssh Restart=always [Install] WantedBy=default.target", "~]# systemctl daemon-reload ~]# systemctl start emacs.service", "~]# cp /etc/ssh/sshd{,-second}_config", "Port 22220 PidFile /var/run/sshd-second.pid", "~]# cp /usr/lib/systemd/system/sshd.service /etc/systemd/system/sshd-second.service", "Description=OpenSSH server second instance daemon", "After=syslog.target network.target auditd.service sshd.service", "ExecStart=/usr/sbin/sshd -D -f /etc/ssh/sshd-second_config USDOPTIONS", "[Unit] Description=OpenSSH server second instance daemon After=syslog.target network.target auditd.service sshd.service [Service] EnvironmentFile=/etc/sysconfig/sshd ExecStart=/usr/sbin/sshd -D -f /etc/ssh/sshd-second_config USDOPTIONS ExecReload=/bin/kill -HUP USDMAINPID KillMode=process Restart=on-failure RestartSec=42s [Install] WantedBy=multi-user.target", "~]# semanage port -a -t ssh_port_t -p tcp 22220", "~]# systemctl enable sshd-second.service", "~]USD ssh -p 22220 user@server", "#!/bin/bash # postfix Postfix Mail Transfer Agent # chkconfig: 2345 80 30 description: Postfix is a Mail Transport Agent, which is the program that moves mail from one machine to another. processname: master pidfile: /var/spool/postfix/pid/master.pid config: /etc/postfix/main.cf config: /etc/postfix/master.cf ### BEGIN INIT INFO Provides: postfix MTA Required-Start: USDlocal_fs USDnetwork USDremote_fs Required-Stop: USDlocal_fs USDnetwork USDremote_fs Default-Start: 2 3 4 5 Default-Stop: 0 1 6 Short-Description: start and stop postfix Description: Postfix is a Mail Transport Agent, which is the program that moves mail from one machine to another. ### END INIT INFO", "conf_check() { [ -x /usr/sbin/postfix ] || exit 5 [ -d /etc/postfix ] || exit 6 [ -d /var/spool/postfix ] || exit 5 } make_aliasesdb() { if [ \"USD(/usr/sbin/postconf -h alias_database)\" == \"hash:/etc/aliases\" ] then # /etc/aliases.db might be used by other MTA, make sure nothing # has touched it since our last newaliases call [ /etc/aliases -nt /etc/aliases.db ] || [ \"USDALIASESDB_STAMP\" -nt /etc/aliases.db ] || [ \"USDALIASESDB_STAMP\" -ot /etc/aliases.db ] || return /usr/bin/newaliases touch -r /etc/aliases.db \"USDALIASESDB_STAMP\" else /usr/bin/newaliases fi } start() { [ \"USDEUID\" != \"0\" ] && exit 4 # Check that networking is up. [ USD{NETWORKING} = \"no\" ] && exit 1 conf_check # Start daemons. 
echo -n USD\"Starting postfix: \" make_aliasesdb >/dev/null 2>&1 [ -x USDCHROOT_UPDATE ] && USDCHROOT_UPDATE /usr/sbin/postfix start 2>/dev/null 1>&2 && success || failure USD\"USDprog start\" RETVAL=USD? [ USDRETVAL -eq 0 ] && touch USDlockfile echo return USDRETVAL }", "systemctl daemon-reload", "init q", "systemctl restart name .service", "mkdir /etc/systemd/system/ name .service.d/", "touch /etc/systemd/system/name.service.d/config_name.conf", "[Unit] Requires= new_dependency After= new_dependency", "[Service] Restart=always RestartSec=30", "systemctl daemon-reload systemctl restart name .service", "~]# mkdir /etc/systemd/system/httpd.service.d/ ~]# touch /etc/systemd/system/httpd.service.d/custom_script.conf", "[Service] ExecStartPost=/usr/local/bin/custom.sh", "~]# systemctl daemon-reload ~]# systemctl restart httpd.service", "cp /usr/lib/systemd/system/ name .service /etc/systemd/system/ name .service", "systemctl daemon-reload systemctl restart name .service", "cp /usr/lib/systemd/system/httpd.service /etc/systemd/system/httpd.service", "[Service] PrivateTmp=true TimeoutStartSec=10 [Install] WantedBy=multi-user.target", "systemctl daemon-reload", "systemctl show httpd -p TimeoutStartUSec", "systemd-delta", "[EQUIVALENT] /etc/systemd/system/default.target /usr/lib/systemd/system/default.target [OVERRIDDEN] /etc/systemd/system/autofs.service /usr/lib/systemd/system/autofs.service --- /usr/lib/systemd/system/autofs.service 2014-10-16 21:30:39.000000000 -0400 +++ /etc/systemd/system/autofs.service 2014-11-21 10:00:58.513568275 -0500 @@ -8,7 +8,8 @@ EnvironmentFile=-/etc/sysconfig/autofs ExecStart=/usr/sbin/automount USDOPTIONS --pid-file /run/autofs.pid ExecReload=/usr/bin/kill -HUP USDMAINPID -TimeoutSec=180 +TimeoutSec=240 +Restart=Always [Install] WantedBy=multi-user.target [MASKED] /etc/systemd/system/cups.service /usr/lib/systemd/system/cups.service [EXTENDED] /usr/lib/systemd/system/sssd.service /etc/systemd/system/sssd.service.d/journal.conf 4 overridden configuration files found.", "systemd-delta --type=overridden", "template_name @ instance_name .service", "unit_name @.service", "[email protected],[email protected]", "[Unit] Description=Getty on %I [Service] ExecStart=-/sbin/agetty --noclear %I USDTERM", "Processes systemd spawns are placed in individual Linux control groups named after the unit which they belong to in the private systemd hierarchy. (see cgroups.txt[1] for more information about control groups, or short \"cgroups\"). systemd uses this to effectively keep track of processes. 
Control group information is maintained in the kernel, and is accessible via the file system hierarchy (beneath /sys/fs/cgroup/systemd/), or in tools such as ps(1) (ps xawf -eo pid,user,cgroup,args is particularly useful to list all processes and the systemd units they belong to).", "~]# cat /sys/fs/cgroup/systemd/system.slice/httpd.service/cgroup.procs 11854 11855 11856 11857 11858 11859", "~]# systemctl status httpd ● httpd.service - The Apache HTTP Server Loaded: loaded (/usr/lib/systemd/system/httpd.service; disabled; vendor preset: disabled) Active: active (running) since Wed 2019-05-29 12:08:16 EDT; 45s ago Docs: man:httpd(8) man:apachectl(8) Main PID: 11854 (httpd) Status: \"Total requests: 0; Current requests/sec: 0; Current traffic: 0 B/sec\" CGroup: /system.slice/httpd.service ├─11854 /usr/sbin/httpd -DFOREGROUND ├─11855 /usr/sbin/httpd -DFOREGROUND ├─11856 /usr/sbin/httpd -DFOREGROUND ├─11857 /usr/sbin/httpd -DFOREGROUND ├─11858 /usr/sbin/httpd -DFOREGROUND └─11859 /usr/sbin/httpd -DFOREGROUND May 29 12:08:16 localhost systemd[1]: Starting The Apache HTTP Server May 29 12:08:16 localhost systemd[1]: Started The Apache HTTP Server.", "~]# systemd-cgls | head -17 ├─1 /usr/lib/systemd/systemd --switched-root --system --deserialize 22 ├─user.slice │ └─user-0.slice │ └─session-168.scope │ ├─ 3049 login -- root │ ├─11884 -bash │ ├─11943 systemd-cgls │ └─11944 head -17 └─system.slice ├─httpd.service │ ├─11854 /usr/sbin/httpd -DFOREGROUND │ ├─11855 /usr/sbin/httpd -DFOREGROUND │ ├─11856 /usr/sbin/httpd -DFOREGROUND │ ├─11857 /usr/sbin/httpd -DFOREGROUND │ ├─11858 /usr/sbin/httpd -DFOREGROUND │ └─11859 /usr/sbin/httpd -DFOREGROUND ├─rhnsd.service", "~]# systemctl stop httpd ~]# /usr/sbin/httpd systemd-cgls | head -17 ├─1 /usr/lib/systemd/systemd --switched-root --system --deserialize 22 ├─user.slice │ └─user-0.slice │ └─session-168.scope │ ├─ 3049 login -- root │ ├─11884 -bash │ ├─11957 /usr/sbin/httpd │ ├─11958 /usr/sbin/httpd │ ├─11959 /usr/sbin/httpd │ ├─11960 /usr/sbin/httpd │ ├─11961 /usr/sbin/httpd │ ├─11962 /usr/sbin/httpd │ ├─11963 systemd-cgls │ └─11964 head -17 └─system.slice ├─rhnsd.service │ └─3261 rhnsd" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/system_administrators_guide/chap-Managing_Services_with_systemd
8.223. udev
8.223. udev 8.223.1. RHBA-2013:1675 - udev bug fix and enhancement update Updated udev packages that fix several bugs and add one enhancement are now available for Red Hat Enterprise Linux 6. The udev packages implement a dynamic device-directory, providing only the devices present on the system. This dynamic directory runs in user space, dynamically creates and removes devices, and provides consistent naming and a user-space API. The udev packages replace the devfs package and provide better hot plug functionality. Bug Fixes BZ# 833172 , BZ# 885978 , BZ# 918511 Previously, for machines with relatively big RAM sizes and lots of disks, a number of udevd workers were running in parallel, maximizing CPU and I/O. This could cause udev events to time out due to hardware bottlenecks. With this update, the number of udevd workers is limited by the CPU count and is significantly lower on machines with a big RAM size. Now, with fewer udev workers running concurrently, bottlenecks occur less easily and timeouts are reduced or eliminated. BZ# 888647 Previously, the udev utility did not provide a symbolic link to SCM (Storage Class Memory) devices in the /dev/disk/by-path/ directory, which prevented SCM devices from being referenced by their paths. With this update, the path_id built-in command supports SCM devices and provides a symbolic link. Now, SCM devices can be referenced by their paths. BZ# 909792 Prior to this update, the libudev.h header file did not have any extern "C" declaration, so it could not be used as-is in C++ programs or applications. An extern "C" declaration has been added to the header file, thus fixing the bug. BZ# 918511 Previously, the start_udev command called the "udevadm settle [options]" command and timed out after the default of 180 seconds. Consequently, some devices were not completely assembled and the boot process continued, causing various failures. With this update, start_udev waits until udev has settled. As a result, all devices are assembled, and the boot process now continues without errors. BZ# 920961 If a SCSI device was in use at the time the udev scsi_id helper utility was invoked, scsi_id did not return any properties of the device. Consequently, the properties of the SCSI device could not be processed in udev rules. With this update, scsi_id retries opening the device for a certain time span before it gives up. As a result, the properties of a SCSI device can be processed in udev rules, even though the device is in use for a short time. BZ# 982902 For USB devices with InterfaceClass=0x08 and InterfaceSubClass=0x05, udev set the ID type as "floppy", which was not necessarily true. As a consequence, some tools could interpret the USB device as a floppy disk. Now, the ID type is set as "generic" for such USB devices, and tools interpret the USB devices correctly. BZ# 998237 Previously, the libudev library referenced memory that had been reallocated with its old address in the udev_enumerate_get_list_entry() function. Consequently, calling this function could lead to a segmentation fault. With this update, libudev references the reallocated memory with offsets in udev_enumerate_get_list_entry(), thus fixing the bug. Enhancement BZ# 947067 Previously, the amount of debug output could not be controlled and often exceeded the available memory if stored in the temporary file under the /dev/ directory.
With this update, the start_udev command with the udevlog option now calls the udevd daemon with the "-s" option, which redirects the output of udevd to the /dev/.udev/udev.log file but does not set udevd to debug mode. In addition, udevd now understands the log priorities set in the rules file (OPTIONS+="log_priority=<level>"), so the user can set the numerical syslog priorities or their textual representations. There is also a new example rules file for logging: /lib/udev/rules.d/01-log-block.rules. To enable "info" logging for block devices, add "rd.log.block=info" to the kernel command line. Users of udev are advised to upgrade to these updated packages, which fix these bugs and add this enhancement.
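For instance, a custom rule that raises the logging priority for block device events might look like the following; this is an illustrative sketch based on the option described above, not the content of the shipped 01-log-block.rules file:
SUBSYSTEM=="block", OPTIONS+="log_priority=info"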
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/6.5_technical_notes/udev
Preface
Preface The Package manifest document provides a package listing for Red Hat Enterprise Linux 7.9. Capabilities and limits of RHEL 7 as compared to other versions of the system are available in the Knowledgebase article Red Hat Enterprise Linux technology capabilities and limits . Information regarding the RHEL life cycle is provided in the Red Hat Enterprise Linux Life Cycle document. Detailed changes in each minor release of RHEL are documented in the Release notes .
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/package_manifest/preface
Chapter 1. Overview of multiple storage cluster deployments
Chapter 1. Overview of multiple storage cluster deployments Red Hat OpenShift Data Foundation provides the ability to deploy two storage clusters, where one is in internal mode and the other is in external mode. This can be achieved only when the first cluster is installed in internal mode in the openshift-storage namespace and the second cluster is installed in external mode in the openshift-storage-extended namespace. Installing the clusters in the reverse order is not currently supported. Supported platforms Bare metal VMware vSphere OpenStack OpenShift Virtualization IBM Cloud IBM Power
null
https://docs.redhat.com/en/documentation/red_hat_openshift_data_foundation/4.16/html/deploying_multiple_openshift_data_foundation_storage_clusters/overview-multistoragecluster_rhodf
2.9. numastat
2.9. numastat The numastat tool displays memory statistics for processes and the operating system on a per-NUMA-node basis. By default, numastat displays per-node NUMA hit and miss system statistics from the kernel memory allocator. Optimal performance is indicated by high numa_hit values and low numa_miss values. Numastat also provides a number of command line options, which can show how system and process memory is distributed across NUMA nodes in the system. It can be useful to cross-reference per-node numastat output with per-CPU top output to verify that process threads are running on the same node to which memory is allocated. Numastat is provided by the numactl package. For details about how to use numastat, see Section A.11, "numastat" . For further information about numastat, see the man page:
[ "man numastat" ]
https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/performance_tuning_guide/sect-red_hat_enterprise_linux-performance_tuning_guide-performance_monitoring_tools-numastat
7.277. xinetd
7.277. xinetd 7.277.1. RHSA-2013:0499 - Low: xinetd security and bug fix update An updated xinetd package that fixes one security issue and two bugs is now available for Red Hat Enterprise Linux 6. The Red Hat Security Response Team has rated this update as having low security impact. A Common Vulnerability Scoring System (CVSS) base score, which gives a detailed severity rating, is available for each vulnerability from the CVE link(s) associated with each description below. The xinetd package provides a secure replacement for inetd, the Internet services daemon. xinetd provides access control for all services based on the address of the remote host and/or on time of access, and can prevent denial-of-access attacks. Security Fix CVE-2012-0862 When xinetd services are configured with the "TCPMUX" or "TCPMUXPLUS" type, and the tcpmux-server service is enabled, those services are accessible via port 1. It was found that enabling the tcpmux-server service (it is disabled by default) allowed every xinetd service, including those that are not configured with the "TCPMUX" or "TCPMUXPLUS" type, to be accessible via port 1. This could allow a remote attacker to bypass intended firewall restrictions. Red Hat would like to thank Thomas Swan of FedEx for reporting this issue. Bug Fixes BZ# 790036 Prior to this update, a file descriptor array in the service.c source file was not handled as expected. As a consequence, some of the descriptors remained open when xinetd was under heavy load. Additionally, the system log was filled with a large number of messages that took up a lot of disk space over time. This update modifies the xinetd code to handle the file descriptors correctly and messages no longer fill the system log. BZ#809271 Prior to this update, services were disabled permanently when their CPS limit was reached. As a consequence, a failed bind operation could occur when xinetd attempted to restart the service. This update adds additional logic that attempts to restart the service. Now, the service is only disabled if xinetd cannot restart the service after 30 attempts. All users of xinetd are advised to upgrade to this updated package, which contains backported patches to correct these issues.
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/6.4_technical_notes/xinetd
7.230. wireshark
7.230. wireshark 7.230.1. RHSA-2015:1460 - Moderate: wireshark security, bug fix, and enhancement update Updated wireshark packages that fix multiple security issues, several bugs, and add various enhancements are now available for Red Hat Enterprise Linux 6. Red Hat Product Security has rated this update as having Moderate security impact. Common Vulnerability Scoring System (CVSS) base scores, which give detailed severity ratings, are available for each vulnerability from the CVE links in the References section. Wireshark, previously known as Ethereal, is a network protocol analyzer, which is used to capture and browse the traffic running on a computer network. Security Fix CVE-2014-8714 , CVE-2014-8712 , CVE-2014-8713 , CVE-2014-8711 , CVE-2014-8710 , CVE-2015-0562 , CVE-2015-0564 , CVE-2015-2189 , CVE-2015-2191 Several denial of service flaws were found in Wireshark. Wireshark could crash or stop responding if it read a malformed packet off a network, or opened a malicious dump file. Bug Fixes BZ# 1095065 Previously, the Wireshark tool did not support the Advanced Encryption Standard Galois/Counter Mode (AES-GCM) cryptographic algorithm. As a consequence, AES-GCM was not decrypted. Support for AES-GCM has been added to Wireshark, and AES-GCM is now correctly decrypted. BZ# 1121275 Previously, when installing the system using the kickstart method, a dependency on the shadow-utils packages was missing from the wireshark packages, which could cause the installation to fail with a "bad scriptlet" error message. With this update, shadow-utils are listed as required in the wireshark packages spec file, and kickstart installation no longer fails. BZ# 1131203 Prior to this update, the Wireshark tool could not decode types of elliptic curves in Datagram Transport Layer Security (DTLS) Client Hello. Consequently, Wireshark incorrectly displayed elliptic curve types as data. A patch has been applied to address this bug, and Wireshark now decodes elliptic curve types properly. BZ# 1160388 Previously, a dependency on the gtk2 packages was missing from the wireshark packages. As a consequence, the Wireshark tool failed to start under certain circumstances due to an unresolved symbol, "gtk_combo_box_text_new_with_entry", which was added in gtk version 2.24. With this update, a dependency on gtk2 has been added, and Wireshark now always starts as expected. Enhancements BZ# 1104210 With this update, the Wireshark tool supports process substitution, which feeds the output of a process (or processes) into the standard input of another process using the "<(command_list)" syntax. Previously, when using process substitution with large files as input, Wireshark failed to decode such input. BZ# 1146578 Wireshark has been enhanced to enable capturing packets with nanosecond time stamp precision, which allows better analysis of recorded network traffic. All wireshark users are advised to upgrade to these updated packages, which contain backported patches to correct these issues and add these enhancements. All running instances of Wireshark must be restarted for the update to take effect.
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/6.7_technical_notes/package-wireshark
Chapter 5. JWS Operator deletion from a cluster
Chapter 5. JWS Operator deletion from a cluster If you no longer need to use the JWS Operator, you can subsequently delete the JWS Operator from a cluster. You can delete the JWS Operator from a cluster in either of the following ways: Use the OpenShift web console . Use the oc command-line tool . 5.1. Deleting the JWS Operator by using the web console If you want to delete the JWS Operator by using a graphical user interface, you can use the OpenShift web console to delete the JWS Operator. Prerequisites You have deployed an OpenShift Container Platform cluster by using an account with cluster admin permissions. Note If you do not have cluster admin permissions, you can circumvent this requirement. For more information, see Allowing non-cluster administrators to install Operators . Procedure Open the web console and click Operators > Installed Operators . Select the Actions menu and click Uninstall Operator . Note The Uninstall Operator option automatically removes the Operator, any Operator deployments, and Pods. Deleting the Operator does not remove any custom resource definitions or custom resources for the Operator, including CRDs or CRs. If the Operator has deployed applications on the cluster, or if the Operator has configured resources outside the cluster, you must clean up these applications and resources manually. 5.2. Deleting the JWS Operator by using the command line If you want to delete the JWS Operator by using a command-line interface, you can use the oc command-line tool to delete the JWS Operator. Prerequisites You have deployed an OpenShift Container Platform cluster by using an account with cluster admin permissions. Note If you do not have cluster admin permissions, you can circumvent this requirement. For more information, see Allowing non-cluster administrators to install Operators . You have installed the oc tool on your local system. Procedure Check the current version of the subscribed Operator: In the preceding example, replace <project_name> with the namespace of the project where you installed the Operator. If your Operator was installed to all namespaces, replace <project_name> with openshift-operators . The preceding command displays the following output, where v2.0. x refers to the Operator version (for example, v2.0.6 ): Delete the subscription for the Operator: In the preceding example, replace <project_name> with the namespace of the project where you installed the Operator. If your operator was installed to all namespaces, replace <project_name> with openshift-operators . Delete the CSV for the Operator in the target namespace: In the preceding example, replace <currentCSV> with the currentCSV value that you obtained in Step 1 (for example, jws-operator.v2.0.6 ). Replace <project_name> with the namespace of the project where you installed the Operator. If your operator was installed to all namespaces, replace <project_name> with openshift-operators . The preceding command displays a message to confirm that the CSV is deleted. For example:
[ "oc get subscription jws-operator -n <project_name> -o yaml | grep currentCSV", "f:currentCSV: {} currentCSV: jws-operator.v2.0. x", "oc delete subscription jws-operator -n <project_name>", "oc delete clusterserviceversion <currentCSV> -n <project_name>", "clusterserviceversion.operators.coreos.com \"jws-operator.v2.0. x \" deleted" ]
https://docs.redhat.com/en/documentation/red_hat_jboss_web_server/6.0/html/red_hat_jboss_web_server_operator/con_jws-operator-deletion_jws-operator
Chapter 11. Red Hat build of Kogito service properties configuration
Chapter 11. Red Hat build of Kogito service properties configuration When a Red Hat build of Kogito microservice is deployed, a configMap resource is created for the application.properties configuration of the Red Hat build of Kogito microservice. The name of the configMap resource consists of the name of the Red Hat build of Kogito microservice and the suffix -properties , as shown in the following example: Example configMap resource generated during Red Hat build of Kogito microservice deployment kind: ConfigMap apiVersion: v1 metadata: name: kogito-travel-agency-properties data: application.properties : |- property1=value1 property2=value2 The application.properties data of the configMap resource is mounted in a volume to the container of the Red Hat build of Kogito microservice. Any runtime properties that you add to the application.properties section override the default application configuration properties of the Red Hat build of Kogito microservice. When the application.properties data of the configMap is changed, a rolling update modifies the deployment and configuration of the Red Hat build of Kogito microservice.
[ "kind: ConfigMap apiVersion: v1 metadata: name: kogito-travel-agency-properties data: application.properties : |- property1=value1 property2=value2" ]
https://docs.redhat.com/en/documentation/red_hat_decision_manager/7.13/html/getting_started_with_red_hat_build_of_kogito_in_red_hat_decision_manager/con-kogito-operator-deployment-configs_deploying-kogito-microservices-on-openshift
9.8. Setting up Resumable CRL Downloads
9.8. Setting up Resumable CRL Downloads Certificate System provides an option for interrupted CRL downloads to be resumed smoothly. This is done by publishing the CRLs as a plain file over HTTP. This method of downloading CRLs gives flexibility in retrieving CRLs and lowers overall network congestion. 9.8.1. Retrieving CRLs Using wget Because CRLs can be published as a text file over HTTP, they can be manually retrieved from the CA using a tool such as wget . The wget command can be used to retrieve any published CRL. For example, to retrieve the full CRL from the CA: The relevant parameters for wget are summarized in Table 9.4, "wget Options to Use for Retrieving CRLs" . Table 9.4. wget Options to Use for Retrieving CRLs Argument Description no argument Retrieves the full CRL. -N Retrieves the CRL that is newer than the local copy (delta CRL). -c Resumes retrieval of a partially-downloaded file. --no-check-certificate Skips SSL for the connection, so it is not necessary to configure SSL between the host and client. -d Prints debug information.
[ "wget --no-check-certificate -d https://server.example.com:8443/ca/ee/ca/crl/MasterCRL.bin" ]
https://docs.redhat.com/en/documentation/red_hat_certificate_system/10/html/administration_guide/crls-http
4.193. NetworkManager-openswan
4.193. NetworkManager-openswan 4.193.1. RHBA-2011:1771 - NetworkManager-openswan bug fix update An updated NetworkManager-openswan package that fixes various bugs is now available for Red Hat Enterprise Linux 6. NetworkManager-openswan contains software for integrating the Openswan VPN software with NetworkManager and the GNOME desktop. Bug Fixes BZ# 684809 Previously, when an openswan VPN was established, the NetworkManager applet did not display any notification (login banner), and the error message "Error getting 'Banner'" was logged. With this update, NetworkManager now displays the connection establishment notification as a tooltip for the NetworkManager icon. BZ# 702323 Prior to this update, networkmanager-openswan did not provide an export feature. Due to this, it was not possible to save the configuration settings to a file. This update adds this feature, and it is now possible to export configuration settings to a file. BZ# 705890 Prior to this update, NetworkManager could not properly track the status of an openswan VPN. Consequently, when an openswan VPN was disconnected, NetworkManager did not remove the VPN padlock icon. This update fixes this issue, and the VPN padlock icon is now removed after an openswan VPN connection is terminated. All users of NetworkManager-openswan are advised to upgrade to this updated package, which fixes these bugs.
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/6.2_technical_notes/networkmanager-openswan
Chapter 3. Kafka broker configuration tuning
Chapter 3. Kafka broker configuration tuning Use configuration properties to optimize the performance of Kafka brokers. You can use standard Kafka broker configuration options, except for properties managed directly by AMQ Streams. 3.1. Basic broker configuration A typical broker configuration will include settings for properties related to topics, threads and logs. Basic broker configuration properties # ... num.partitions=1 default.replication.factor=3 offsets.topic.replication.factor=3 transaction.state.log.replication.factor=3 transaction.state.log.min.isr=2 log.retention.hours=168 log.segment.bytes=1073741824 log.retention.check.interval.ms=300000 num.network.threads=3 num.io.threads=8 num.recovery.threads.per.data.dir=1 socket.send.buffer.bytes=102400 socket.receive.buffer.bytes=102400 socket.request.max.bytes=104857600 group.initial.rebalance.delay.ms=0 zookeeper.connection.timeout.ms=6000 # ... 3.2. Replicating topics for high availability Basic topic properties set the default number of partitions and replication factor for topics, which will apply to topics that are created without these properties being explicitly set, including when topics are created automatically. # ... num.partitions=1 auto.create.topics.enable=false default.replication.factor=3 min.insync.replicas=2 replica.fetch.max.bytes=1048576 # ... For high availability environments, it is advisable to increase the replication factor to at least 3 for topics and set the minimum number of in-sync replicas required to 1 less than the replication factor. The auto.create.topics.enable property is enabled by default so that topics that do not already exist are created automatically when needed by producers and consumers. If you are using automatic topic creation, you can set the default number of partitions for topics using num.partitions . Generally, however, this property is disabled so that more control is provided over topics through explicit topic creation. For data durability , you should also set min.insync.replicas in your topic configuration and message delivery acknowledgments using acks=all in your producer configuration. Use replica.fetch.max.bytes to set the maximum size, in bytes, of messages fetched by each follower that replicates the leader partition. Change this value according to the average message size and throughput. When considering the total memory allocation required for read/write buffering, the memory available must also be able to accommodate the maximum replicated message size when multiplied by all followers. The delete.topic.enable property is enabled by default to allow topics to be deleted. In a production environment, you should disable this property to avoid accidental topic deletion, resulting in data loss. You can, however, temporarily enable it and delete topics and then disable it again. Note When running AMQ Streams on OpenShift, the Topic Operator can provide operator-style topic management. You can use the KafkaTopic resource to create topics. For topics created using the KafkaTopic resource, the replication factor is set using spec.replicas . If delete.topic.enable is enabled, you can also delete topics using the KafkaTopic resource. # ... auto.create.topics.enable=false delete.topic.enable=true # ... 3.3. Internal topic settings for transactions and commits If you are using transactions to enable atomic writes to partitions from producers, the state of the transactions is stored in the internal __transaction_state topic. 
By default, the brokers are configured with a replication factor of 3 and a minimum of 2 in-sync replicas for this topic, which means that a minimum of three brokers are required in your Kafka cluster. # ... transaction.state.log.replication.factor=3 transaction.state.log.min.isr=2 # ... Similarly, the internal __consumer_offsets topic, which stores consumer state, has default settings for the number of partitions and replication factor. # ... offsets.topic.num.partitions=50 offsets.topic.replication.factor=3 # ... Do not reduce these settings in production. You can increase the settings in a production environment. As an exception, you might want to reduce the settings in a single-broker test environment. 3.4. Improving request handling throughput by increasing I/O threads Network threads handle requests to the Kafka cluster, such as produce and fetch requests from client applications. Produce requests are placed in a request queue. Responses are placed in a response queue. The number of network threads per listener should reflect the replication factor and the levels of activity from client producers and consumers interacting with the Kafka cluster. If you are going to have a lot of requests, you can increase the number of threads, using the amount of time threads are idle to determine when to add more threads. To reduce congestion and regulate the request traffic, you can limit the number of requests allowed in the request queue. When the request queue is full, all incoming traffic is blocked. I/O threads pick up requests from the request queue to process them. Adding more threads can improve throughput, but the number of CPU cores and disk bandwidth imposes a practical upper limit. At a minimum, the number of I/O threads should equal the number of storage volumes. # ... num.network.threads=3 1 queued.max.requests=500 2 num.io.threads=8 3 num.recovery.threads.per.data.dir=4 4 # ... 1 The number of network threads for the Kafka cluster. 2 The number of requests allowed in the request queue. 3 The number of I/O threads for a Kafka broker. 4 The number of threads used for log loading at startup and flushing at shutdown. Try setting to a value of at least the number of cores. Configuration updates to the thread pools for all brokers might occur dynamically at the cluster level. These updates are restricted to between half the current size and twice the current size. Tip The following Kafka broker metrics can help with working out the number of threads required: kafka.network:type=SocketServer,name=NetworkProcessorAvgIdlePercent provides metrics on the average time network threads are idle as a percentage. kafka.server:type=KafkaRequestHandlerPool,name=RequestHandlerAvgIdlePercent provides metrics on the average time I/O threads are idle as a percentage. If there is 0% idle time, all resources are in use, which means that adding more threads might be beneficial. When idle time goes below 30%, performance may start to suffer. If threads are slow or limited due to the number of disks, you can try increasing the size of the buffers for network requests to improve throughput: # ... replica.socket.receive.buffer.bytes=65536 # ... And also increase the maximum number of bytes Kafka can receive: # ... socket.request.max.bytes=104857600 # ... 3.5. Increasing bandwidth for high latency connections Kafka batches data to achieve reasonable throughput over high-latency connections from Kafka to clients, such as connections between datacenters. 
However, if high latency is a problem, you can increase the size of the buffers for sending and receiving messages. # ... socket.send.buffer.bytes=1048576 socket.receive.buffer.bytes=1048576 # ... You can estimate the optimal size of your buffers using a bandwidth-delay product calculation, which multiplies the maximum bandwidth of the link (in bytes/s) with the round-trip delay (in seconds) to give an estimate of how large a buffer is required to sustain maximum throughput. 3.6. Managing logs with data retention policies Kafka uses logs to store message data. Logs are a series of segments associated with various indexes. New messages are written to an active segment, and never subsequently modified. Segments are read when serving fetch requests from consumers. Periodically, the active segment is rolled to become read-only and a new active segment is created to replace it. There is only a single segment active at a time. Older segments are retained until they are eligible for deletion. Configuration at the broker level sets the maximum size in bytes of a log segment and the amount of time in milliseconds before an active segment is rolled: # ... log.segment.bytes=1073741824 log.roll.ms=604800000 # ... You can override these settings at the topic level using segment.bytes and segment.ms . Whether you need to lower or raise these values depends on the policy for segment deletion. A larger size means the active segment contains more messages and is rolled less often. Segments also become eligible for deletion less often. You can set time-based or size-based log retention and cleanup policies so that logs are kept manageable. Depending on your requirements, you can use log retention configuration to delete old segments. If log retention policies are used, non-active log segments are removed when retention limits are reached. Deleting old segments bounds the storage space required for the log so you do not exceed disk capacity. For time-based log retention, you set a retention period based on hours, minutes and milliseconds. The retention period is based on the time messages were appended to the segment. The milliseconds configuration has priority over minutes, which has priority over hours. The minutes and milliseconds configuration is null by default, but the three options provide a substantial level of control over the data you wish to retain. Preference should be given to the milliseconds configuration, as it is the only one of the three properties that is dynamically updateable. # ... log.retention.ms=1680000 # ... If log.retention.ms is set to -1, no time limit is applied to log retention, so all logs are retained. Disk usage should always be monitored, but the -1 setting is not generally recommended as it can lead to issues with full disks, which can be hard to rectify. For size-based log retention, you set a maximum log size (of all segments in the log) in bytes: # ... log.retention.bytes=1073741824 # ... In other words, a log will typically have approximately log.retention.bytes/log.segment.bytes segments once it reaches a steady state. When the maximum log size is reached, older segments are removed. A potential issue with using a maximum log size is that it does not take into account the time messages were appended to a segment. You can use time-based and size-based log retention for your cleanup policy to get the balance you need. Whichever threshold is reached first triggers the cleanup. 
If you wish to add a time delay before a segment file is deleted from the system, you can add the delay using log.segment.delete.delay.ms for all topics at the broker level or file.delete.delay.ms for specific topics in the topic configuration. # ... log.segment.delete.delay.ms=60000 # ... 3.7. Removing log data with cleanup policies The method of removing older log data is determined by the log cleaner configuration. The log cleaner is enabled for the broker by default: # ... log.cleaner.enable=true # ... The log cleaner needs to be enabled if you are using the log compaction cleanup policy. You can set the cleanup policy at the topic or broker level. Broker-level configuration is the default for topics that do not have a policy set. You can set the policy to delete logs, compact logs, or do both: # ... log.cleanup.policy=compact,delete # ... The delete policy corresponds to managing logs with data retention policies. It is suitable when data does not need to be retained forever. The compact policy guarantees to keep the most recent message for each message key. Log compaction is suitable where message values are changeable, and you want to retain the latest update. If the cleanup policy is set to delete logs, older segments are deleted based on log retention limits. Otherwise, if the log cleaner is not enabled, and there are no log retention limits, the log will continue to grow. If the cleanup policy is set for log compaction, the head of the log operates as a standard Kafka log, with writes for new messages appended in order. In the tail of a compacted log, where the log cleaner operates, records will be deleted if another record with the same key occurs later in the log. Messages with null values are also deleted. If you're not using keys, you can't use compaction because keys are needed to identify related messages. While Kafka guarantees that the latest messages for each key will be retained, it does not guarantee that the whole compacted log will not contain duplicates. Figure 3.1. Log showing key value writes with offset positions before compaction Using keys to identify messages, Kafka compaction keeps the latest message (with the highest offset) for a specific message key, eventually discarding earlier messages that have the same key. In other words, the message in its latest state is always available, and any out-of-date records of that particular message are eventually removed when the log cleaner runs. You can restore a message back to a previous state. Records retain their original offsets even when surrounding records get deleted. Consequently, the tail can have non-contiguous offsets. When consuming an offset that is no longer available in the tail, the record with the next higher offset is found. Figure 3.2. Log after compaction If you choose only a compact policy, your log can still become arbitrarily large. In that case, you can set the policy to compact and delete logs. If you choose to compact and delete, first the log data is compacted, removing records with a key in the head of the log. After that, data that falls before the log retention threshold is deleted. Figure 3.3. Log retention point and compaction point You set the frequency at which the log is checked for cleanup in milliseconds: # ... log.retention.check.interval.ms=300000 # ... Adjust the log retention check interval in relation to the log retention settings. Smaller retention sizes might require more frequent checks. The frequency of cleanup should be often enough to manage the disk space, but not so often that it affects performance on a topic. 
You can also set a time in milliseconds to put the cleaner on standby if there are no logs to clean: # ... log.cleaner.backoff.ms=15000 # ... If you choose to delete older log data, you can set a period in milliseconds to retain the deleted data before it is purged: # ... log.cleaner.delete.retention.ms=86400000 # ... The deleted data retention period gives time to notice the data is gone before it is irretrievably deleted. To delete all messages related to a specific key, a producer can send a tombstone message. A tombstone has a null value and acts as a marker to tell a consumer the value is deleted. After compaction, only the tombstone is retained, which must be for a long enough period for the consumer to know that the message is deleted. When older messages are deleted, having no value, the tombstone key is also deleted from the partition. 3.8. Managing disk utilization There are many other configuration settings related to log cleanup, but of particular importance is memory allocation. The deduplication property specifies the total memory for cleanup across all log cleaner threads. You can set an upper limit on the percentage of memory used through the buffer load factor. # ... log.cleaner.dedupe.buffer.size=134217728 log.cleaner.io.buffer.load.factor=0.9 # ... Each log entry uses exactly 24 bytes, so you can work out how many log entries the buffer can handle in a single run and adjust the setting accordingly. If possible, consider increasing the number of log cleaner threads if you are looking to reduce the log cleaning time: # ... log.cleaner.threads=8 # ... If you are experiencing issues with 100% disk bandwidth usage, you can throttle the log cleaner I/O so that the sum of the read/write operations is less than a specified double value based on the capabilities of the disks performing the operations: # ... log.cleaner.io.max.bytes.per.second=1.7976931348623157E308 # ... 3.9. Handling large message sizes The default batch size for messages is 1MB, which is optimal for maximum throughput in most use cases. Kafka can accommodate larger batches at a reduced throughput, assuming adequate disk capacity. Large message sizes are handled in four ways: Producer-side message compression writes compressed messages to the log. Reference-based messaging sends only a reference to data stored in some other system in the message's value. Inline messaging splits messages into chunks that use the same key, which are then combined on output using a stream-processor like Kafka Streams. Broker and producer/consumer client application configuration built to handle larger message sizes. The reference-based messaging and message compression options are recommended and cover most situations. With any of these options, care must be taken to avoid introducing performance issues. Producer-side compression For producer configuration, you specify a compression.type , such as Gzip, which is then applied to batches of data generated by the producer. Using the broker configuration compression.type=producer , the broker retains whatever compression the producer used. Whenever producer and topic compression do not match, the broker has to compress batches again prior to appending them to the log, which impacts broker performance. Compression also adds additional processing overhead on the producer and decompression overhead on the consumer, but includes more data in a batch, so is often beneficial to throughput when message data compresses well. 
Combine producer-side compression with fine-tuning of the batch size to facilitate optimum throughput. Using metrics helps to gauge the average batch size needed. Reference-based messaging Reference-based messaging is useful for data replication when you do not know how big a message will be. The external data store must be fast, durable, and highly available for this configuration to work. Data is written to the data store and a reference to the data is returned. The producer sends a message containing the reference to Kafka. The consumer gets the reference from the message and uses it to fetch the data from the data store. Figure 3.4. Reference-based messaging flow As the message passing requires more trips, end-to-end latency will increase. Another significant drawback of this approach is there is no automatic clean up of the data in the external system when the Kafka message gets cleaned up. A hybrid approach would be to only send large messages to the data store and process standard-sized messages directly. Inline messaging Inline messaging is complex, but it does not have the overhead of depending on external systems like reference-based messaging. The producing client application has to serialize and then chunk the data if the message is too big. The producer then uses the Kafka ByteArraySerializer or similar to serialize each chunk again before sending it. The consumer tracks messages and buffers chunks until it has a complete message. The consuming client application receives the chunks, which are assembled before deserialization. Complete messages are delivered to the rest of the consuming application in order according to the offset of the first or last chunk for each set of chunked messages. Successful delivery of the complete message is checked against offset metadata to avoid duplicates during a rebalance. Figure 3.5. Inline messaging flow Inline messaging has a performance overhead on the consumer side because of the buffering required, particularly when handling a series of large messages in parallel. The chunks of large messages can become interleaved, so that it is not always possible to commit when all the chunks of a message have been consumed if the chunks of another large message in the buffer are incomplete. For this reason, the buffering is usually supported by persisting message chunks or by implementing commit logic. Configuration to handle larger messages If larger messages cannot be avoided, and to avoid blocks at any point of the message flow, you can increase message limits. To do this, configure message.max.bytes at the topic level to set the maximum record batch size for individual topics. If you set message.max.bytes at the broker level, larger messages are allowed for all topics. The broker will reject any message that is greater than the limit set with message.max.bytes . The buffer size for the producers ( max.request.size ) and consumers ( message.max.bytes ) must be able to accommodate the larger messages. 3.10. Controlling the log flush of message data Generally, the recommendation is to not set explicit flush thresholds and let the operating system perform background flush using its default settings. Partition replication provides greater data durability than writes to any single disk, as a failed broker can recover from its in-sync replicas. Log flush properties control the periodic writes of cached message data to disk. The scheduler specifies the frequency of checks on the log cache in milliseconds: # ... log.flush.scheduler.interval.ms=2000 # ... 
You can control the frequency of the flush based on the maximum amount of time that a message is kept in-memory and the maximum number of messages in the log before writing to disk: # ... log.flush.interval.ms=50000 log.flush.interval.messages=100000 # ... The wait between flushes includes the time to make the check and the specified interval before the flush is carried out. Increasing the frequency of flushes can affect throughput. If you are using application flush management, setting lower flush thresholds might be appropriate if you are using faster disks. 3.11. Partition rebalancing for availability Partitions can be replicated across brokers for fault tolerance. For a given partition, one broker is elected leader and handles all produce requests (writes to the log). Partition followers on other brokers replicate the partition data of the partition leader for data reliability in the event of the leader failing. Followers do not normally serve clients, though rack configuration allows a consumer to consume messages from the closest replica when a Kafka cluster spans multiple datacenters. Followers operate only to replicate messages from the partition leader and allow recovery should the leader fail. Recovery requires an in-sync follower. Followers stay in sync by sending fetch requests to the leader, which returns messages to the follower in order. The follower is considered to be in sync if it has caught up with the most recently committed message on the leader. The leader checks this by looking at the last offset requested by the follower. An out-of-sync follower is usually not eligible as a leader should the current leader fail, unless unclean leader election is allowed . You can adjust the lag time before a follower is considered out of sync: # ... replica.lag.time.max.ms=30000 # ... Lag time puts an upper limit on the time to replicate a message to all in-sync replicas and how long a producer has to wait for an acknowledgment. If a follower fails to make a fetch request and catch up with the latest message within the specified lag time, it is removed from in-sync replicas. You can reduce the lag time to detect failed replicas sooner, but by doing so you might increase the number of followers that fall out of sync needlessly. The right lag time value depends on both network latency and broker disk bandwidth. When a leader partition is no longer available, one of the in-sync replicas is chosen as the new leader. The first broker in a partition's list of replicas is known as the preferred leader. By default, Kafka is enabled for automatic partition leader rebalancing based on a periodic check of leader distribution. That is, Kafka checks to see if the preferred leader is the current leader. A rebalance ensures that leaders are evenly distributed across brokers and brokers are not overloaded. You can use Cruise Control for AMQ Streams to figure out replica assignments to brokers that balance load evenly across the cluster. Its calculation takes into account the differing load experienced by leaders and followers. A failed leader affects the balance of a Kafka cluster because the remaining brokers get the extra work of leading additional partitions. For the assignment found by Cruise Control to actually be balanced, it is necessary that partitions are led by the preferred leader. Kafka can automatically ensure that the preferred leader is being used (where possible), changing the current leader if necessary. 
This ensures that the cluster remains in the balanced state found by Cruise Control. You can control the frequency, in seconds, of the rebalance check and the maximum percentage of imbalance allowed for a broker before a rebalance is triggered. #... auto.leader.rebalance.enable=true leader.imbalance.check.interval.seconds=300 leader.imbalance.per.broker.percentage=10 #... The percentage leader imbalance for a broker is the ratio between the current number of partitions for which the broker is the current leader and the number of partitions for which it is the preferred leader. You can set the percentage to zero to ensure that preferred leaders are always elected, assuming they are in sync. If the checks for rebalances need more control, you can disable automated rebalances. You can then choose when to trigger a rebalance using the kafka-leader-election.sh command line tool. Note The Grafana dashboards provided with AMQ Streams show metrics for under-replicated partitions and partitions that do not have an active leader. 3.12. Unclean leader election Leader election to an in-sync replica is considered clean because it guarantees no loss of data. And this is what happens by default. But what if there is no in-sync replica to take on leadership? Perhaps the ISR (in-sync replica) only contained the leader when the leader's disk died. If a minimum number of in-sync replicas is not set, and there are no followers in sync with the partition leader when its hard drive fails irrevocably, data is already lost. Not only that, but a new leader cannot be elected because there are no in-sync followers. You can configure how Kafka handles leader failure: # ... unclean.leader.election.enable=false # ... Unclean leader election is disabled by default, which means that out-of-sync replicas cannot become leaders. With clean leader election, if no other broker was in the ISR when the old leader was lost, Kafka waits until that leader is back online before messages can be written or read. Unclean leader election means out-of-sync replicas can become leaders, but you risk losing messages. The choice you make depends on whether your requirements favor availability or durability. You can override the default configuration for specific topics at the topic level. If you cannot afford the risk of data loss, then leave the default configuration. 3.13. Avoiding unnecessary consumer group rebalances For consumers joining a new consumer group, you can add a delay so that unnecessary rebalances to the broker are avoided: # ... group.initial.rebalance.delay.ms=3000 # ... The delay is the amount of time that the coordinator waits for members to join. The longer the delay, the more likely it is that all the members will join in time and avoid a rebalance. But the delay also prevents the group from consuming until the period has ended.
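To make the bandwidth-delay product guidance in Section 3.5 concrete, here is a worked example with assumed figures that do not come from this guide: for a 1 Gbps link (125,000,000 bytes/s) and a 40 ms round-trip time (0.04 s), the product is 125,000,000 x 0.04 = 5,000,000 bytes, so buffers of roughly 5 MB are needed to keep the link full. The corresponding broker settings might then look like the following sketch.

# Sketch only: buffer sizes derived from the assumed 1 Gbps / 40 ms example above
socket.send.buffer.bytes=5242880
socket.receive.buffer.bytes=5242880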
[ "num.partitions=1 default.replication.factor=3 offsets.topic.replication.factor=3 transaction.state.log.replication.factor=3 transaction.state.log.min.isr=2 log.retention.hours=168 log.segment.bytes=1073741824 log.retention.check.interval.ms=300000 num.network.threads=3 num.io.threads=8 num.recovery.threads.per.data.dir=1 socket.send.buffer.bytes=102400 socket.receive.buffer.bytes=102400 socket.request.max.bytes=104857600 group.initial.rebalance.delay.ms=0 zookeeper.connection.timeout.ms=6000", "num.partitions=1 auto.create.topics.enable=false default.replication.factor=3 min.insync.replicas=2 replica.fetch.max.bytes=1048576", "auto.create.topics.enable=false delete.topic.enable=true", "transaction.state.log.replication.factor=3 transaction.state.log.min.isr=2", "offsets.topic.num.partitions=50 offsets.topic.replication.factor=3", "num.network.threads=3 1 queued.max.requests=500 2 num.io.threads=8 3 num.recovery.threads.per.data.dir=4 4", "replica.socket.receive.buffer.bytes=65536", "socket.request.max.bytes=104857600", "socket.send.buffer.bytes=1048576 socket.receive.buffer.bytes=1048576", "log.segment.bytes=1073741824 log.roll.ms=604800000", "log.retention.ms=1680000", "log.retention.bytes=1073741824", "log.segment.delete.delay.ms=60000", "log.cleaner.enable=true", "log.cleanup.policy=compact,delete", "log.retention.check.interval.ms=300000", "log.cleaner.backoff.ms=15000", "log.cleaner.delete.retention.ms=86400000", "log.cleaner.dedupe.buffer.size=134217728 log.cleaner.io.buffer.load.factor=0.9", "log.cleaner.threads=8", "log.cleaner.io.max.bytes.per.second=1.7976931348623157E308", "log.flush.scheduler.interval.ms=2000", "log.flush.interval.ms=50000 log.flush.interval.messages=100000", "replica.lag.time.max.ms=30000", "# auto.leader.rebalance.enable=true leader.imbalance.check.interval.seconds=300 leader.imbalance.per.broker.percentage=10 #", "unclean.leader.election.enable=false", "group.initial.rebalance.delay.ms=3000" ]
https://docs.redhat.com/en/documentation/red_hat_streams_for_apache_kafka/2.5/html/kafka_configuration_tuning/con-broker-config-properties-str
Providing feedback on Red Hat documentation
Providing feedback on Red Hat documentation We appreciate your input on our documentation. Tell us how we can make it better. Providing documentation feedback in Jira Use the Create Issue form to provide feedback on the documentation for Red Hat OpenStack Services on OpenShift (RHOSO) or earlier releases of Red Hat OpenStack Platform (RHOSP). When you create an issue for RHOSO or RHOSP documents, the issue is recorded in the RHOSO Jira project, where you can track the progress of your feedback. To complete the Create Issue form, ensure that you are logged in to Jira. If you do not have a Red Hat Jira account, you can create an account at https://issues.redhat.com . Click the following link to open a Create Issue page: Create Issue Complete the Summary and Description fields. In the Description field, include the documentation URL, chapter or section number, and a detailed description of the issue. Do not modify any other fields in the form. Click Create .
null
https://docs.redhat.com/en/documentation/red_hat_openstack_services_on_openshift/18.0/html/performing_storage_operations/proc_providing-feedback-on-red-hat-documentation
2.8. Configuration History Views
2.8. Configuration History Views To query a configuration view, run SELECT * FROM view_name ; . For example: To list all available views, run: Note delete_date does not appear in latest views because these views provide the latest configuration of living entities, which, by definition, have not been deleted. 2.8.1. Data Center Configuration The following table shows the configuration history parameters of the data centers in the system. Table 2.8. v4_2_configuration_history_datacenters Name Type Description Indexed history_id integer The ID of the configuration version in the history database. This is identical to the value of datacenter_configuration_version in the v4_2_configuration_history_clusters view and it can be used to join them. No datacenter_id uuid The unique ID of the data center in the system. Yes datacenter_name character varying(40) Name of the data center, as displayed in the edit dialog. No datacenter_description character varying(4000) Description of the data center, as displayed in the edit dialog. No is_local_storage boolean A flag to indicate whether the data center uses local storage. No create_date timestamp with time zone The date this entity was added to the system. No update_date timestamp with time zone The date this entity was changed in the system. No delete_date timestamp with time zone The date this entity was deleted from the system. No 2.8.2. Data Center Storage Domain Map The following table shows the relationships between storage domains and data centers in the system. Table 2.9. v4_2_map_history_datacenters_storage_domains Name Type Description Indexed history_id integer The ID of the configuration version in the history database. This is identical to the value of datacenter_configuration_version in the v4_2_configuration_history_clusters view and it can be used to join them. No storage_domain_id uuid The unique ID of this storage domain in the system. Yes datacenter_id uuid The unique ID of the data center in the system. No attach_date timestamp with time zone The date the storage domain was attached to the data center. No detach_date timestamp with time zone The date the storage domain was detached from the data center. No 2.8.3. Storage Domain Configuration The following table shows the configuration history parameters of the storage domains in the system. Table 2.10. v4_2_configuration_history_storage_domains Name Type Description Indexed history_id integer The ID of the configuration version in the history database. This is identical to the value of storage_configuration_version in the storage domain statistics views and it can be used to join them. No storage_domain_id uuid The unique ID of this storage domain in the system. Yes storage_domain_name character varying(250) Storage domain name. No storage_domain_type smallint * 0 - Data (Master) * 1 - Data * 2 - ISO * 3 - Export No storage_type smallint * 0 - Unknown * 1 - NFS * 2 - FCP * 3 - iSCSI * 4 - Local * 6 - All No create_date timestamp with time zone The date this entity was added to the system. No update_date timestamp with time zone The date this entity was changed in the system. No delete_date timestamp with time zone The date this entity was deleted from the system. No 2.8.4. Cluster Configuration The following table shows the configuration history parameters of the clusters in the system. Table 2.11. v4_2_configuration_history_clusters Name Type Description Indexed history_id integer The ID of the configuration version in the history database. 
This is identical to the value of cluster_configuration_version in the v4_2_configuration_history_hosts and v4_2_configuration_history_vms views and it can be used to join them. No cluster_id uuid The unique ID of the cluster in the system. Yes cluster_name character varying(40) Name of the cluster, as displayed in the edit dialog. No cluster_description character varying(4000) As defined in the edit dialog. No datacenter_id uuid The unique identifier of the datacenter this cluster resides in. Yes cpu_name character varying(255) As displayed in the edit dialog. No compatibility_version character varying(40) As displayed in the edit dialog. No datacenter_configuration_version integer The data center configuration version at the time of creation or update. This is identical to the value of history_id in the v4_2_configuration_history_datacenters view and it can be used to join them. No create_date timestamp with time zone The date this entity was added to the system. No update_date timestamp with time zone The date this entity was changed in the system. No delete_date timestamp with time zone The date this entity was deleted from the system. No 2.8.5. Host Configuration The following table shows the configuration history parameters of the hosts in the system. Table 2.12. v4_2_configuration_history_hosts Name Type Description Indexed history_id integer The ID of the configuration version in the history database. This is identical to the value of host_configuration_version in the host statistics views and it can be used to join them. No host_id uuid The unique ID of the host in the system. Yes host_unique_id character varying(128) This field is a combination of the host's physical UUID and one of its MAC addresses, and is used to detect hosts already registered in the system. No host_name character varying(255) Name of the host (same as in the edit dialog). No cluster_id uuid The unique ID of the cluster that this host belongs to. Yes host_type smallint * 0 - RHEL Host * 2 - RHEV Hypervisor Node No fqdn_or_ip character varying(255) The host's DNS name or its IP address for Red Hat Virtualization Manager to communicate with (as displayed in the edit dialog). No memory_size_mb integer The host's physical memory capacity, expressed in megabytes (MB). No swap_size_mb integer The host swap partition size. No cpu_model character varying(255) The host's CPU model. No number_of_cores smallint Total number of CPU cores in the host. No number_of_sockets smallint Total number of CPU sockets. No cpu_speed_mh numeric(18,0) The host's CPU speed, expressed in megahertz (MHz). No host_os character varying(255) The host's operating system version. No kernel_version character varying(255) The host's kernel version. No kvm_version character varying(255) The host's KVM version. No vdsm_version character varying The host's VDSM version. No vdsm_port integer As displayed in the edit dialog. No threads_per_core smallint Total number of threads per core. No hardware_manufacturer character varying(255) The host's hardware manufacturer. No hardware_product_name character varying(255) The product name of the host's hardware. No hardware_version character varying(255) The version of the host's hardware. No hardware_serial_number character varying(255) The serial number of the host's hardware. No cluster_configuration_version integer The cluster configuration version at the time of creation or update. 
This is identical to the value of history_id in the v4_2_configuration_history_clusters view and it can be used to join them. No create_date timestamp with time zone The date this entity was added to the system. No update_date timestamp with time zone The date this entity was changed in the system. No delete_date timestamp with time zone The date this entity was deleted from the system. No 2.8.6. Host Interface Configuration The following table shows the configuration history parameters of the host interfaces in the system. Table 2.13. v4_2_configuration_history_hosts_interfaces Name Type Description Indexed history_id integer The ID of the configuration version in the history database. This is identical to the value of host_interface_configuration_version in the host interface statistics views and it can be used to join them. No host_interface_id uuid The unique ID of this interface in the system. Yes host_interface_name character varying(50) The interface name as reported by the host. No host_id uuid Unique ID of the host this interface belongs to. Yes host_interface_type smallint * 0 - rt18139_pv * 1 - rt18139 * 2 - e1000 * 3 - pv No host_interface_speed_bps integer The interface speed in bits per second. No mac_address character varying(59) The interface MAC address. No logical_network_name character varying(50) The logical network associated with the interface. No ip_address character varying(20) As displayed in the edit dialog. No gateway character varying(20) As displayed in the edit dialog. No bond boolean A flag to indicate if this interface is a bonded interface. No bond_name character varying(50) The name of the bond this interface is part of (if it is part of a bond). No vlan_id integer As displayed in the edit dialog. No host_configuration_version integer The host configuration version at the time of creation or update. This is identical to the value of history_id in the v4_2_configuration_history_hosts view and it can be used to join them. No create_date timestamp with time zone The date this entity was added to the system. No update_date timestamp with time zone The date this entity was changed in the system. No delete_date timestamp with time zone The date this entity was deleted from the system. No 2.8.7. Virtual Machine Configuration The following table shows the configuration history parameters of the virtual machines in the system. Table 2.14. v4_2_configuration_history_vms Name Type Description Indexed history_id integer The ID of the configuration version in the history database. This is identical to the value of vm_configuration_version in the virtual machine statistics views and it can be used to join them. No vm_id uuid The unique ID of this virtual machine in the system. Yes vm_name character varying(255) The name of the virtual machine. No vm_description character varying(4000) As displayed in the edit dialog. No vm_type smallint * 0 - Desktop * 1 - Server No cluster_id uuid The unique ID of the cluster this virtual machine belongs to. Yes template_id uuid The unique ID of the template this virtual machine is derived from. Templates are not synchronized to the history database in this version of Red Hat Virtualization. No template_name character varying(40) Name of the template from which this virtual machine is derived. No cpu_per_socket smallint Virtual CPUs per socket. No number_of_sockets smallint Total number of virtual CPU sockets. No memory_size_mb integer Total memory allocated to the virtual machine, expressed in megabytes (MB). 
No operating_system smallint * 0 - Other OS * 1 - Windows XP * 3 - Windows 2003 * 4 - Windows 2008 * 5 - Linux * 7 - Red Hat Enterprise Linux 5.x * 8 - Red Hat Enterprise Linux 4.x * 9 - Red Hat Enterprise Linux 3.x * 10 - Windows 2003 x64 * 11 - Windows 7 * 12 - Windows 7 x64 * 13 - Red Hat Enterprise Linux 5.x x64 * 14 - Red Hat Enterprise Linux 4.x x64 * 15 - Red Hat Enterprise Linux 3.x x64 * 16 - Windows 2008 x64 * 17 - Windows 2008 R2 x64 * 18 - Red Hat Enterprise Linux 6.x * 19 - Red Hat Enterprise Linux 6.x x64 * 20 - Windows 8 * 21 - Windows 8 x64 * 23 - Windows 2012 x64 * 1001 - Other * 1002 - Linux * 1003 - Red Hat Enterprise Linux 6.x * 1004 - SUSE Linux Enterprise Server 11 * 1193 - SUSE Linux Enterprise Server 11 * 1252 - Ubuntu Precise Pangolin LTS * 1253 - Ubuntu Quantal Quetzal * 1254 - Ubuntu Raring Ringtails * 1255 - Ubuntu Saucy Salamander No default_host uuid As displayed in the edit dialog, the ID of the default host in the system. No high_availability boolean As displayed in the edit dialog. No initialized boolean A flag to indicate if this virtual machine was started at least once for Sysprep initialization purposes. No stateless boolean As displayed in the edit dialog. No fail_back boolean As displayed in the edit dialog. No usb_policy smallint As displayed in the edit dialog. No time_zone character varying(40) As displayed in the edit dialog. No vm_pool_id uuid The ID of the pool to which this virtual machine belongs. No vm_pool_name character varying(255) The name of the virtual machine's pool. No created_by_user_id uuid The ID of the user that created this virtual machine. No cluster_configuration_version integer The cluster configuration version at the time of creation or update. This is identical to the value of history_id in the v4_2_configuration_history_clusters view and it can be used to join them. No default_host_configuration_version integer The host configuration version at the time of creation or update. This is identical to the value of history_id in the v4_2_configuration_history_hosts view and it can be used to join them. No create_date timestamp with time zone The date this entity was added to the system. No update_date timestamp with time zone The date this entity was changed in the system. No delete_date timestamp with time zone The date this entity was deleted from the system. No 2.8.8. Virtual Machine Interface Configuration The following table shows the configuration history parameters of the virtual interfaces in the system. Table 2.15. v4_2_configuration_history_vms_interfaces Name Type Description Indexed history_id integer The ID of the configuration version in the history database. This is identical to the value of vm_interface_configuration_version in the virtual machine interface statistics view and it can be used to join them. No vm_id uuid Unique ID of the virtual machine in the system. Yes vm_interface_id uuid The unique ID of this interface in the system. Yes vm_interface_name character varying(50) As displayed in the edit dialog. No vm_interface_type smallint The type of the virtual interface. * 0 - rt18139_pv * 1 - rt18139 * 2 - e1000 * 3 - pv No vm_interface_speed_bps integer The average speed of the interface during the aggregation in bits per second. No mac_address character varying(20) As displayed in the edit dialog. No logical_network_name character varying(50) As displayed in the edit dialog. No vm_configuration_version integer The virtual machine configuration version at the time of creation or update. 
This is identical to the value of history_id in the v4_2_configuration_history_vms view and it can be used to join them. No create_date timestamp with time zone The date this entity was added to the system. No update_date timestamp with time zone The date this entity was changed in the system. No delete_date timestamp with time zone The date this entity was deleted from the system. No 2.8.9. Virtual Machine Device Configuration The following table shows the relationships between virtual machines and their associated devices, including disks and virtual interfaces. Table 2.16. v4_2_configuration_history_vms_devices Name Type Description Indexed history_id integer The ID of the configuration version in the history database. No vm_id uuid The unique ID of the virtual machine in the system. Yes device_id uuid The unique ID of the device in the system. No type character varying(30) The type of virtual machine device. This can be "disk" or "interface". Yes address character varying(255) The device's physical address. No is_managed boolean Flag that indicates if the device is managed by the Manager. No is_plugged boolean Flag that indicates if the device is plugged into the virtual machine. No is_readonly boolean Flag that indicates if the device is read only. No vm_configuration_version integer The virtual machine configuration version at the time the sample was taken. No device_configuration_version integer The device configuration version at the time the sample was taken. - If the value of the type field is set to interface , this field is joined with the history_id field in the v4_2_configuration_history_vms_interfaces view. - If the value of the type field is set to disk, this field is joined with the history_id field in the v4_2_configuration_history_vms_disks view. No create_date timestamp with time zone The date this entity was added to the system. No update_date timestamp with time zone The date this entity was changed in the system. No delete_date timestamp with time zone The date this entity was deleted from the system. No 2.8.10. Virtual Disk Configuration The following table shows the configuration history parameters of the virtual disks in the system. Table 2.17. v4_2_configuration_history_vms_disks Name Type Description Indexed history_id integer The ID of the configuration version in the history database. This is identical to the value of vm_disk_configuration_version in the virtual disks statistics views and it can be used to join them. No vm_disk_id uuid The unique ID of this disk in the system. Yes vm_disk_name text The name of the virtual disk, as displayed in the edit dialog. No vm_disk_description character varying(500) As displayed in the edit dialog. No image_id uuid The unique ID of the image in the system. No storage_domain_id uuid The ID of the storage domain this disk image belongs to. Yes vm_disk_size_mb integer The defined size of the disk in megabytes (MB). No vm_disk_type smallint As displayed in the edit dialog. Only System and Data are currently used. * 0 - Unassigned * 1 - System * 2 - Data * 3 - Shared * 4 - Swap * 5 - Temp No vm_disk_format smallint As displayed in the edit dialog. * 3 - Unassigned * 4 - COW * 5 - Raw No is_shared boolean Flag that indicates if the virtual machine's disk is shared. No create_date timestamp with time zone The date this entity was added to the system. No update_date timestamp with time zone The date this entity was changed in the system. No delete_date timestamp with time zone The date this entity was deleted from the system. 
No 2.8.11. User Details History The following table shows the configuration history parameters of the users in the system. Table 2.18. v4_2_users_details_history Name Type Description user_id uuid The unique ID of the user in the system, as generated by the Manager. first_name character varying(255) The user's first name. last_name character varying(255) The user's last name. domain character varying(255) The name of the authorization extension. username character varying(255) The account name. department character varying(255) The organizational department the user belongs to. user_role_title character varying(255) The title or role of the user within the organization. email character varying(255) The email of the user in the organization. external_id text The unique identifier of the user from the external system. active boolean A flag to indicate if the user is active or not. This is checked hourly. If the user can be found in the authorization extension then it will remain active. A user becomes active on successful login. create_date timestamp with time zone The date this entity was added to the system. update_date timestamp with time zone The date this entity was changed in the system. delete_date timestamp with time zone The date this entity was deleted from the system.
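The *_configuration_version columns listed in these tables are what make cross-view joins possible. The following query is a minimal sketch only, assuming a read-only connection to the history database and the v4_2 view and column names shown above; it joins each host configuration row to the cluster configuration that was current when the host row was recorded:
SELECT
    h.host_name,
    h.host_os,
    c.cluster_name,
    c.compatibility_version
FROM v4_2_configuration_history_hosts AS h
JOIN v4_2_configuration_history_clusters AS c
    ON h.cluster_configuration_version = c.history_id
WHERE h.delete_date IS NULL;   -- ignore hosts that have been removed from the system
The same pattern applies to the other views: join a *_configuration_version column on one side to the history_id column of the view named in its description.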
[ "SELECT * FROM v4_3_configuration_history_datacenters;", "\\dv" ]
https://docs.redhat.com/en/documentation/red_hat_virtualization/4.3/html/data_warehouse_guide/sect-configuration_history_views
4.116. kdebase-workspace
4.116. kdebase-workspace 4.116.1. RHBA-2011:1115 - kdebase-workspace bug fix update Updated kdebase-workspace packages that fix several bugs are now available for Red Hat Enterprise Linux 6. KDE is a graphical desktop environment for the X Window System. The kdebase-workspace packages contain utilities for basic operations with the desktop environment. They allow users, for example, to change system settings, resize and rotate X screens, or set panels and widgets on the workspace. Bug Fixes BZ# 587917 If the KDE and GNOME desktop environments were both installed on one system, two System Monitor utilities were installed as well. These, located in System Tools of the Applications menu, had the same icons and title, which may have confused the user. With this update, KDE icons are used for the ksysguard tool. BZ# 639359 Prior to this update, the ksysguard process terminated unexpectedly with a segmentation fault after clicking the OK button in the Properties dialog of the Network History tab, which is included in the ksysguard application. This bug has been fixed in this update so that ksysguard no longer crashes and works properly. BZ# 649345 Previously, when rebooting the system, the kdm utility terminated with a segmentation fault if auto-login was enabled. This was caused by a NULL password being sent to the master process, which has been fixed, and rebooting the system with auto-login enabled no longer causes kdm to crash. BZ# 666295 When clicking Help in the Battery Monitor Settings dialog of the Battery Monitor widget, the message "The file or folder help:/plasma-desktop/index.html does not exist" appeared instead of displaying the help pages. This update adds the missing help pages, which fixes the problem. All users of kdebase-workspace are advised to upgrade to these updated packages, which fix these bugs.
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/6.2_technical_notes/kdebase-workspace
Chapter 1. Integrating with image registries
Chapter 1. Integrating with image registries Red Hat Advanced Cluster Security for Kubernetes (RHACS) integrates with a variety of image registries so that you can understand your images and apply security policies for image usage. When you integrate with image registries, you can view important image details, such as image creation date and Dockerfile details (including image layers). After you integrate RHACS with your registry, you can scan images, view image components, and apply security policies to images before or after deployment. Note When you integrate with an image registry, RHACS does not scan all images in your registry. RHACS only scans the images when you: Use the images in deployments Use the roxctl CLI to check images Use a continuous integration (CI) system to enforce security policies You can integrate RHACS with major image registries, including: Amazon Elastic Container Registry (ECR) Docker Hub Google Container Registry (GCR) Google Artifact Registry IBM Cloud Container Registry (ICR) JFrog Artifactory Microsoft Azure Container Registry (ACR) Red Hat Quay Red Hat container registries Sonatype Nexus Any other registry that uses the Docker Registry HTTP API 1.1. Automatic configuration Red Hat Advanced Cluster Security for Kubernetes includes default integrations with standard registries, such as Docker Hub and others. It can also automatically configure integrations based on artifacts found in the monitored clusters, such as image pull secrets. Usually, you do not need to configure registry integrations manually. Important If you use a Google Container Registry (GCR), Red Hat Advanced Cluster Security for Kubernetes does not create a registry integration automatically. If you use Red Hat Advanced Cluster Security Cloud Service, automatic configuration is unavailable, and you must manually create registry integrations. 1.2. Amazon ECR integrations For Amazon ECR integrations, Red Hat Advanced Cluster Security for Kubernetes automatically generates ECR registry integrations if the following conditions are met: The cloud provider for the cluster is AWS. The nodes in your cluster have an Instance Identity and Access Management (IAM) Role association and the Instance Metadata Service is available in the nodes. For example, when using Amazon Elastic Kubernetes Service (EKS) to manage your cluster, this role is known as the EKS Node IAM role. The Instance IAM role has IAM policies granting access to the ECR registries from which you are deploying. If the listed conditions are met, Red Hat Advanced Cluster Security for Kubernetes monitors deployments that pull from ECR registries and automatically generates ECR integrations for them. You can edit these integrations after they are automatically generated. 1.3. Manually configuring image registries If you are using GCR, you must manually create image registry integrations. 1.3.1. Manually configuring OpenShift Container Platform registry You can integrate Red Hat Advanced Cluster Security for Kubernetes with OpenShift Container Platform built-in container image registry. Prerequisites You need a username and a password for authentication with the OpenShift Container Platform registry. Procedure In the RHACS portal, go to Platform Configuration Integrations . Under the Image Integrations section, select Generic Docker Registry . Click New integration . Enter the details for the following fields: Integration name : The name of the integration. Endpoint : The address of the registry. Username and Password . 
If you are not using a TLS certificate when connecting to the registry, select Disable TLS certificate validation (insecure) . Select Create integration without testing to create the integration without testing the connection to the registry. Select Test to test that the integration with the selected registry is working. Select Save . 1.3.2. Manually configuring Amazon Elastic Container Registry You can use Red Hat Advanced Cluster Security for Kubernetes to create and modify Amazon Elastic Container Registry (ECR) integrations manually. If you are deploying from Amazon ECR, integrations for the Amazon ECR registries are usually automatically generated. However, you might want to create integrations on your own to scan images outside deployments. You can also modify the parameters of an automatically-generated integration. For example, you can change the authentication method used by an automatically-generated Amazon ECR integration to use AssumeRole authentication or other authorization models. Important To erase changes you made to an automatically-generated ECR integration, delete the integration, and Red Hat Advanced Cluster Security for Kubernetes creates a new integration for you with the automatically-generated parameters when you deploy images from Amazon ECR. Prerequisites You must have an Amazon Identity and Access Management (IAM) access key ID and a secret access key. Alternatively, you can use a node-level IAM proxy such as kiam or kube2iam . The access key must have read access to ECR. See How do I create an AWS access key? for more information. If you are running Red Hat Advanced Cluster Security for Kubernetes in Amazon Elastic Kubernetes Service (EKS) and want to integrate with an ECR from a separate Amazon account, you must first set a repository policy statement in your ECR. Follow the instructions at Setting a repository policy statement and for Actions , choose the following scopes of the Amazon ECR API operations: ecr:BatchCheckLayerAvailability ecr:BatchGetImage ecr:DescribeImages ecr:GetDownloadUrlForLayer ecr:ListImages Procedure In the RHACS portal, go to Platform Configuration Integrations . Under the Image Integrations section, select Amazon ECR . Click New integration , or click one of the automatically-generated integrations to open it, then click Edit . Enter or modify the details for the following fields: Update stored credentials : Clear this box if you are modifying an integration without updating the credentials such as access keys and passwords. Integration name : The name of the integration. Registry ID : The ID of the registry. Endpoint : The address of the registry. This value is required only if you are using a private virtual private cloud (VPC) endpoint for Amazon ECR. This field is not enabled when the AssumeRole option is selected. Region : The region for the registry; for example, us-west-1 . If you are using IAM, select Use Container IAM role . Otherwise, clear the Use Container IAM role box and enter the Access key ID and Secret access key . If you are using AssumeRole authentication, select Use AssumeRole and enter the details for the following fields: AssumeRole ID : The ID of the role to assume. AssumeRole External ID (optional): If you are using an external ID with AssumeRole , you can enter it here. Select Create integration without testing to create the integration without testing the connection to the registry. Select Test to test that the integration with the selected registry is working. Select Save . 1.3.2.1. 
Using assumerole with Amazon ECR You can use AssumeRole to grant access to AWS resources without manually configuring each user's permissions. Instead, you can define a role with the desired permissions so that the user is granted access to assume that role. AssumeRole enables you to grant, revoke, or otherwise generally manage more fine-grained permissions. 1.3.2.1.1. Configuring AssumeRole with container IAM Before you can use AssumeRole with Red Hat Advanced Cluster Security for Kubernetes, you must first configure it. Procedure Enable the IAM OIDC provider for your EKS cluster: USD eksctl utils associate-iam-oidc-provider --cluster <cluster name> --approve Create an IAM role for your EKS cluster. Associate the newly created role with a service account: USD kubectl -n stackrox annotate sa central eks.amazonaws.com/role-arn=arn:aws:iam::67890:role/<role-name> Restart Central to apply the changes. USD kubectl -n stackrox delete pod -l app=central Assign the role to a policy that allows the role to assume another role as required: { "Version": "2012-10-17", "Statement": [ { "Sid": "VisualEditor0", "Effect": "Allow", "Action": "sts:AssumeRole", "Resource": "arn:aws:iam::<ecr-registry>:role/<assumerole-readonly>" 1 } ] } 1 Replace <assumerole-readonly> with the role you want to assume. Update the trust relationship for the role you want to assume: { "Version": "2012-10-17", "Statement": [ { "Effect": "Allow", "Principal": { "AWS": [ "arn:aws:iam::<ecr-registry>:role/<role-name>" 1 ] }, "Action": "sts:AssumeRole" } ] } 1 The <role-name> should match with the new role you have created earlier. 1.3.2.1.2. Configuring AssumeRole without container IAM To use AssumeRole without container IAM, you must use an access and a secret key to authenticate as an AWS user with programmatic access . Procedure Depending on whether the AssumeRole user is in the same account as the ECR registry or in a different account, you must either: Create a new role with the desired permissions if the user for which you want to assume role is in the same account as the ECR registry. Note When creating the role, you can choose any trusted entity as required. However, you must modify it after creation. Or, you must provide permissions to access the ECR registry and define its trust relationship if the user is in a different account than the ECR registry: { "Version": "2012-10-17", "Statement": [ { "Sid": "VisualEditor0", "Effect": "Allow", "Action": "sts:AssumeRole", "Resource": "arn:aws:iam::<ecr-registry>:role/<assumerole-readonly>" 1 } ] } 1 Replace <assumerole-readonly> with the role you want to assume. Configure the trust relationship of the role by including the user ARN under the Principal field: { "Version": "2012-10-17", "Statement": [ { "Effect": "Allow", "Principal": { "AWS": [ "arn:aws:iam::<ecr-registry>:user/<role-name>" ] }, "Action": "sts:AssumeRole" } ] } 1.3.2.1.3. Configuring AssumeRole in RHACS After configuring AssumeRole in ECR, you can integrate Red Hat Advanced Cluster Security for Kubernetes with Amazon Elastic Container Registry (ECR) by using AssumeRole. Procedure In the RHACS portal, go to Platform Configuration Integrations . Under the Image Integrations section, select Amazon ECR . Click New Integration . Enter the details for the following fields: Integration Name : The name of the integration. Registry ID : The ID of the registry. Region : The region for the registry; for example, us-west-1 . If you are using IAM, select Use container IAM role . 
Otherwise, clear the Use custom IAM role box and enter the Access key ID and Secret access key . If you are using AssumeRole, select Use AssumeRole and enter the details for the following fields: AssumeRole ID : The ID of the role to assume. AssumeRole External ID (optional): If you are using an external ID with AssumeRole , you can enter it here. Select Test to test that the integration with the selected registry is working. Select Save . 1.3.3. Manually configuring Google Container Registry You can integrate Red Hat Advanced Cluster Security for Kubernetes with Google Container Registry (GCR). Prerequisites You need either a workload identity or a service account key for authentication. The associated service account must have access to the registry. See Configuring access control for information about granting users and other projects access to GCR. If you are using GCR Container Analysis , you must also grant the following roles to the service account: Container Analysis Notes Viewer Container Analysis Occurrences Viewer Storage Object Viewer Procedure In the RHACS portal, go to Platform Configuration Integrations . Under the Image Integrations section, select Google Container Registry . Click New integration . Enter the details for the following fields: Integration name : The name of the integration. Type : Select Registry . Registry Endpoint : The address of the registry. Project : The Google Cloud project name. Use workload identity : Check to authenticate using a workload identity. Service account key (JSON) : Your service account key for authentication. Select Create integration without testing to create the integration without testing the connection to the registry. Select Test to test that the integration with the selected registry is working. Select Save . 1.3.4. Manually configuring Google Artifact Registry You can integrate Red Hat Advanced Cluster Security for Kubernetes with Google Artifact Registry. Prerequisites You need either a workload identity or a service account key for authentication. The associated service account must have the Artifact Registry Reader Identity and Access Management (IAM) role roles/artifactregistry.reader . Procedure In the RHACS portal, go to Platform Configuration Integrations . Under the Image Integrations section, select Google Artifact Registry . Click New integration . Enter the details for the following fields: Integration name : The name of the integration. Registry endpoint : The address of the registry. Project : The Google Cloud project name. Use workload identity : Check to authenticate using a workload identity. Service account key (JSON) : Your service account key for authentication. Select Create integration without testing to create the integration without testing the connection to the registry. Select Test to test that the integration with the selected registry is working. Select Save . 1.3.5. Manually configuring Microsoft Azure Container Registry You can integrate Red Hat Advanced Cluster Security for Kubernetes with Microsoft Azure Container Registry. Prerequisites You must have a username and a password for authentication. Procedure In the RHACS portal, go to Platform Configuration Integrations . Under the Image Integrations section, select Microsoft Azure Container Registry . Click New integration . Enter the details for the following fields: Integration name : The name of the integration. Endpoint : The address of the registry. Username and Password . 
Select Create integration without testing to create the integration without testing the connection to the registry. Select Test to test that the integration with the selected registry is working. Select Save . 1.3.6. Manually configuring JFrog Artifactory You can integrate Red Hat Advanced Cluster Security for Kubernetes with JFrog Artifactory. Prerequisites You must have a username and a password for authentication with JFrog Artifactory. Procedure In the RHACS portal, go to Platform Configuration Integrations . Under the Image Integrations section, select JFrog Artifactory . Click New integration . Enter the details for the following fields: Integration name : The name of the integration. Endpoint : The address of the registry. Username and Password . If you are not using a TLS certificate when connecting to the registry, select Disable TLS certificate validation (insecure) . Select Create integration without testing to create the integration without testing the connection to the registry. Select Test to test that the integration with the selected registry is working. Select Save . 1.3.7. Manually configuring Quay Container Registry You can integrate Red Hat Advanced Cluster Security for Kubernetes (RHACS) with Quay Container Registry. You can integrate with Quay by using the following methods: Integrating with the Quay public repository (registry): This method does not require authentication. Integrating with a Quay private registry by using a robot account: This method requires that you create a robot account to use with Quay (recommended). See the Quay documentation for more information. Integrating with Quay to use the Quay scanner rather than the RHACS scanner: This method uses the API and requires an OAuth token for authentication. See "Integrating with Quay Container Registry to scan images" in the "Additional Resources" section. Prerequisites For authentication with a Quay private registry, you need the credentials associated with a robot account or an OAuth token (deprecated). Procedure In the RHACS portal, go to Platform Configuration Integrations . Under the Image Integrations section, select Red Hat Quay.io . Click New integration . Enter the Integration name. Enter the Endpoint , or the address of the registry. If you are integrating with the Quay public repository, under Type , select Registry , and then go to the step. If you are integrating with a Quay private registry, under Type , select Registry and enter information in the following fields: Robot username : If you are accessing the registry by using a Quay robot account, enter the user name in the format <namespace>+<accountname> . Robot password : If you are accessing the registry by using a Quay robot account, enter the password for the robot account user name. OAuth token : If you are accessing the registry by using an OAuth token (deprecated), enter it in this field. Optional: If you are not using a TLS certificate when connecting to the registry, select Disable TLS certificate validation (insecure) . Optional: To create the integration without testing, select Create integration without testing . Select Save . Note If you are editing a Quay integration but do not want to update your credentials, verify that Update stored credentials is not selected. 1.4. Additional resources Integrating with Quay Container Registry to scan images 1.4.1. Manually configuring IBM Cloud Container Registry You can integrate Red Hat Advanced Cluster Security for Kubernetes with IBM Cloud Container Registry. 
Prerequisites You must have an API key for authentication with the IBM Cloud Container Registry. Procedure In the RHACS portal, go to Platform Configuration Integrations . Under the Image Integrations section, select IBM Cloud Container Registry . Click New integration . Enter the details for the following fields: Integration name : The name of the integration. Endpoint : The address of the registry. API key . Select Test to test that the integration with the selected registry is working. Select Save . 1.4.2. Manually configuring Red Hat Container Registry You can integrate Red Hat Advanced Cluster Security for Kubernetes with Red Hat Container Registry. Prerequisites You must have a username and a password for authentication with the Red Hat Container Registry. Procedure In the RHACS portal, go to Platform Configuration Integrations . Under the Image Integrations section, select Red Hat Registry . Click New integration . Enter the details for the following fields: Integration name : The name of the integration. Endpoint : The address of the registry. Username and Password . Select Create integration without testing to create the integration without testing the connection to the registry. Select Test to test that the integration with the selected registry is working. Select Save .
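After a registry integration is saved, you can exercise it from the command line. The following is a sketch only, not an official procedure: it assumes the roxctl CLI is installed, that ROX_ENDPOINT and ROX_API_TOKEN point at your Central instance, and that the image name is a placeholder for an image hosted in the integrated registry; flag names can vary between RHACS versions.
export ROX_ENDPOINT="central.example.com:443"                    # hypothetical Central address
export ROX_API_TOKEN="<api-token>"                               # token with permission to scan images
roxctl image scan --image registry.example.com/team/app:1.0     # pull and scan the image through the integration
roxctl image check --image registry.example.com/team/app:1.0    # evaluate build-time security policies against it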
[ "eksctl utils associate-iam-oidc-provider --cluster <cluster name> --approve", "kubectl -n stackrox annotate sa central eks.amazonaws.com/role-arn=arn:aws:iam::67890:role/<role-name>", "kubectl -n stackrox delete pod -l app=central", "{ \"Version\": \"2012-10-17\", \"Statement\": [ { \"Sid\": \"VisualEditor0\", \"Effect\": \"Allow\", \"Action\": \"sts:AssumeRole\", \"Resource\": \"arn:aws:iam::<ecr-registry>:role/<assumerole-readonly>\" 1 } ] }", "{ \"Version\": \"2012-10-17\", \"Statement\": [ { \"Effect\": \"Allow\", \"Principal\": { \"AWS\": [ \"arn:aws:iam::<ecr-registry>:role/<role-name>\" 1 ] }, \"Action\": \"sts:AssumeRole\" } ] }", "{ \"Version\": \"2012-10-17\", \"Statement\": [ { \"Sid\": \"VisualEditor0\", \"Effect\": \"Allow\", \"Action\": \"sts:AssumeRole\", \"Resource\": \"arn:aws:iam::<ecr-registry>:role/<assumerole-readonly>\" 1 } ] }", "{ \"Version\": \"2012-10-17\", \"Statement\": [ { \"Effect\": \"Allow\", \"Principal\": { \"AWS\": [ \"arn:aws:iam::<ecr-registry>:user/<role-name>\" ] }, \"Action\": \"sts:AssumeRole\" } ] }" ]
https://docs.redhat.com/en/documentation/red_hat_advanced_cluster_security_for_kubernetes/4.5/html/integrating/integrate-with-image-registries
Chapter 7. Senders and receivers
Chapter 7. Senders and receivers The client uses sender and receiver links to represent channels for delivering messages. Senders and receivers are unidirectional, with a source end for the message origin, and a target end for the message destination. Sources and targets often point to queues or topics on a message broker. Sources are also used to represent subscriptions. 7.1. Creating queues and topics on demand Some message servers support on-demand creation of queues and topics. When a sender or receiver is attached, the server uses the sender target address or the receiver source address to create a queue or topic with a name matching the address. The message server typically defaults to creating either a queue (for one-to-one message delivery) or a topic (for one-to-many message delivery). The client can indicate which it prefers by setting the queue or topic capability on the source or target. To select queue or topic semantics, follow these steps: Configure your message server for automatic creation of queues and topics. This is often the default configuration. Set either the queue or topic capability on your sender target or receiver source, as in the examples below. Example: Sending to a queue created on demand Target target = new Target() { Address = "jobs", Capabilities = new Symbol[] {"queue"} , }; SenderLink sender = new SenderLink(session, "s1", target , null); Example: Receiving from a topic created on demand Source source = new Source() { Address = "notifications", Capabilities = new Symbol[] {"topic"} , }; ReceiverLink receiver = new ReceiverLink(session, "r1", source , null); For more information, see the following examples: QueueSend.cs QueueReceive.cs TopicSend.cs TopicReceive.cs 7.2. Creating durable subscriptions A durable subscription is a piece of state on the remote server representing a message receiver. Ordinarily, message receivers are discarded when a client closes. However, because durable subscriptions are persistent, clients can detach from them and then re-attach later. Any messages received while detached are available when the client re-attaches. Durable subscriptions are uniquely identified by combining the client container ID and receiver name to form a subscription ID. These must have stable values so that the subscription can be recovered. To create a durable subscription, follow these steps: Set the connection container ID to a stable value, such as client-1 : Connection conn = new Connection(new Address(connUrl), SaslProfile.Anonymous, new Open() { ContainerId = "client-1" } , null); Configure the receiver source for durability by setting the Durable and ExpiryPolicy properties: Source source = new Source() { Address = "notifications", Durable = 2, ExpiryPolicy = new Symbol("never"), }; Create a receiver with a stable name, such as sub-1 , and apply the source properties: ReceiverLink receiver = new ReceiverLink(session, "sub-1" , source , null); To detach from a subscription, close the connection without explicitly closing the receiver. To terminate the subscription, close the receiver directly. For more information, see the DurableSubscribe.cs example . 7.3. Creating shared subscriptions A shared subscription is a piece of state on the remote server representing one or more message receivers. Because it is shared, multiple clients can consume from the same stream of messages. The client configures a shared subscription by setting the shared capability on the receiver source. 
Shared subscriptions are uniquely identified by combining the client container ID and receiver name to form a subscription ID. These must have stable values so that multiple client processes can locate the same subscription. If the global capability is set in addition to shared , the receiver name alone is used to identify the subscription. To create a shared subscription, follow these steps: Set the connection container ID to a stable value, such as client-1 : Connection conn = new Connection(new Address(connUrl), SaslProfile.Anonymous, new Open() { ContainerId = "client-1" } , null); Configure the receiver source for sharing by setting the shared capability: Source source = new Source() { Address = "notifications", Capabilities = new Symbol[] {"shared"}, }; Create a receiver with a stable name, such as sub-1 , and apply the source properties: ReceiverLink receiver = new ReceiverLink(session, "sub-1" , source , null); For more information, see the SharedSubscribe.cs example .
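If you also need the global capability, the following sketch (written in the same style as the examples in this chapter, not taken from the product examples) shows a globally shared subscription in which the receiver name alone identifies the subscription:
Source source = new Source()
{
    Address = "notifications",
    // "shared" plus "global": the subscription is identified by the receiver name only
    Capabilities = new Symbol[] {"shared", "global"},
};
ReceiverLink receiver = new ReceiverLink(session, "sub-1", source, null);
As with a plain shared subscription, multiple client processes can attach receivers named sub-1 and consume from the same stream of messages.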
[ "Target target = new Target() { Address = \"jobs\", Capabilities = new Symbol[] {\"queue\"} , }; SenderLink sender = new SenderLink(session, \"s1\", target , null);", "Source source = new Source() { Address = \"notifications\", Capabilities = new Symbol[] {\"topic\"} , }; ReceiverLink receiver = new ReceiverLink(session, \"r1\", source , null);", "Connection conn = new Connection(new Address(connUrl), SaslProfile.Anonymous, new Open() { ContainerId = \"client-1\" } , null);", "Source source = new Source() { Address = \"notifications\", Durable = 2, ExpiryPolicy = new Symbol(\"never\"), };", "ReceiverLink receiver = new ReceiverLink(session, \"sub-1\" , source , null);", "Connection conn = new Connection(new Address(connUrl), SaslProfile.Anonymous, new Open() { ContainerId = \"client-1\" } , null);", "Source source = new Source() { Address = \"notifications\", Capabilities = new Symbol[] {\"shared\"}, };", "ReceiverLink receiver = new ReceiverLink(session, \"sub-1\" , source , null);" ]
https://docs.redhat.com/en/documentation/red_hat_amq/2021.q3/html/using_the_amq_.net_client/senders_and_receivers
Providing feedback on Red Hat documentation
Providing feedback on Red Hat documentation We appreciate your input on our documentation. Do let us know how we can make it better. To give feedback, create a Bugzilla ticket: Go to the Bugzilla website. In the Component section, choose documentation . Fill in the Description field with your suggestion for improvement. Include a link to the relevant part(s) of documentation. Click Submit Bug .
null
https://docs.redhat.com/en/documentation/red_hat_openshift_data_foundation/4.17/html/scaling_storage/providing-feedback-on-red-hat-documentation_rhodf
Chapter 6. Device drivers
Chapter 6. Device drivers 6.1. New drivers Network drivers MediaTek MT7601U (USB) support ( mt7601u ), adds support for MT7601U-based wireless USB dongles (only in 64-bit ARM architecture) MediaTek MT76x0E (PCIe) support ( mt76x0e ), adds support for MT7610/MT7630-based wireless PCIe devices (only in 64-bit ARM architecture) MediaTek MT76x0U (USB) support ( mt76x0u ), adds support for MT7610U-based wireless USB 2.0 dongles (only in 64-bit ARM architecture) MediaTek MT76x2E (PCIe) support ( mt76x2e ), adds support for MT7612/MT7602/MT7662-based wireless PCIe devices (only in 64-bit ARM architecture) MediaTek MT76x2U (USB) support ( mt76x2u ), adds support for MT7612U-based wireless USB 3.0 dongles (only in 64-bit ARM architecture) MediaTek MT7921E (PCIe) support ( mt7921e ), adds support for MT7921E 802.11ax 2x2:2SS wireless devices (only in 64-bit ARM architecture) Atheros driver 802.11n HTC based wireless devices ( ath9k_htc ) (only in 64-bit ARM architecture) Broadcom 802.11n wireless LAN driver ( brcmsmac ) (only in 64-bit ARM architecture) Broadcom 802.11n wireless LAN driver utilities ( brcmutil ) (only in 64-bit ARM architecture) Broadcom 802.11 wireless LAN fullmac driver ( brcmfmac ) (only in 64-bit ARM architecture) Core module for Qualcomm Atheros 802.11ac wireless LAN cards ( ath10k_core ) (only in 64-bit ARM architecture) Core module for Qualcomm Atheros 802.11ax wireless LAN cards ( ath11k ) (only in 64-bit ARM architecture) Device simulator for WWAN framework ( wwan_hwsim ) Driver support for Qualcomm Atheros 802.11ac WLAN PCIe/AHB devices ( ath10k_pci ) (only in 64-bit ARM architecture) Driver support for Qualcomm Technologies 802.11ax WLAN PCIe devices ( ath11k_pci ) (only in 64-bit ARM architecture) Intel(R) Wireless Wi-Fi driver for Linux ( iwlwifi ) (only in 64-bit ARM architecture) Intel(R) Wireless Wi-Fi Link AGN driver for Linux ( iwldvm )- (only in 64-bit ARM architecture) IOSM Driver ( iosm ) Marvell WiFi-Ex Driver version 1.0 ( mwifiex ) (only in 64-bit ARM architecture) Marvell WiFi-Ex PCI-Express Driver version 1.0 ( mwifiex_pcie ) (only in 64-bit ARM architecture) Marvell WiFi-Ex SDIO Driver version 1.0 ( mwifiex_sdio ) (only in 64-bit ARM architecture) Marvell WiFi-Ex USB Driver version 1.0 ( mwifiex_usb ) (only in 64-bit ARM architecture) MediaTek PCIe 5G WWAN modem T7xx driver ( mtk_t7xx ) Network/MBIM over MHI ( mhi_wwan_mbim ) (only in 64-bit ARM architecture, IBM Power Systems, Little Endian, and AMD and Intel 64-bit architectures) PCI basic driver for rtlwifi ( rtl_pci ) (only in 64-bit ARM architecture) Ralink RT2800 library version 2.3.0 ( rt2800lib ) (only in 64-bit ARM architecture) Ralink RT2800 PCI & PCMCIA Wireless LAN driver version 2.3.0 ( rt2800pci ) (only in 64-bit ARM architecture) Ralink RT2800 USB Wireless LAN driver version 2.3.0 ( rt2800usb ) (only in 64-bit ARM architecture) Realtek 802.11ac wireless 8821c driver ( rtw88_8821c ) (only in 64-bit ARM architecture) Realtek 802.11ac wireless 8821ce driver ( rtw88_8821ce ) (only in 64-bit ARM architecture) Realtek 802.11ac wireless 8822b driver ( rtw88_8822b ) (only in 64-bit ARM architecture) Realtek 802.11ac wireless 8822be driver ( rtw88_8822be ) (only in 64-bit ARM architecture) Realtek 802.11ac wireless 8822c driver ( rtw88_8822c ) - (only in 64-bit ARM architecture) Realtek 802.11ac wireless 8822ce driver ( rtw88_8822ce ) (only in 64-bit ARM architecture) Realtek 802.11ac wireless core module ( rtw88_core ) (only in 64-bit ARM architecture) Realtek 802.11ac wireless PCI driver ( 
rtw88_pci ) (only in 64-bit ARM architecture) Realtek 802.11ax wireless 8852A driver ( rtw89_8852a ) (only in 64-bit ARM architecture) Realtek 802.11ax wireless 8852AE driver ( rtw89_8852ae ) (only in 64-bit ARM architecture) Realtek 802.11ax wireless 8852B driver ( rtw89_8852b ) (only in 64-bit ARM architecture and AMD and Intel 64-bit architectures) Realtek 802.11ax wireless 8852BE driver ( rtw89_8852be ) (only in 64-bit ARM architecture and AMD and Intel 64-bit architectures) Realtek 802.11ax wireless core module ( rtw89_core ) (only in 64-bit ARM architecture) Realtek 802.11ax wireless PCI driver ( rtw89_pci ) (only in 64-bit ARM architecture) Realtek 802.11n PCI wireless core ( btcoexist ) (only in 64-bit ARM architecture) Realtek 802.11n PCI wireless core ( rtlwifi ) (only in 64-bit ARM architecture) Realtek 802.11n wireless 8723d driver ( rtw88_8723d ) (only in 64-bit ARM architecture) Realtek 802.11n wireless 8723de driver ( rtw88_8723de ) (only in 64-bit ARM architecture) Realtek 8188E 802.11n PCI wireless ( rtl8188ee ) (only in 64-bit ARM architecture) Realtek 8192C/8188C 802.11n PCI wireless ( rtl8192c-common ) (only in 64-bit ARM architecture) Realtek 8192C/8188C 802.11n PCI wireless ( rtl8192ce ) (only in 64-bit ARM architecture) Realtek 8192C/8188C 802.11n USB wireless ( rtl8192cu ) (only in 64-bit ARM architecture) Realtek 8192DE 802.11n Dual Mac PCI wireless ( rtl8192de ) (only in 64-bit ARM architecture) Realtek 8192EE 802.11n PCI wireless ( rtl8192ee ) (only in 64-bit ARM architecture) Realtek 8192S/8191S 802.11n PCI wireless ( rtl8192se ) (only in 64-bit ARM architecture) Realtek 8723BE 802.11n PCI wireless ( rtl8723be ) (only in 64-bit ARM architecture) Realtek 8723E 802.11n PCI wireless ( rtl8723ae ) (only in 64-bit ARM architecture) Realtek 8821ae 802.11ac PCI wireless ( rtl8821ae ) (only in 64-bit ARM architecture) Realtek RTL8723AE/RTL8723BE 802.11n PCI wireless common routines ( rtl8723-common ) (only in 64-bit ARM architecture) rt2800 MMIO library version 2.3.0 ( rt2800mmio ) (only in 64-bit ARM architecture) rt2x00 library version 2.3.0 ( rt2x00lib ) (only in 64-bit ARM architecture) rt2x00 mmio library version 2.3.0 ( rt2x00mmio ) (only in 64-bit ARM architecture) rt2x00 pci library version 2.3.0 ( rt2x00pci ) (only in 64-bit ARM architecture) rt2x00 usb library version 2.3.0 ( rt2x00usb ) (only in 64-bit ARM architecture) RTL8XXXu USB mac80211 Wireless LAN Driver ( rtl8xxxu ) (only in 64-bit ARM architecture) Shared library for Atheros wireless 802.11n LAN cards ( ath9k_common ) (only in 64-bit ARM architecture) Shared library for Atheros wireless LAN cards ( ath ) (only in 64-bit ARM architecture) Support for Atheros 802.11n wireless LAN cards ( ath9k_hw ) (only in 64-bit ARM architecture) Support for Atheros 802.11n wireless LAN cards ( ath9k ) (only in 64-bit ARM architecture) The new Intel(R) wireless AGN driver for Linux ( iwlmvm ) (only in 64-bit ARM architecture) Thunderbolt/USB4 network driver ( thunderbolt_net ) USB basic driver for rtlwifi ( rtl_usb ) (only in 64-bit ARM architecture) Graphics drivers and miscellaneous drivers Atheros AR30xx firmware driver 1.0 ( ath3k ) (only in 64-bit ARM architecture) BlueFRITZ! 
USB driver version 1.2 ( bfusb ) (only in 64-bit ARM architecture) Bluetooth HCI UART driver version 2.3 ( hci_uart ) (only in 64-bit ARM architecture) Bluetooth support for Broadcom devices version 0.1 ( btbcm ) (only in 64-bit ARM architecture) Bluetooth support for Intel devices version 0.1 ( btintel ) (only in 64-bit ARM architecture) Bluetooth support for MediaTek devices version 0.1 ( btmtk ) (only in 64-bit ARM architecture) Bluetooth support for Realtek devices version 0.1 ( btrtl ) (only in 64-bit ARM architecture) Bluetooth virtual HCI driver version 1.5 ( hci_vhci ) (only in 64-bit ARM architecture) Broadcom Blutonium firmware driver version 1.2 ( bcm203x ) (only in 64-bit ARM architecture) Digianswer Bluetooth USB driver version 0.11 ( bpa10x ) (only in 64-bit ARM architecture) Generic Bluetooth SDIO driver version 0.1 ( btsdio ) (only in 64-bit ARM architecture) Generic Bluetooth USB driver version 0.8 ( btusb ) (only in 64-bit ARM architecture) Marvell Bluetooth driver version 1.0 ( btmrvl ) (only in 64-bit ARM architecture) Marvell BT-over-SDIO driver version 1.0 ( btmrvl_sdio ) (only in 64-bit ARM architecture) Linux device driver of the BMC IPMI SSIF interface ( ssif_bmc ) (only in 64-bit ARM architecture) vTPM Driver version 0.1 ( tpm_vtpm_proxy ) AMD P-state driver Test module ( amd-pstate-ut ) (only in AMD and Intel 64-bit architectures) Compute Express Link (CXL) ACPI driver ( cxl_acpi ) (only in 64-bit ARM architecture and AMD and Intel 64-bit architectures) Compute Express Link (CXL) core driver ( cxl_core ) Compute Express Link (CXL) port driver ( cxl_port ) NVIDIA Tegra GPC DMA Controller driver ( tegra186-gpc-dma ) (only in 64-bit ARM architecture) DRM Buddy Allocator ( drm_buddy ) (only in 64-bit IBM Z architecture) DRM display adapter helper ( drm_display_helper ) (only in 64-bit IBM Z architecture) HID driver for EVision devices ( hid-evision ) (only in 64-bit ARM architecture, IBM Power Systems, Little Endian, and AMD and Intel 64-bit architectures) Texas Instruments INA3221 HWMon Driver ( ina3221 ) (only in 64-bit ARM architecture) I3C core ( i3c ) (only in 64-bit ARM architecture) Silvaco dual-role I3C master driver ( svc-i3c-master ) (only in 64-bit ARM architecture) Microsoft Azure Network Adapter IB driver ( mana_ib ) (only in AMD and Intel 64-bit architectures) Soft RDMA transport ( rdma_rxe ) i.MX8MP interconnect driver - Generic interconnect drivers for i.MX SOCs ( imx8mp-interconnect ) (only in 64-bit ARM architecture) Linux USB Video Class ( uvc ) (only in 64-bit ARM architecture, IBM Power Systems, Little Endian, and AMD and Intel 64-bit architectures) Common memory handling routines for videobuf2 ( videobuf2-memops ) (only in 64-bit ARM architecture) Device node registration for cec drivers ( cec ) (only in 64-bit IBM Z architecture) Device node registration for media drivers ( mc ) (only in 64-bit ARM architecture) Driver helper framework for Video for Linux 2 ( videobuf2-v4l2 ) (only in 64-bit ARM architecture) Media buffer core framework ( videobuf2-common ) (only in 64-bit ARM architecture) USB Video Class driver version 1.1.1 ( uvcvideo ) (only in 64-bit ARM architecture) V4L2 DV Timings Helper Functions ( v4l2-dv-timings ) (only in 64-bit ARM architecture) Video4Linux2 core driver ( videodev ) (only in 64-bit ARM architecture) vmalloc memory handling routines for videobuf2 ( videobuf2-vmalloc ) (only in 64-bit ARM architecture) Framework for SPI NOR ( spi-nor ) (only in 64-bit ARM architecture) Marvell CN10K DRAM Subsystem(DSS) PMU ( 
marvell_cn10k_ddr_pmu ) (only in 64-bit ARM architecture) Marvell CN10K LLC-TAD Perf driver ( marvell_cn10k_tad_pmu ) (only in 64-bit ARM architecture) Intel Meteor Lake PCH pinctrl/GPIO driver ( pinctrl-meteorlake ) (only in AMD and Intel 64-bit architectures) Intel In Field Scan (IFS) device ( intel_ifs ) (only in AMD and Intel 64-bit architectures) NVIDIA WMI EC Backlight driver ( nvidia-wmi-ec-backlight ) (only in AMD and Intel 64-bit architectures) QMI encoder/decoder helper ( qmi_helpers ) (only in 64-bit ARM architecture) AMD SoundWire driver ( soundwire-amd ) (only in AMD and Intel 64-bit architectures) NVIDIA Tegra114 SPI Controller Driver ( spi-tegra114 ) (only in 64-bit ARM architecture) STMicroelectronics STUSB160x Type-C controller driver ( stusb160x ) (only in 64-bit ARM architecture) MLX5 VFIO PCI - User Level meta-driver for MLX5 device family ( mlx5-vfio-pci ) 6.2. Updated drivers Network driver updates Realtek RTL8152/RTL8153 Based USB Ethernet Adapters ( r8152 ) has been updated to version v1.12.13 (only in 64-bit ARM architecture, IBM Power Systems, Little Endian, and AMD and Intel 64-bit architectures). Storage driver updates Broadcom MegaRAID SAS Driver ( megaraid_sas ) has been updated to version 07.725.01.00-rc1 (only in 64-bit ARM architecture, IBM Power Systems, Little Endian, and AMD and Intel 64-bit architectures). Driver for Microchip Smart Family Controller ( smartpqi ) has been updated to version 2.1.22-040 (only in 64-bit ARM architecture, IBM Power Systems, Little Endian, and AMD and Intel 64-bit architectures). Emulex LightPulse Fibre Channel SCSI driver ( lpfc ) has been updated to version 0:14.2.0.12 (only in 64-bit ARM architecture, IBM Power Systems, Little Endian, and AMD and Intel 64-bit architectures). MPI3 Storage Controller Device Driver ( mpi3mr ) has been updated to version 8.4.1.0.0.
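To confirm which of the drivers listed above are present on a given system, the kernel's own module tooling can be used. This is a general-purpose sketch rather than part of this release note, and the module names below are examples taken from the lists above.
modinfo -F version megaraid_sas     # print the version string of an updated storage driver
modinfo -F description mt7921e      # print the description of a new wireless driver
lsmod | grep -w megaraid_sas        # check whether the module is currently loaded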
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/9/html/9.3_release_notes/device_drivers
Chapter 13. Installing a three-node cluster on GCP
Chapter 13. Installing a three-node cluster on GCP In OpenShift Container Platform version 4.15, you can install a three-node cluster on Google Cloud Platform (GCP). A three-node cluster consists of three control plane machines, which also act as compute machines. This type of cluster provides a smaller, more resource-efficient cluster for cluster administrators and developers to use for testing, development, and production. You can install a three-node cluster using either installer-provisioned or user-provisioned infrastructure. 13.1. Configuring a three-node cluster You configure a three-node cluster by setting the number of worker nodes to 0 in the install-config.yaml file before deploying the cluster. Setting the number of worker nodes to 0 ensures that the control plane machines are schedulable. This allows application workloads to be scheduled to run from the control plane nodes. Note Because application workloads run from control plane nodes, additional subscriptions are required, as the control plane nodes are considered to be compute nodes. Prerequisites You have an existing install-config.yaml file. Procedure Set the number of compute replicas to 0 in your install-config.yaml file, as shown in the following compute stanza: Example install-config.yaml file for a three-node cluster apiVersion: v1 baseDomain: example.com compute: - name: worker platform: {} replicas: 0 # ... If you are deploying a cluster with user-provisioned infrastructure: After you create the Kubernetes manifest files, make sure that the spec.mastersSchedulable parameter is set to true in the cluster-scheduler-02-config.yml file. You can locate this file in <installation_directory>/manifests . For more information, see "Creating the Kubernetes manifest and Ignition config files" in "Installing a cluster on user-provisioned infrastructure in GCP by using Deployment Manager templates". Do not create additional worker nodes. Example cluster-scheduler-02-config.yml file for a three-node cluster apiVersion: config.openshift.io/v1 kind: Scheduler metadata: creationTimestamp: null name: cluster spec: mastersSchedulable: true policy: name: "" status: {} 13.2. Next steps Installing a cluster on GCP with customizations Installing a cluster on user-provisioned infrastructure in GCP by using Deployment Manager templates
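After the installation finishes, a quick way to confirm the three-node topology is to check that the control plane nodes are schedulable. This is a verification sketch only, assuming the oc CLI is logged in to the new cluster; it is not part of the documented installation procedure.
oc get nodes                                                             # expect three nodes carrying control-plane,master,worker roles
oc get scheduler cluster -o jsonpath='{.spec.mastersSchedulable}{"\n"}'  # expect: true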
[ "apiVersion: v1 baseDomain: example.com compute: - name: worker platform: {} replicas: 0", "apiVersion: config.openshift.io/v1 kind: Scheduler metadata: creationTimestamp: null name: cluster spec: mastersSchedulable: true policy: name: \"\" status: {}" ]
https://docs.redhat.com/en/documentation/openshift_container_platform/4.15/html/installing_on_gcp/installing-gcp-three-node
Chapter 7. Deprecated functionality
Chapter 7. Deprecated functionality RHBA-2020:3591 sapconf is deprecated and has been replaced by the RHEL System Roles for SAP .
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux_for_sap_solutions/8/html/8.x_release_notes/deprecated_functionality_8.x_release_notes
3.4. Converting to Boolean
3.4. Converting to Boolean JBoss Data Virtualization can automatically convert literal strings and numeric type values to Boolean values as follows: Table 3.3. Boolean Conversions Type Literal Value Boolean Value String 'false' false 'unknown' null other true Numeric 0 false other true
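The rules in Table 3.3 can be exercised directly with the CONVERT function. The following query is an illustrative sketch only; the column aliases are arbitrary and the literals are chosen to exercise each row of the table.
SELECT
    CONVERT('false', boolean)   AS string_false,
    CONVERT('unknown', boolean) AS string_unknown,
    CONVERT('yes', boolean)     AS string_other,
    CONVERT(0, boolean)         AS numeric_zero,
    CONVERT(17, boolean)        AS numeric_other
Per the table, the first and fourth columns return false, the second returns null, and the remaining columns return true.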
null
https://docs.redhat.com/en/documentation/red_hat_jboss_data_virtualization/6.4/html/development_guide_volume_3_reference_material/converting_to_boolean
Chapter 3. Configuring the Date and Time
Chapter 3. Configuring the Date and Time Modern operating systems distinguish between the following two types of clocks: A real-time clock ( RTC ), commonly referred to as a hardware clock , (typically an integrated circuit on the system board) that is completely independent of the current state of the operating system and runs even when the computer is shut down. A system clock , also known as a software clock , that is maintained by the kernel and its initial value is based on the real-time clock. Once the system is booted and the system clock is initialized, the system clock is completely independent of the real-time clock. The system time is always kept in Coordinated Universal Time ( UTC ) and converted in applications to local time as needed. Local time is the actual time in your current time zone, taking into account daylight saving time ( DST ). The real-time clock can use either UTC or local time. UTC is recommended. Red Hat Enterprise Linux 7 offers three command line tools that can be used to configure and display information about the system date and time: The timedatectl utility, which is new in Red Hat Enterprise Linux 7 and is part of systemd . The traditional date command. The hwclock utility for accessing the hardware clock. 3.1. Using the timedatectl Command The timedatectl utility is distributed as part of the systemd system and service manager and allows you to review and change the configuration of the system clock. You can use this tool to change the current date and time, set the time zone, or enable automatic synchronization of the system clock with a remote server. For information on how to display the current date and time in a custom format, see also Section 3.2, "Using the date Command" . 3.1.1. Displaying the Current Date and Time To display the current date and time along with detailed information about the configuration of the system and hardware clock, run the timedatectl command with no additional command line options: This displays the local and universal time, the currently used time zone, the status of the Network Time Protocol ( NTP ) configuration, and additional information related to DST. Example 3.1. Displaying the Current Date and Time The following is an example output of the timedatectl command on a system that does not use NTP to synchronize the system clock with a remote server: Important Changes to the status of chrony or ntpd will not be immediately noticed by timedatectl . If changes to the configuration or status of these tools is made, enter the following command: 3.1.2. Changing the Current Time To change the current time, type the following at a shell prompt as root : Replace HH with an hour, MM with a minute, and SS with a second, all typed in two-digit form. This command updates both the system time and the hardware clock. The result it is similar to using both the date --set and hwclock --systohc commands. The command will fail if an NTP service is enabled. See Section 3.1.5, "Synchronizing the System Clock with a Remote Server" to temporally disable the service. Example 3.2. Changing the Current Time To change the current time to 11:26 p.m., run the following command as root : By default, the system is configured to use UTC. To configure your system to maintain the clock in the local time, run the timedatectl command with the set-local-rtc option as root : To configure your system to maintain the clock in the local time, replace boolean with yes (or, alternatively, y , true , t , or 1 ). 
To configure the system to use UTC, replace boolean with no (or, alternatively, n , false , f , or 0 ). The default option is no . 3.1.3. Changing the Current Date To change the current date, type the following at a shell prompt as root : Replace YYYY with a four-digit year, MM with a two-digit month, and DD with a two-digit day of the month. Note that changing the date without specifying the current time results in setting the time to 00:00:00. Example 3.3. Changing the Current Date To change the current date to 2 June 2017 and keep the current time (11:26 p.m.), run the following command as root : 3.1.4. Changing the Time Zone To list all available time zones, type the following at a shell prompt: To change the currently used time zone, type as root : Replace time_zone with any of the values listed by the timedatectl list-timezones command. Example 3.4. Changing the Time Zone To identify which time zone is closest to your present location, use the timedatectl command with the list-timezones command line option. For example, to list all available time zones in Europe, type: To change the time zone to Europe/Prague , type as root : 3.1.5. Synchronizing the System Clock with a Remote Server As opposed to the manual adjustments described in the sections, the timedatectl command also allows you to enable automatic synchronization of your system clock with a group of remote servers using the NTP protocol. Enabling NTP enables the chronyd or ntpd service, depending on which of them is installed. The NTP service can be enabled and disabled using a command as follows: To enable your system to synchronize the system clock with a remote NTP server, replace boolean with yes (the default option). To disable this feature, replace boolean with no . Example 3.5. Synchronizing the System Clock with a Remote Server To enable automatic synchronization of the system clock with a remote server, type: The command will fail if an NTP service is not installed. See Section 18.3.1, "Installing chrony" for more information. 3.2. Using the date Command The date utility is available on all Linux systems and allows you to display and configure the current date and time. It is frequently used in scripts to display detailed information about the system clock in a custom format. For information on how to change the time zone or enable automatic synchronization of the system clock with a remote server, see Section 3.1, "Using the timedatectl Command" . 3.2.1. Displaying the Current Date and Time To display the current date and time, run the date command with no additional command line options: This displays the day of the week followed by the current date, local time, abbreviated time zone, and year. By default, the date command displays the local time. To display the time in UTC, run the command with the --utc or -u command line option: You can also customize the format of the displayed information by providing the +" format " option on the command line: Replace format with one or more supported control sequences as illustrated in Example 3.6, "Displaying the Current Date and Time" . See Table 3.1, "Commonly Used Control Sequences" for a list of the most frequently used formatting options, or the date (1) manual page for a complete list of these options. Table 3.1. Commonly Used Control Sequences Control Sequence Description %H The hour in the HH format (for example, 17 ). %M The minute in the MM format (for example, 30 ). %S The second in the SS format (for example, 24 ). 
%d The day of the month in the DD format (for example, 16 ). %m The month in the MM format (for example, 09 ). %Y The year in the YYYY format (for example, 2016 ). %Z The time zone abbreviation (for example, CEST ). %F The full date in the YYYY-MM-DD format (for example, 2016-09-16 ). This option is equal to %Y-%m-%d . %T The full time in the HH:MM:SS format (for example, 17:30:24). This option is equal to %H:%M:%S . Example 3.6. Displaying the Current Date and Time To display the current date and local time, type the following at a shell prompt: To display the current date and time in UTC, type the following at a shell prompt: To customize the output of the date command, type: 3.2.2. Changing the Current Time To change the current time, run the date command with the --set or -s option as root : Replace HH with an hour, MM with a minute, and SS with a second, all typed in two-digit form. By default, the date command sets the system clock to the local time. To set the system clock in UTC, run the command with the --utc or -u command line option: Example 3.7. Changing the Current Time To change the current time to 11:26 p.m., run the following command as root : 3.2.3. Changing the Current Date To change the current date, run the date command with the --set or -s option as root : Replace YYYY with a four-digit year, MM with a two-digit month, and DD with a two-digit day of the month. Note that changing the date without specifying the current time results in setting the time to 00:00:00. Example 3.8. Changing the Current Date To change the current date to 2 June 2017 and keep the current time (11:26 p.m.), run the following command as root : 3.3. Using the hwclock Command hwclock is a utility for accessing the hardware clock, also referred to as the Real Time Clock (RTC). The hardware clock is independent of the operating system you use and works even when the machine is shut down. This utility is used for displaying the time from the hardware clock. hwclock also contains facilities for compensating for systematic drift in the hardware clock. The hardware clock stores the values of: year, month, day, hour, minute, and second. It is not able to store the time standard, local time or Coordinated Universal Time (UTC), nor set the Daylight Saving Time (DST). The hwclock utility saves its settings in the /etc/adjtime file, which is created with the first change you make, for example, when you set the time manually or synchronize the hardware clock with the system time. Note For the changes in the hwclock behaviour between Red Hat Enterprise Linux version 6 and 7, see the Red Hat Enterprise Linux 7 Migration Planning Guide . 3.3.1. Displaying the Current Date and Time Running hwclock with no command line options as the root user returns the date and time in local time to standard output. Note that using the --utc or --localtime options with the hwclock command does not mean you are displaying the hardware clock time in UTC or local time. These options are used for setting the hardware clock to keep time in either of them. The time is always displayed in local time. Additionally, using the hwclock --utc or hwclock --local commands does not change the record in the /etc/adjtime file. This command can be useful when you know that the setting saved in /etc/adjtime is incorrect but you do not want to change the setting. On the other hand, you may receive misleading information if you use the command in an incorrect way. See the hwclock (8) manual page for more details. Example 3.9.
Displaying the Current Date and Time To display the current date and the current local time from the hardware clock, run as root : CEST is a time zone abbreviation and stands for Central European Summer Time. For information on how to change the time zone, see Section 3.1.4, "Changing the Time Zone" . 3.3.2. Setting the Date and Time Besides displaying the date and time, you can manually set the hardware clock to a specific time. When you need to change the hardware clock date and time, you can do so by appending the --set and --date options along with your specification: Replace dd with a day (a two-digit number), mmm with a month (a three-letter abbreviation), yyyy with a year (a four-digit number), HH with an hour (a two-digit number), MM with a minute (a two-digit number). At the same time, you can also set the hardware clock to keep the time in either UTC or local time by adding the --utc or --localtime options, respectively. In this case, UTC or LOCAL is recorded in the /etc/adjtime file. Example 3.10. Setting the Hardware Clock to a Specific Date and Time If you want to set the date and time to a specific value, for example, to "21:17, October 21, 2016", and keep the hardware clock in UTC, run the command as root in the following format: 3.3.3. Synchronizing the Date and Time You can synchronize the hardware clock and the current system time in both directions. Either you can set the hardware clock to the current system time by using this command: Note that if you use NTP, the hardware clock is automatically synchronized to the system clock every 11 minutes, and this command is useful only at boot time to get a reasonable initial system time. Or, you can set the system time from the hardware clock by using the following command: When you synchronize the hardware clock and the system time, you can also specify whether you want to keep the hardware clock in local time or UTC by adding the --utc or --localtime option. Similarly to using --set , UTC or LOCAL is recorded in the /etc/adjtime file. The hwclock --systohc --utc command is functionally similar to timedatectl set-local-rtc false and the hwclock --systohc --local command is an alternative to timedatectl set-local-rtc true . Example 3.11. Synchronizing the Hardware Clock with System Time To set the hardware clock to the current system time and keep the hardware clock in local time, run the following command as root : To avoid problems with time zone and DST switching, it is recommended to keep the hardware clock in UTC. Example 3.11, "Synchronizing the Hardware Clock with System Time" is useful, for example, in the case of a multi-boot setup with a Windows system, which assumes the hardware clock runs in local time by default, and all other systems need to accommodate it by using local time as well. It may also be needed with a virtual machine; if the virtual hardware clock provided by the host is running in local time, the guest system needs to be configured to use local time, too. 3.4. Additional Resources For more information on how to configure the date and time in Red Hat Enterprise Linux 7, see the resources listed below. Installed Documentation timedatectl (1) - The manual page for the timedatectl command line utility documents how to use this tool to query and change the system clock and its settings. date (1) - The manual page for the date command provides a complete list of supported command line options.
hwclock (8) - The manual page for the hwclock command provides a complete list of supported command line options. See Also Chapter 2, System Locale and Keyboard Configuration documents how to configure the keyboard layout. Chapter 6, Gaining Privileges documents how to gain administrative privileges by using the su and sudo commands. Chapter 10, Managing Services with systemd provides more information on systemd and documents how to use the systemctl command to manage system services.
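The commands referenced throughout this chapter can be combined into a short workflow. The following is a hedged shell sketch, assuming a Red Hat Enterprise Linux 7 host with chrony (or ntpd) installed; the Europe/Prague time zone is only an illustrative choice:
~]# timedatectl set-timezone Europe/Prague          # choose the zone closest to your location
~]# timedatectl set-ntp yes                         # enables chronyd or ntpd; fails if neither is installed
~]$ timedatectl | grep "NTP synchronized"           # verify that synchronization is active
~]# hwclock --systohc --utc                         # write the system time to the hardware clock in UTC
~]$ cat /etc/adjtime                                # the third line records UTC or LOCAL
Keeping the hardware clock in UTC, as in the hwclock step above, avoids the DST switching problems described earlier in this chapter.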
[ "timedatectl", "~]USD timedatectl Local time: Mon 2016-09-16 19:30:24 CEST Universal time: Mon 2016-09-16 17:30:24 UTC Timezone: Europe/Prague (CEST, +0200) NTP enabled: no NTP synchronized: no RTC in local TZ: no DST active: yes Last DST change: DST began at Sun 2016-03-31 01:59:59 CET Sun 2016-03-31 03:00:00 CEST Next DST change: DST ends (the clock jumps one hour backwards) at Sun 2016-10-27 02:59:59 CEST Sun 2016-10-27 02:00:00 CET", "~]# systemctl restart systemd-timedated.service", "timedatectl set-time HH:MM:SS", "~]# timedatectl set-time 23:26:00", "timedatectl set-local-rtc boolean", "timedatectl set-time YYYY-MM-DD", "~]# timedatectl set-time \"2017-06-02 23:26:00\"", "timedatectl list-timezones", "timedatectl set-timezone time_zone", "~]# timedatectl list-timezones | grep Europe Europe/Amsterdam Europe/Andorra Europe/Athens Europe/Belgrade Europe/Berlin Europe/Bratislava ...", "~]# timedatectl set-timezone Europe/Prague", "timedatectl set-ntp boolean", "~]# timedatectl set-ntp yes", "date", "date --utc", "date +\"format\"", "~]USD date Mon Sep 16 17:30:24 CEST 2016", "~]USD date --utc Mon Sep 16 15:30:34 UTC 2016", "~]USD date +\"%Y-%m-%d %H:%M\" 2016-09-16 17:30", "date --set HH:MM:SS", "date --set HH:MM:SS --utc", "~]# date --set 23:26:00", "date --set YYYY-MM-DD", "~]# date --set \"2017-06-02 23:26:00\"", "hwclock", "~]# hwclock Tue 15 Apr 2017 04:23:46 PM CEST -0.329272 seconds", "hwclock --set --date \"dd mmm yyyy HH:MM\"", "~]# hwclock --set --date \"21 Oct 2016 21:17\" --utc", "hwclock --systohc", "hwclock --hctosys", "~]# hwclock --systohc --localtime" ]
https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/system_administrators_guide/chap-configuring_the_date_and_time
Chapter 88. DockerOutput schema reference
Chapter 88. DockerOutput schema reference Used in: Build The type property is a discriminator that distinguishes use of the DockerOutput type from ImageStreamOutput . It must have the value docker for the type DockerOutput . Property Property type Description image string The full name which should be used for tagging and pushing the newly built image. For example quay.io/my-organization/my-custom-connect:latest . Required. pushSecret string Container Registry Secret with the credentials for pushing the newly built image. additionalKanikoOptions string array Configures additional options which will be passed to the Kaniko executor when building the new Connect image. Allowed options are: --customPlatform, --insecure, --insecure-pull, --insecure-registry, --log-format, --log-timestamp, --registry-mirror, --reproducible, --single-snapshot, --skip-tls-verify, --skip-tls-verify-pull, --skip-tls-verify-registry, --verbosity, --snapshotMode, --use-new-run. These options will be used only on Kubernetes, where the Kaniko executor is used. They will be ignored on OpenShift. The options are described in the Kaniko GitHub repository . Changing this field does not trigger a new build of the Kafka Connect image. type string Must be docker .
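As a brief usage note, the Secret referenced by pushSecret is a standard container registry secret and can be created ahead of time. The following shell sketch is illustrative only; the secret name, registry server, and credentials are placeholders rather than values defined by this schema, and the commented fragment simply indicates where the properties above would appear in a build output configuration:
~]$ oc create secret docker-registry my-push-secret \
    --docker-server=quay.io \
    --docker-username=my-user \
    --docker-password=my-token
# Referenced from the build output (sketch):
#   output:
#     type: docker
#     image: quay.io/my-organization/my-custom-connect:latest
#     pushSecret: my-push-secret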
null
https://docs.redhat.com/en/documentation/red_hat_streams_for_apache_kafka/2.7/html/streams_for_apache_kafka_api_reference/type-DockerOutput-reference
5.328. tar
5.328. tar 5.328.1. RHBA-2012:1372 - tar bug fix update Updated tar packages that fix one bug are now available for Red Hat Enterprise Linux 6. The tar packages provide the GNU tar program. GNU tar allows you to save multiple files in one archive and to restore the files from that archive. This update fixes the following bug: Bug Fix BZ# 841308 Prior to this update, tar failed to match and extract given file names from an archive when this archive was created with the options "--sparse" and "--posix". This update modifies the underlying code to match and extract the given file names as expected. All users of tar are advised to upgrade to these updated packages, which fix this bug. 5.328.2. RHBA-2012:0849 - tar bug fix and enhancement update Updated tar packages that fix several bugs and add one enhancement are now available for Red Hat Enterprise Linux 6. The GNU tar program can save multiple files in one archive and restore the files from that archive. Bug Fixes BZ# 653433 Before this update, tar could terminate with a segmentation fault and return code 139. This happened when tar was used for incremental backup of the root directory with the option "--listed-incremental" (short option "-g") due to incorrect directory name resolution. The root directory name is now resolved correctly and the backup process succeeds in this scenario. BZ# 656834 The tar utility archived sparse files with long names (about 100 characters) incorrectly if run with the "--posix" and "--sparse" options (PAX mode). Such files were stored with misleading names inside the tar archive as there was not enough space allocated for the file names. Subsequent unpacking of the archive resulted in confusing output file names. With this update, more space is now allocated for the file names in this scenario and the problem no longer occurs. BZ# 698212 If tar was run with the "--remove-files" option and the archived directory contained a file and a symbolic link pointing to the file, the file was deleted but not backed up. The archiving process terminated with an error. With this update, the file is archived as expected in this scenario. BZ# 726723 The tar unpacking process could enter an infinite loop and consume extensive CPU resources when run with the "--keep-old-files" option. This happened when unpacking an archive with symbolic links and the target of the symbolic link already existed. With this update, the code has been modified to handle symbolic links correctly in this scenario. BZ# 768724 The tar tool used the glibc fnmatch() function to match file names. However, the function failed to match a file name when the archived file name contained characters not supported by the default locale. Consequently, the file was not unpacked. With this update, tar uses the gnulib fnmatch() and the file name is matched as expected. BZ# 782628 If tar was run with the "--remove-files" option, it failed to remove the archived files when append mode was activated (the -r option). With this update, tar with the "--remove-files" option now calls the function that removes the files after they have been archived and the option works as expected. BZ# 688567 The tar tool failed to update the target archive when run with the "--update" and "--directory" options, returned the "Cannot stat: No such file or directory" error message, and the directory content was not archived. With this update, the tar command with the two options now works as expected.
BZ# 799252 When extracting an archive with the "--keep-old-files" option, tar silently skipped already existing files. With this update, tar returns error code 2 and a warning in this scenario. Also, the "--skip-old-files" option has been added to allow the "--keep-old-files" behavior without returning errors for files that already exist. BZ# 807728 When run with the "--list" (-t) option, tar returned the "tar: write error" message, even though the execution succeeded. This happened if the command used redirection with a pipeline and the command following the redirection failed to process the entire tar command output. With this update, the spurious message is no longer returned in this scenario. Enhancement BZ# 760665 When archiving a sparse file containing 0 blocks of data, the archiving process experienced severe performance issues because tar was scanning the sparse file for non-existing data. With this update, a sparse file containing 0 blocks is detected by the stat() call and the archiving process is now faster for such files. All tar users are advised to upgrade to these updated packages, which fix these bugs and add this enhancement.
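To put the options mentioned in these errata in context, the following shell sketch shows a typical incremental backup, a cautious extraction, and a piped listing; the archive and directory names are illustrative only:
~]# tar --create --listed-incremental=/var/backup/data.snar \
       --file=/var/backup/data-level0.tar /srv/data           # level-0 backup with a snapshot file (see BZ#653433)
~]# tar --extract --skip-old-files \
       --file=/var/backup/data-level0.tar -C /srv/restore     # keep existing files without reporting errors (BZ#799252)
~]$ tar --list --file=/var/backup/data-level0.tar | head      # the piped listing addressed by BZ#807728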
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/6.3_technical_notes/tar
15.25. Solving Common Replication Conflicts
15.25. Solving Common Replication Conflicts Multi-supplier replication uses an eventually-consistent replication model. This means that the same entries can be changed on different servers. When replication occurs between the servers, the conflicting changes need to be resolved. In most cases, resolution occurs automatically, based on the time stamp associated with the change on each server. The most recent change takes precedence. However, there are some cases where conflicts require manual intervention in order to reach a resolution. Entries with a change conflict that cannot be resolved automatically by the replication process are called conflict entries. To list conflict entries, enter: 15.25.1. Solving Naming Conflicts When two entries are created with the same DN on different servers, the automatic conflict resolution procedure during replication renames the last entry created, including the entry's unique identifier in the DN. Every directory entry includes a unique identifier stored in the nsuniqueid operational attribute. When a naming conflict occurs, this unique ID is appended to the non-unique DN. For example, if the uid= user_name ,ou=People,dc=example,dc=com entry was created on two different servers, replication adds the unique ID to the DN of the last entry created. This means that the following entries exist: uid= user_name ,ou=People,dc=example,dc=com nsuniqueid= 66446001-1dd211b2 +uid= user_name ,ou=People,dc=example,dc=com To resolve the replication conflict, you must manually decide how to proceed: To keep only the valid entry ( uid= user_name ,ou=People,dc=example,dc=com ) and delete the conflict entry, enter: To keep only the conflict entry ( nsuniqueid= 66446001-1dd211b2 +uid= user_name ,ou=People,dc=example,dc=com ), enter: To keep both entries, enter: To keep the conflict entry, you must specify a new Relative Distinguished Name (RDN) in the --new-rdn option to rename the conflict entry. The command renames the conflict entry to uid=user_name_NEW,ou=People,dc=example,dc=com . 15.25.2. Solving Orphan Entry Conflicts When a delete operation is replicated and the consumer server finds that the entry to be deleted has child entries, the conflict resolution procedure creates a glue entry to avoid having orphaned entries in the directory. In the same way, when an add operation is replicated and the consumer server cannot find the parent entry, the conflict resolution procedure creates a glue entry representing the parent so that the new entry is not an orphan entry. Glue entries are temporary entries that include the object classes glue and extensibleObject . Glue entries can be created in several ways: If the conflict resolution procedure finds a deleted entry with a matching unique identifier, the glue entry is a resurrection of that entry, with the addition of the glue object class and the nsds5ReplConflict attribute. In such cases, either modify the glue entry to remove the glue object class and the nsds5ReplConflict attribute to keep the entry as a normal entry or delete the glue entry and its child entries. The server creates a minimalistic entry with the glue and extensibleObject object classes. In such cases, modify the entry to turn it into a meaningful entry or delete it and all of its child entries. To list all glue entries: To delete a glue entry and its child entries: To convert a glue entry into a regular entry: 15.25.3.
Resolving Errors for Obsolete or Missing Suppliers Information about the replication topology, that is all suppliers which supply updates to each other and other replicas within the same replication group, is contained in a set of metadata called the replica update vector (RUV) . The RUV contains information about the supplier such as its ID and URL, its latest change state number (CSN) on the local server, and the CSN of the first change. Both suppliers and consumers store RUV information, and they use it to control replication updates. When one supplier is removed from the replication topology, it may remain in another replica's RUV. When the other replica is restarted, it can record errors in its log, warning that the replication plug-in does not recognize the removed supplier. The errors will look similar to the following example: Note which replica is affected and its ID; in this case, replica 8 . When the supplier is permanently removed from the topology, then any lingering metadata about that supplier should be purged from every other supplier's RUV entry. Use the cleanallruv directory task to remove a RUV entry from all suppliers in the topology. Note The cleanallruv task is replicated. Therefore, you only need to run it on one supplier. Procedure 15.1. Removing an Obsolete or Missing Supplier Using the cleanallruv Task Operation List all RUV records and replica IDs, both valid and invalid, as deleted suppliers may have left metadata on other suppliers: Note the returned replica IDs: 1 , 20 , 9 , and 8 . List the currently defined and valid replica IDs of all suppliers which are replicating databases by searching the replica configuration entries DN cn=replica under the cn=config suffix. Note Consumers and read-only nodes always have the replica ID set to 65535 , and nsDS5ReplicaType: 3 signifies a supplier. After you search all URIs returned in the first step (in this procedure, m1.example.com and m2.example.com ), compare the list of returned suppliers (entries which have nsDS5ReplicaType: 3 ) to the list of RUVs from the first step. In the above example, this search only returned IDs 1 and 20 , but the RUV search in the first step also returned 9 and 8 for URI m2.example.com . This means that the latter two have been removed, and their RUVs need to be cleaned. After determining which RUVs require cleaning, run the cleanallruv task and provide the following information about your replication configuration: The base DN of the replicated database ( replica-base-dn ) The replica ID ( replica-id ) Whether to catch up to the maximum change state number (CSN) from the missing supplier, or whether to just remove all RUV entries and miss any updates ( replica-force-cleaning ); setting this attribute to no means that the task will wait for all the configured replicas to catch up with all the changes from the removed replica first, and then remove the RUV. Note The cleanallruv task is replicated. Therefore, you only need to run it on one supplier. Repeat the same for every RUV you want to clean (ID 9 in this procedure). After cleaning the RUVs of all replicas discovered earlier, you can again use the search from the first step to verify that all extra RUVs are removed: As you can see in the above output, replica IDs 8 and 9 are no longer present, which indicates that their RUVs have been cleaned successfully.
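As noted in the procedure, the task is repeated for each stale RUV. A sketch of the repeat step for replica ID 9 , reusing the dsconf invocation shown for replica ID 8 in the command listing that follows:
~]# dsconf -D "cn=Directory Manager" ldap://m2.example.com repl-tasks cleanallruv --suffix="dc=example,dc=com" --replica-id=9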
[ "dsconf -D \"cn=Directory Manager\" ldap://server.example.com repl-conflict list dc=example,dc=com", "dsconf -D \"cn=Directory Manager\" ldap://server.example.com repl-conflict delete nsuniqueid=66446001-1dd211b2+uid=user_name,ou=People,dc=example,dc=com", "dsconf -D \"cn=Directory Manager\" ldap://server.example.com repl-conflict swap nsuniqueid=66446001-1dd211b2+uid=user_name,ou=People,dc=example,dc=com", "dsconf -D \"cn=Directory Manager\" ldap://server.example.com repl-conflict convert --new-rdn= uid=user_name_NEW nsuniqueid=66446001-1dd211b2+uid=user_name,ou=People,dc=example,dc=com", "dsconf -D \"cn=Directory Manager\" ldap://server.example.com repl-conflict list-glue suffix", "dsconf -D \"cn=Directory Manager\" ldap://server.example.com repl-conflict delete-glue DN_of_glue_entry", "dsconf -D \"cn=Directory Manager\" ldap://server.example.com repl-conflict convert-glue DN_of_glue_entry", "[22/Jan/2021:17:16:01 -0500] NSMMReplicationPlugin - ruv_compare_ruv: RUV [changelog max RUV] does not contain element [{replica 8 ldap://m2.example.com:389} 4aac3e59000000080000 4c6f2a02000000080000] which is present in RUV [database RUV] <...several more samples...> [22/Jan/2021:17:16:01 -0500] NSMMReplicationPlugin - replica_check_for_data_reload: Warning: for replica dc=example,dc=com there were some differences between the changelog max RUV and the database RUV. If there are obsolete elements in the database RUV, you should remove them using the CLEANALLRUV task. If they are not obsolete, you should check their status to see why there are no changes from those servers in the changelog.", "ldapsearch -o ldif-wrap=no -xLLL -H m1.example.com -D \"cn=Directory Manager\" -W -b dc=example,dc=com '(&(nsuniqueid=ffffffff-ffffffff-ffffffff-ffffffff)(objectclass=nstombstone))' nsDS5ReplicaId nsDS5ReplicaType nsds50ruv dn: cn=replica,cn=dc\\3Dexample\\2Cdc\\3Dcom,cn=mapping tree,cn=config nsDS5ReplicaId: 1 nsDS5ReplicaType: 3 nsds50ruv: {replicageneration} 55d5093a000000010000 nsds50ruv: {replica 1 ldap://m1.example.com:389} 55d57026000000010000 55d57275000000010000 nsds50ruv: {replica 20 ldap://m2.example.com:389} 55e74b8c000000140000 55e74bf7000000140000 nsds50ruv: {replica 9 ldap://m2.example.com:389} nsds50ruv: {replica 8 ldap://m2.example.com:389} 506f921f000000080000 50774211000500080000", "ldapsearch -o ldif-wrap=no -xLLL -H m1.example.com m2.example.com -D \"cn=Directory Manager\" -W -b cn=config cn=replica nsDS5ReplicaId nsDS5ReplicaType dn: cn=replica,cn=dc\\3Dexample\\2Cdc\\3Dcom,cn=mapping tree,cn=config nsDS5ReplicaId: 1 nsDS5ReplicaType: 3 dn: cn=replica,cn=dc\\3Dexample\\2Cdc\\3Dcom,cn=mapping tree,cn=config nsDS5ReplicaId: 20 nsDS5ReplicaType: 3", "dsconf -D \"cn=Directory Manager\" ldap://m2.example.com repl-tasks cleanallruv --suffix=\" dc=example,dc=com \" --replica-id= 8", "ldapsearch -o ldif-wrap=no -xLLL -H m1.example.com -D \"cn=Directory Manager\" -W -b dc=example,dc=com '(&(nsuniqueid=ffffffff-ffffffff-ffffffff-ffffffff)(objectclass=nstombstone))' nsDS5ReplicaId nsDS5ReplicaType nsds50ruv dn: cn=replica,cn=dc\\3Dexample\\2Cdc\\3Dcom,cn=mapping tree,cn=config nsDS5ReplicaId: 1 nsDS5ReplicaType: 3 nsds50ruv: {replicageneration} 55d5093a000000010000 nsds50ruv: {replica 1 ldap://m1.example.com:389} 55d57026000000010000 55d57275000000010000 nsds50ruv: {replica 20 ldap://m2.example.com:389} 55e74b8c000000140000 55e74bf7000000140000" ]
https://docs.redhat.com/en/documentation/red_hat_directory_server/11/html/administration_guide/managing_replication-solving_common_replication_conflicts
Chapter 8. Flow control
Chapter 8. Flow control Flow control prevents producers and consumers from becoming overburdened by limiting the flow of data between them. AMQ Core Protocol JMS allows you to configure flow control for both consumers and producers. Consumer flow control Consumer flow control regulates the flow of data between the broker and the client as the client consumes messages from the broker. AMQ Core Protocol JMS buffers messages by default before delivering them to consumers. Without a buffer, the client would first need to request each message from the broker before consuming it. This type of "round-trip" communication is costly. Regulating the flow of data on the client side is important because out of memory issues can result when a consumer cannot process messages quickly enough and the buffer begins to overflow with incoming messages. Producer flow control In a similar way to consumer window-based flow control, the client can limit the amount of data sent from a producer to a broker to prevent the broker from being overburdened with too much data. In the case of a producer, the window size determines the number of bytes that can be in flight at any one time. 8.1. Setting the consumer window size The maximum size of messages held in the client-side buffer is determined by its window size . The default size of the window for AMQ Core Protocol JMS is 1 MiB, or 1024 * 1024 bytes. The default is fine for most use cases. For other cases, finding the optimal value for the window size might require benchmarking your system. AMQ Core Protocol JMS allows you to set the buffer window size if you need to change the default. The following examples show how to set the consumer window size parameter when using AMQ Core Protocol JMS. Each example sets the consumer window size to 300,000 bytes. Procedure If the client uses JNDI to instantiate its connection factory, include the consumerWindowSize parameter as part of the connection string URL. Store the URL within a JNDI context environment. The example below uses a jndi.properties file to store the URL. If the client does not use JNDI to instantiate its connection factory, pass a value to ActiveMQConnectionFactory.setConsumerWindowSize() . ConnectionFactory cf = ActiveMQJMSClient.createConnectionFactory(...) cf.setConsumerWindowSize(300000); 8.2. Setting the producer window size The window size is negotiated between the broker and producer on the basis of credits, one credit for each byte in the window. As messages are sent and credits are used, the producer must request, and be granted, credits from the broker before it can send more messages. The exchange of credits between producer and broker regulates the flow of data between them. The following examples show how to set the producer window size to 1024 bytes when using AMQ Core Protocol JMS. Procedure If the client uses JNDI to instantiate its connection factory, include the producerWindowSize parameter as part of the connection string URL. Store the URL within a JNDI context environment. The example below uses a jndi.properties file to store the URL. If the client does not use JNDI to instantiate its connection factory, pass the value to ActiveMQConnectionFactory.setProducerWindowSize() . ConnectionFactory cf = ActiveMQJMSClient.createConnectionFactory(...) cf.setProducerWindowSize(1024); 8.3. Handling fast consumers Fast consumers can process messages as fast as they consume them. If you are confident that the consumers in your messaging system are that fast, consider setting the window size to -1. 
Setting the window size to this value allows unbounded message buffering on the client. Use this setting with caution, however. Memory on the client can overflow if the consumer is not able to process messages as fast as it receives them. Setting the window size for fast consumers The examples below show how to set the window size to -1 when using an AMQ Core Protocol JMS client that is a fast consumer of messages. Procedure If the client uses JNDI to instantiate its connection factory, include the consumerWindowSize parameter as part of the connection string URL. Store the URL within a JNDI context environment. The example below uses a jndi.properties file to store the URL. If the client does not use JNDI to instantiate its connection factory, pass a value to ActiveMQConnectionFactory.setConsumerWindowSize() . ConnectionFactory cf = ActiveMQJMSClient.createConnectionFactory(...) cf.setConsumerWindowSize(-1); 8.4. Handling slow consumers Slow consumers take significant time to process each message. In these cases, buffering messages on the client is not recommended. Messages remain on the broker ready to be consumed by other consumers instead. One benefit of turning off the buffer is that it provides deterministic distribution between multiple consumers on a queue. To handle slow consumers by disabling the client-side buffer, set the window size to 0. Setting the window size for slow consumers The examples below show how to set the window size to 0 when using an AMQ Core Protocol JMS client that is a slow consumer of messages. Procedure If the client uses JNDI to instantiate its connection factory, include the consumerWindowSize parameter as part of the connection string URL. Store the URL within a JNDI context environment. The example below uses a jndi.properties file to store the URL. If the client does not use JNDI to instantiate its connection factory, pass a value to ActiveMQConnectionFactory.setConsumerWindowSize() . ConnectionFactory cf = ActiveMQJMSClient.createConnectionFactory(...) cf.setConsumerWindowSize(0); Additional resources See the example no-consumer-buffering in <install-dir> /examples/standard for an example that shows how to configure the broker to prevent consumer buffering when dealing with slow consumers. 8.5. Setting the rate of message consumption You can regulate the rate at which a consumer can consume messages. Also known as throttling , regulating the rate of consumption ensures that a consumer never consumes messages at a rate faster than configuration allows. Note Rate-limited flow control can be used in conjunction with window-based flow control. Rate-limited flow control affects only how many messages a client can consume per second and not how many messages are in its buffer. With a slow rate limit and a high window-based limit, the internal buffer of the client fills up with messages quickly. The rate must be a positive integer to enable this functionality and is the maximum desired message consumption rate specified in units of messages per second. Setting the rate to -1 disables rate-limited flow control. The default value is -1. The examples below show a client that limits the rate of consuming messages to 10 messages per second. Procedure If the client uses JNDI to instantiate its connection factory, include the consumerMaxRate parameter as part of the connection string URL. Store the URL within a JNDI context environment. The example below uses a jndi.properties file to store the URL.
If the client does not use JNDI to instantiate its connection factory, pass the value to ActiveMQConnectionFactory.setConsumerMaxRate() . ConnectionFactory cf = ActiveMQJMSClient.createConnectionFactory(...) cf.setConsumerMaxRate(10); Additional resources See the consumer-rate-limit example in <install-dir> /examples/standard for a working example of how to limit the consumer rate. 8.6. Setting the rate of message production AMQ Core Protocol JMS can also limit the rate at which a producer sends messages. The producer rate is specified in units of messages per second. Setting it to -1, the default, disables rate-limited flow control. The examples below show how to set the rate of sending messages when the producer is using AMQ Core Protocol JMS. Each example sets the maximum rate to 10 messages per second. Procedure If the client uses JNDI to instantiate its connection factory, include the producerMaxRate parameter as part of the connection string URL. Store the URL within a JNDI context environment. The example below uses a jndi.properties file to store the URL. If the client does not use JNDI to instantiate its connection factory, pass the value to ActiveMQConnectionFactory.setProducerMaxRate() . ConnectionFactory cf = ActiveMQJMSClient.createConnectionFactory(...) cf.setProducerMaxRate(10); Additional resources See the producer-rate-limit example in <install-dir> /examples/standard for a working example of how to limit the rate of sending messages.
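Because rate-limited and window-based flow control can be combined, both parameters can be set on a single connection URL. The following jndi.properties sketch follows the conventions of the examples in the listing below; the factory name and values are illustrative:
java.naming.factory.initial=org.apache.activemq.artemis.jndi.ActiveMQInitialContextFactory
connectionFactory.myConnectionFactory=tcp://localhost:61616?consumerWindowSize=300000&consumerMaxRate=10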
[ "java.naming.factory.initial=org.apache.activemq.artemis.jndi.ActiveMQInitialContextFactory connectionFactory.myConnectionFactory=tcp://localhost:61616?consumerWindowSize=300000", "ConnectionFactory cf = ActiveMQJMSClient.createConnectionFactory(...) cf.setConsumerWindowSize(300000);", "java.naming.factory.initial=org.apache.activemq.artemis.jndi.ActiveMQInitialContextFactory java.naming.provider.url=tcp://localhost:61616?producerWindowSize=1024", "ConnectionFactory cf = ActiveMQJMSClient.createConnectionFactory(...) cf.setProducerWindowSize(1024);", "java.naming.factory.initial=org.apache.activemq.artemis.jndi.ActiveMQInitialContextFactory connectionFactory.myConnectionFactory=tcp://localhost:61616?consumerWindowSize=-1", "ConnectionFactory cf = ActiveMQJMSClient.createConnectionFactory(...) cf.setConsumerWindowSize(-1);", "java.naming.factory.initial=org.apache.activemq.artemis.jndi.ActiveMQInitialContextFactory connectionFactory.myConnectionFactory=tcp://localhost:61616?consumerWindowSize=0", "ConnectionFactory cf = ActiveMQJMSClient.createConnectionFactory(...) cf.setConsumerWindowSize(0);", "java.naming.factory.initial=org.apache.activemq.artemis.jndi.ActiveMQInitialContextFactory java.naming.provider.url=tcp://localhost:61616?consumerMaxRate=10", "ConnectionFactory cf = ActiveMQJMSClient.createConnectionFactory(...) cf.setConsumerMaxRate(10);", "java.naming.factory.initial=org.apache.activemq.artemis.jndi.ActiveMQInitialContextFactory java.naming.provider.url=tcp://localhost:61616?producerMaxRate=10", "ConnectionFactory cf = ActiveMQJMSClient.createConnectionFactory(...) cf.setProducerMaxRate(10);" ]
https://docs.redhat.com/en/documentation/red_hat_amq/2021.q3/html/using_the_amq_core_protocol_jms_client/flow_control
probe::udp.sendmsg.return
probe::udp.sendmsg.return Name probe::udp.sendmsg.return - Fires whenever an attempt to send a UDP message is completed Synopsis Values name The name of this probe size Number of bytes sent by the process Context The process which sent a UDP message
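A brief usage sketch: the probe can be exercised from the shell with a stap one-liner that prints the sending process and the reported size value each time the probe fires; the output format is only an illustration:
~]# stap -e 'probe udp.sendmsg.return { printf("%s sent %d bytes\n", execname(), size) }'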
[ "udp.sendmsg.return" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/systemtap_tapset_reference/api-udp-sendmsg-return