Installation configuration
Installation configuration OpenShift Container Platform 4.16 Cluster-wide configuration during installations Red Hat OpenShift Documentation Team
null
https://docs.redhat.com/en/documentation/openshift_container_platform_installation/4.16/html/installation_configuration/index
Chapter 7. Managing alerts
Chapter 7. Managing alerts In Red Hat OpenShift Service on AWS 4, the Alerting UI enables you to manage alerts, silences, and alerting rules. Alerting rules . Alerting rules contain a set of conditions that outline a particular state within a cluster. Alerts are triggered when those conditions are true. An alerting rule can be assigned a severity that defines how the alerts are routed. Alerts . An alert is fired when the conditions defined in an alerting rule are true. Alerts provide a notification that a set of circumstances are apparent within an Red Hat OpenShift Service on AWS cluster. Silences . A silence can be applied to an alert to prevent notifications from being sent when the conditions for an alert are true. You can mute an alert after the initial notification, while you work on resolving the issue. Note The alerts, silences, and alerting rules that are available in the Alerting UI relate to the projects that you have access to. For example, if you are logged in as a user with the cluster-admin role, you can access all alerts, silences, and alerting rules. 7.1. Accessing the Alerting UI from the Administrator perspective The Alerting UI is accessible through the Administrator perspective of the Red Hat OpenShift Service on AWS web console. From the Administrator perspective, go to Observe Alerting . The three main pages in the Alerting UI in this perspective are the Alerts , Silences , and Alerting rules pages. 7.2. Accessing the Alerting UI from the Developer perspective The Alerting UI is accessible through the Developer perspective of the Red Hat OpenShift Service on AWS web console. From the Developer perspective, go to Observe and go to the Alerts tab. Select the project that you want to manage alerts for from the Project: list. In this perspective, alerts, silences, and alerting rules are all managed from the Alerts tab. The results shown in the Alerts tab are specific to the selected project. Note In the Developer perspective, you can select from core Red Hat OpenShift Service on AWS and user-defined projects that you have access to in the Project: <project_name> list. However, alerts, silences, and alerting rules relating to core Red Hat OpenShift Service on AWS projects are not displayed if you are not logged in as a cluster administrator. 7.3. Searching and filtering alerts, silences, and alerting rules You can filter the alerts, silences, and alerting rules that are displayed in the Alerting UI. This section provides a description of each of the available filtering options. 7.3.1. Understanding alert filters In the Administrator perspective, the Alerts page in the Alerting UI provides details about alerts relating to default Red Hat OpenShift Service on AWS and user-defined projects. The page includes a summary of severity, state, and source for each alert. The time at which an alert went into its current state is also shown. You can filter by alert state, severity, and source. By default, only Platform alerts that are Firing are displayed. The following describes each alert filtering option: State filters: Firing . The alert is firing because the alert condition is true and the optional for duration has passed. The alert continues to fire while the condition remains true. Pending . The alert is active but is waiting for the duration that is specified in the alerting rule before it fires. Silenced . The alert is now silenced for a defined time period. Silences temporarily mute alerts based on a set of label selectors that you define. 
Notifications are not sent for alerts that match all the listed values or regular expressions. Severity filters: Critical . The condition that triggered the alert could have a critical impact. The alert requires immediate attention when fired and is typically paged to an individual or to a critical response team. Warning . The alert provides a warning notification about something that might require attention to prevent a problem from occurring. Warnings are typically routed to a ticketing system for non-immediate review. Info . The alert is provided for informational purposes only. None . The alert has no defined severity. You can also create custom severity definitions for alerts relating to user-defined projects. Source filters: Platform . Platform-level alerts relate only to default Red Hat OpenShift Service on AWS projects. These projects provide core Red Hat OpenShift Service on AWS functionality. User . User alerts relate to user-defined projects. These alerts are user-created and are customizable. User-defined workload monitoring can be enabled postinstallation to provide observability into your own workloads. 7.3.2. Understanding silence filters In the Administrator perspective, the Silences page in the Alerting UI provides details about silences applied to alerts in default Red Hat OpenShift Service on AWS and user-defined projects. The page includes a summary of the state of each silence and the time at which a silence ends. You can filter by silence state. By default, only Active and Pending silences are displayed. The following describes each silence state filter option: State filters: Active . The silence is active and the alert will be muted until the silence is expired. Pending . The silence has been scheduled and it is not yet active. Expired . The silence has expired and notifications will be sent if the conditions for an alert are true. 7.3.3. Understanding alerting rule filters In the Administrator perspective, the Alerting rules page in the Alerting UI provides details about alerting rules relating to default Red Hat OpenShift Service on AWS and user-defined projects. The page includes a summary of the state, severity, and source for each alerting rule. You can filter alerting rules by alert state, severity, and source. By default, only Platform alerting rules are displayed. The following describes each alerting rule filtering option: Alert state filters: Firing . The alert is firing because the alert condition is true and the optional for duration has passed. The alert continues to fire while the condition remains true. Pending . The alert is active but is waiting for the duration that is specified in the alerting rule before it fires. Silenced . The alert is now silenced for a defined time period. Silences temporarily mute alerts based on a set of label selectors that you define. Notifications are not sent for alerts that match all the listed values or regular expressions. Not Firing . The alert is not firing. Severity filters: Critical . The conditions defined in the alerting rule could have a critical impact. When true, these conditions require immediate attention. Alerts relating to the rule are typically paged to an individual or to a critical response team. Warning . The conditions defined in the alerting rule might require attention to prevent a problem from occurring. Alerts relating to the rule are typically routed to a ticketing system for non-immediate review. Info . The alerting rule provides informational alerts only. None . 
The alerting rule has no defined severity. You can also create custom severity definitions for alerting rules relating to user-defined projects. Source filters: Platform . Platform-level alerting rules relate only to default Red Hat OpenShift Service on AWS projects. These projects provide core Red Hat OpenShift Service on AWS functionality. User . User-defined workload alerting rules relate to user-defined projects. These alerting rules are user-created and are customizable. User-defined workload monitoring can be enabled postinstallation to provide observability into your own workloads. 7.3.4. Searching and filtering alerts, silences, and alerting rules in the Developer perspective In the Developer perspective, the Alerts page in the Alerting UI provides a combined view of alerts and silences relating to the selected project. A link to the governing alerting rule is provided for each displayed alert. In this view, you can filter by alert state and severity. By default, all alerts in the selected project are displayed if you have permission to access the project. These filters are the same as those described for the Administrator perspective. 7.4. Getting information about alerts, silences, and alerting rules from the Administrator perspective The Alerting UI provides detailed information about alerts and their governing alerting rules and silences. Prerequisites You have access to the cluster as a user with view permissions for the project that you are viewing alerts for. Procedure To obtain information about alerts: From the Administrator perspective of the Red Hat OpenShift Service on AWS web console, go to the Observe Alerting Alerts page. Optional: Search for alerts by name by using the Name field in the search list. Optional: Filter alerts by state, severity, and source by selecting filters in the Filter list. Optional: Sort the alerts by clicking one or more of the Name , Severity , State , and Source column headers. Click the name of an alert to view its Alert details page. The page includes a graph that illustrates alert time series data. It also provides the following information about the alert: A description of the alert Messages associated with the alert Labels attached to the alert A link to its governing alerting rule Silences for the alert, if any exist To obtain information about silences: From the Administrator perspective of the Red Hat OpenShift Service on AWS web console, go to the Observe Alerting Silences page. Optional: Filter the silences by name using the Search by name field. Optional: Filter silences by state by selecting filters in the Filter list. By default, Active and Pending filters are applied. Optional: Sort the silences by clicking one or more of the Name , Firing alerts , State , and Creator column headers. Select the name of a silence to view its Silence details page. The page includes the following details: Alert specification Start time End time Silence state Number and list of firing alerts To obtain information about alerting rules: From the Administrator perspective of the Red Hat OpenShift Service on AWS web console, go to the Observe Alerting Alerting rules page. Optional: Filter alerting rules by state, severity, and source by selecting filters in the Filter list. Optional: Sort the alerting rules by clicking one or more of the Name , Severity , Alert state , and Source column headers. Select the name of an alerting rule to view its Alerting rule details page. 
The page provides the following details about the alerting rule: Alerting rule name, severity, and description. The expression that defines the condition for firing the alert. The time for which the condition should be true for an alert to fire. A graph for each alert governed by the alerting rule, showing the value with which the alert is firing. A table of all alerts governed by the alerting rule. 7.5. Getting information about alerts, silences, and alerting rules from the Developer perspective The Alerting UI provides detailed information about alerts and their governing alerting rules and silences. Prerequisites You have access to the cluster as a user with view permissions for the project that you are viewing alerts for. Procedure To obtain information about alerts, silences, and alerting rules: From the Developer perspective of the Red Hat OpenShift Service on AWS web console, go to the Observe <project_name> Alerts page. View details for an alert, silence, or an alerting rule: Alert details can be viewed by clicking a greater than symbol ( > ) to an alert name and then selecting the alert from the list. Silence details can be viewed by clicking a silence in the Silenced by section of the Alert details page. The Silence details page includes the following information: Alert specification Start time End time Silence state Number and list of firing alerts Alerting rule details can be viewed by clicking the menu to an alert in the Alerts page and then clicking View Alerting Rule . Note Only alerts, silences, and alerting rules relating to the selected project are displayed in the Developer perspective. Additional resources See the Cluster Monitoring Operator runbooks to help diagnose and resolve issues that trigger specific Red Hat OpenShift Service on AWS monitoring alerts. 7.6. Managing silences You can create a silence for an alert in the Red Hat OpenShift Service on AWS web console in both the Administrator and Developer perspectives. After you create a silence, you will not receive notifications about an alert when the alert fires. Creating silences is useful in scenarios where you have received an initial alert notification, and you do not want to receive further notifications during the time in which you resolve the underlying issue causing the alert to fire. When creating a silence, you must specify whether it becomes active immediately or at a later time. You must also set a duration period after which the silence expires. After you create silences, you can view, edit, and expire them. Note When you create silences, they are replicated across Alertmanager pods. However, if you do not configure persistent storage for Alertmanager, silences might be lost. This can happen, for example, if all Alertmanager pods restart at the same time. Additional resources Configuring persistent storage 7.6.1. Silencing alerts from the Administrator perspective You can silence a specific alert or silence alerts that match a specification that you define. Prerequisites You have access to the cluster as a user with the cluster-admin role. Procedure To silence a specific alert: From the Administrator perspective of the Red Hat OpenShift Service on AWS web console, go to Observe Alerting Alerts . For the alert that you want to silence, click and select Silence alert to open the Silence alert page with a default configuration for the chosen alert. Optional: Change the default configuration details for the silence. Note You must add a comment before saving a silence. To save the silence, click Silence . 
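If you prefer the command line, an equivalent silence can also be created with Alertmanager's amtool from inside an Alertmanager pod. The following is an illustrative sketch only: the pod name, alert name, comment, and duration are assumptions rather than values taken from this document.

$ oc -n openshift-monitoring exec -it alertmanager-main-0 -c alertmanager -- \
    amtool silence add alertname=ExampleAlert \
    --alertmanager.url=http://localhost:9093 \
    --comment="Investigating the underlying issue" \
    --duration=2h

As in the web console, the matchers (here alertname=ExampleAlert) determine which alerts the silence mutes, and a comment is expected just as it is when saving a silence in the console.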
To silence a set of alerts: From the Administrator perspective of the Red Hat OpenShift Service on AWS web console, go to Observe Alerting Silences . Click Create silence . On the Create silence page, set the schedule, duration, and label details for an alert. Note You must add a comment before saving a silence. To create silences for alerts that match the labels that you entered, click Silence . 7.6.2. Silencing alerts from the Developer perspective You can silence a specific alert or silence alerts that match a specification that you define. Prerequisites If you are a cluster administrator, you have access to the cluster as a user with the dedicated-admin role. If you are a non-administrator user, you have access to the cluster as a user with the following user roles: The cluster-monitoring-view cluster role, which allows you to access Alertmanager. The monitoring-alertmanager-edit role, which permits you to create and silence alerts in the Administrator perspective in the web console. The monitoring-rules-edit cluster role, which permits you to create and silence alerts in the Developer perspective in the web console. Procedure To silence a specific alert: From the Developer perspective of the Red Hat OpenShift Service on AWS web console, go to Observe and go to the Alerts tab. Select the project that you want to silence an alert for from the Project: list. If necessary, expand the details for the alert by clicking a greater than symbol ( > ) to the alert name. Click the alert message in the expanded view to open the Alert details page for the alert. Click Silence alert to open the Silence alert page with a default configuration for the alert. Optional: Change the default configuration details for the silence. Note You must add a comment before saving a silence. To save the silence, click Silence . To silence a set of alerts: From the Developer perspective of the Red Hat OpenShift Service on AWS web console, go to Observe and go to the Silences tab. Select the project that you want to silence alerts for from the Project: list. Click Create silence . On the Create silence page, set the duration and label details for an alert. Note You must add a comment before saving a silence. To create silences for alerts that match the labels that you entered, click Silence . 7.6.3. Editing silences from the Administrator perspective You can edit a silence, which expires the existing silence and creates a new one with the changed configuration. Prerequisites If you are a cluster administrator, you have access to the cluster as a user with the dedicated-admin role. If you are a non-administrator user, you have access to the cluster as a user with the following user roles: The cluster-monitoring-view cluster role, which allows you to access Alertmanager. The monitoring-alertmanager-edit role, which permits you to create and silence alerts in the Administrator perspective in the web console. Procedure From the Administrator perspective of the Red Hat OpenShift Service on AWS web console, go to Observe Alerting Silences . For the silence you want to modify, click and select Edit silence . Alternatively, you can click Actions and select Edit silence on the Silence details page for a silence. On the Edit silence page, make changes and click Silence . Doing so expires the existing silence and creates one with the updated configuration. 7.6.4. Editing silences from the Developer perspective You can edit a silence, which expires the existing silence and creates a new one with the changed configuration. 
Prerequisites If you are a cluster administrator, you have access to the cluster as a user with the dedicated-admin role. If you are a non-administrator user, you have access to the cluster as a user with the following user roles: The cluster-monitoring-view cluster role, which allows you to access Alertmanager. The monitoring-rules-edit cluster role, which permits you to create and silence alerts in the Developer perspective in the web console. Procedure From the Developer perspective of the Red Hat OpenShift Service on AWS web console, go to Observe and go to the Silences tab. Select the project that you want to edit silences for from the Project: list. For the silence you want to modify, click and select Edit silence . Alternatively, you can click Actions and select Edit silence on the Silence details page for a silence. On the Edit silence page, make changes and click Silence . Doing so expires the existing silence and creates one with the updated configuration. 7.6.5. Expiring silences from the Administrator perspective You can expire a single silence or multiple silences. Expiring a silence deactivates it permanently. Note You cannot delete expired, silenced alerts. Expired silences older than 120 hours are garbage collected. Prerequisites If you are a cluster administrator, you have access to the cluster as a user with the dedicated-admin role. If you are a non-administrator user, you have access to the cluster as a user with the following user roles: The cluster-monitoring-view cluster role, which allows you to access Alertmanager. The monitoring-alertmanager-edit role, which permits you to create and silence alerts in the Administrator perspective in the web console. Procedure Go to Observe Alerting Silences . For the silence or silences you want to expire, select the checkbox in the corresponding row. Click Expire 1 silence to expire a single selected silence or Expire <n> silences to expire multiple selected silences, where <n> is the number of silences you selected. Alternatively, to expire a single silence you can click Actions and select Expire silence on the Silence details page for a silence. 7.6.6. Expiring silences from the Developer perspective You can expire a single silence or multiple silences. Expiring a silence deactivates it permanently. Note You cannot delete expired, silenced alerts. Expired silences older than 120 hours are garbage collected. Prerequisites If you are a cluster administrator, you have access to the cluster as a user with the dedicated-admin role. If you are a non-administrator user, you have access to the cluster as a user with the following user roles: The cluster-monitoring-view cluster role, which allows you to access Alertmanager. The monitoring-rules-edit cluster role, which permits you to create and silence alerts in the Developer perspective in the web console. Procedure From the Developer perspective of the Red Hat OpenShift Service on AWS web console, go to Observe and go to the Silences tab. Select the project that you want to expire a silence for from the Project: list. For the silence or silences you want to expire, select the checkbox in the corresponding row. Click Expire 1 silence to expire a single selected silence or Expire <n> silences to expire multiple selected silences, where <n> is the number of silences you selected. Alternatively, to expire a single silence you can click Actions and select Expire silence on the Silence details page for a silence. 7.7. 
Creating alerting rules for user-defined projects In Red Hat OpenShift Service on AWS, you can create alerting rules for user-defined projects. Those alerting rules will trigger alerts based on the values of the chosen metrics. If you create alerting rules for a user-defined project, consider the following key behaviors and important limitations when you define the new rules: A user-defined alerting rule can include metrics exposed by its own project in addition to the default metrics from core platform monitoring. You cannot include metrics from another user-defined project. For example, an alerting rule for the ns1 user-defined project can use metrics exposed by the ns1 project in addition to core platform metrics, such as CPU and memory metrics. However, the rule cannot include metrics from a different ns2 user-defined project. By default, when you create an alerting rule, the namespace label is enforced on it even if a rule with the same name exists in another project. To create alerting rules that are not bound to their project of origin, see "Creating cross-project alerting rules for user-defined projects". To reduce latency and to minimize the load on core platform monitoring components, you can add the openshift.io/prometheus-rule-evaluation-scope: leaf-prometheus label to a rule. This label forces only the Prometheus instance deployed in the openshift-user-workload-monitoring project to evaluate the alerting rule and prevents the Thanos Ruler instance from doing so. Important If an alerting rule has this label, your alerting rule can use only those metrics exposed by your user-defined project. Alerting rules you create based on default platform metrics might not trigger alerts. 7.7.1. Optimizing alerting for user-defined projects You can optimize alerting for your own projects by considering the following recommendations when creating alerting rules: Minimize the number of alerting rules that you create for your project . Create alerting rules that notify you of conditions that impact you. It is more difficult to notice relevant alerts if you generate many alerts for conditions that do not impact you. Create alerting rules for symptoms instead of causes . Create alerting rules that notify you of conditions regardless of the underlying cause. The cause can then be investigated. You will need many more alerting rules if each relates only to a specific cause. Some causes are then likely to be missed. Plan before you write your alerting rules . Determine what symptoms are important to you and what actions you want to take if they occur. Then build an alerting rule for each symptom. Provide clear alert messaging . State the symptom and recommended actions in the alert message. Include severity levels in your alerting rules . The severity of an alert depends on how you need to react if the reported symptom occurs. For example, a critical alert should be triggered if a symptom requires immediate attention by an individual or a critical response team. 7.7.2. Creating alerting rules for user-defined projects You can create alerting rules for user-defined projects. Those alerting rules will trigger alerts based on the values of the chosen metrics. Note To help users understand the impact and cause of the alert, ensure that your alerting rule contains an alert message and severity value. Prerequisites You have enabled monitoring for user-defined projects. 
You are logged in as a cluster administrator or as a user that has the monitoring-rules-edit cluster role for the project where you want to create an alerting rule. You have installed the OpenShift CLI ( oc ). Procedure Create a YAML file for alerting rules. In this example, it is called example-app-alerting-rule.yaml . Add an alerting rule configuration to the YAML file. The following example creates a new alerting rule named example-alert . The alerting rule fires an alert when the version metric exposed by the sample service becomes 0 : apiVersion: monitoring.coreos.com/v1 kind: PrometheusRule metadata: name: example-alert namespace: ns1 spec: groups: - name: example rules: - alert: VersionAlert 1 for: 1m 2 expr: version{job="prometheus-example-app"} == 0 3 labels: severity: warning 4 annotations: message: This is an example alert. 5 1 The name of the alerting rule you want to create. 2 The duration for which the condition should be true before an alert is fired. 3 The PromQL query expression that defines the new rule. 4 The severity that alerting rule assigns to the alert. 5 The message associated with the alert. Apply the configuration file to the cluster: USD oc apply -f example-app-alerting-rule.yaml 7.7.3. Creating cross-project alerting rules for user-defined projects You can create alerting rules for user-defined projects that are not bound to their project of origin by configuring a project in the user-workload-monitoring-config config map. This allows you to create generic alerting rules that get applied to multiple user-defined projects instead of having individual PrometheusRule objects in each user project. Prerequisites You have access to the cluster as a user with the dedicated-admin role. Note If you are a non-administrator user, you can still create cross-project alerting rules if you have the monitoring-rules-edit cluster role for the project where you want to create an alerting rule. However, that project needs to be configured in the user-workload-monitoring-config config map under the namespacesWithoutLabelEnforcement property, which can be done only by cluster administrators. The user-workload-monitoring-config ConfigMap object exists. This object is created by default when the cluster is created. You have installed the OpenShift CLI ( oc ). Procedure Edit the user-workload-monitoring-config config map in the openshift-user-workload-monitoring project: USD oc -n openshift-user-workload-monitoring edit configmap user-workload-monitoring-config Configure projects in which you want to create alerting rules that are not bound to a specific project: apiVersion: v1 kind: ConfigMap metadata: name: user-workload-monitoring-config namespace: openshift-user-workload-monitoring data: config.yaml: | namespacesWithoutLabelEnforcement: [ <namespace> ] 1 # ... 1 Specify one or more projects in which you want to create cross-project alerting rules. Prometheus and Thanos Ruler for user-defined monitoring do not enforce the namespace label in PrometheusRule objects created in these projects. Create a YAML file for alerting rules. In this example, it is called example-cross-project-alerting-rule.yaml . Add an alerting rule configuration to the YAML file. The following example creates a new cross-project alerting rule called example-security . 
The alerting rule fires when a user project does not enforce the restricted pod security policy: Example cross-project alerting rule apiVersion: monitoring.coreos.com/v1 kind: PrometheusRule metadata: name: example-security namespace: ns1 1 spec: groups: - name: pod-security-policy rules: - alert: "ProjectNotEnforcingRestrictedPolicy" 2 for: 5m 3 expr: kube_namespace_labels{namespace!~"(openshift|kube).*|default",label_pod_security_kubernetes_io_enforce!="restricted"} 4 annotations: message: "Restricted policy not enforced. Project {{ USDlabels.namespace }} does not enforce the restricted pod security policy." 5 labels: severity: warning 6 1 Ensure that you specify the project that you defined in the namespacesWithoutLabelEnforcement field. 2 The name of the alerting rule you want to create. 3 The duration for which the condition should be true before an alert is fired. 4 The PromQL query expression that defines the new rule. 5 The message associated with the alert. 6 The severity that alerting rule assigns to the alert. Important Ensure that you create a specific cross-project alerting rule in only one of the projects that you specified in the namespacesWithoutLabelEnforcement field. If you create the same cross-project alerting rule in multiple projects, it results in repeated alerts. Apply the configuration file to the cluster: USD oc apply -f example-cross-project-alerting-rule.yaml Additional resources Prometheus alerting documentation Monitoring overview 7.8. Managing alerting rules for user-defined projects In Red Hat OpenShift Service on AWS, you can view, edit, and remove alerting rules in user-defined projects. Important Managing alerting rules for user-defined projects is only available in Red Hat OpenShift Service on AWS version 4.11 and later. Alerting rule considerations The default alerting rules are used specifically for the Red Hat OpenShift Service on AWS cluster. Some alerting rules intentionally have identical names. They send alerts about the same event with different thresholds, different severity, or both. Inhibition rules prevent notifications for lower severity alerts that are firing when a higher severity alert is also firing. 7.8.1. Accessing alerting rules for user-defined projects To list alerting rules for a user-defined project, you must have been assigned the monitoring-rules-view cluster role for the project. Prerequisites You have enabled monitoring for user-defined projects. You are logged in as a user that has the monitoring-rules-view cluster role for your project. You have installed the OpenShift CLI ( oc ). Procedure To list alerting rules in <project> : USD oc -n <project> get prometheusrule To list the configuration of an alerting rule, run the following: USD oc -n <project> get prometheusrule <rule> -o yaml 7.8.2. Listing alerting rules for all projects in a single view As a dedicated-admin , you can list alerting rules for core Red Hat OpenShift Service on AWS and user-defined projects together in a single view. Prerequisites You have access to the cluster as a user with the dedicated-admin role. You have installed the OpenShift CLI ( oc ). Procedure From the Administrator perspective of the Red Hat OpenShift Service on AWS web console, go to Observe Alerting Alerting rules . Select the Platform and User sources in the Filter drop-down menu. Note The Platform source is selected by default. 7.8.3. Removing alerting rules for user-defined projects You can remove alerting rules for user-defined projects. 
Prerequisites You have enabled monitoring for user-defined projects. You are logged in as a cluster administrator or as a user that has the monitoring-rules-edit cluster role for the project where you want to create an alerting rule. You have installed the OpenShift CLI ( oc ). Procedure To remove rule <foo> in <namespace> , run the following: USD oc -n <namespace> delete prometheusrule <foo> 7.8.4. Disabling cross-project alerting rules for user-defined projects Creating cross-project alerting rules for user-defined projects is enabled by default. Cluster administrators can disable the capability in the cluster-monitoring-config config map for the following reasons: To prevent user-defined monitoring from overloading the cluster monitoring stack. To prevent buggy alerting rules from being applied to the cluster without having to identify the rule that causes the issue. Prerequisites You have access to the cluster as a user with the dedicated-admin role. You have installed the OpenShift CLI ( oc ). Procedure Edit the cluster-monitoring-config config map in the openshift-monitoring project: USD oc -n openshift-monitoring edit configmap cluster-monitoring-config In the cluster-monitoring-config config map, disable the option to create cross-project alerting rules by setting the rulesWithoutLabelEnforcementAllowed value under data/config.yaml/userWorkload to false : kind: ConfigMap apiVersion: v1 metadata: name: cluster-monitoring-config namespace: openshift-monitoring data: config.yaml: | userWorkload: rulesWithoutLabelEnforcementAllowed: false # ... Save the file to apply the changes. Additional resources Alertmanager documentation 7.9. Sending notifications to external systems In Red Hat OpenShift Service on AWS 4, firing alerts can be viewed in the Alerting UI. Alerts are not configured by default to be sent to any notification systems. You can configure Red Hat OpenShift Service on AWS to send alerts to the following receiver types: PagerDuty Webhook Email Slack Microsoft Teams Routing alerts to receivers enables you to send timely notifications to the appropriate teams when failures occur. For example, critical alerts require immediate attention and are typically paged to an individual or a critical response team. Alerts that provide non-critical warning notifications might instead be routed to a ticketing system for non-immediate review. Checking that alerting is operational by using the watchdog alert Red Hat OpenShift Service on AWS monitoring includes a watchdog alert that fires continuously. Alertmanager repeatedly sends watchdog alert notifications to configured notification providers. The provider is usually configured to notify an administrator when it stops receiving the watchdog alert. This mechanism helps you quickly identify any communication issues between Alertmanager and the notification provider. 7.9.1. Configuring different alert receivers for default platform alerts and user-defined alerts You can configure different alert receivers for default platform alerts and user-defined alerts to ensure the following results: All default platform alerts are sent to a receiver owned by the team in charge of these alerts. All user-defined alerts are sent to another receiver so that the team can focus only on platform alerts. You can achieve this by using the openshift_io_alert_source="platform" label that is added by the Cluster Monitoring Operator to all platform alerts: Use the openshift_io_alert_source="platform" matcher to match default platform alerts. 
Use the openshift_io_alert_source!="platform" or 'openshift_io_alert_source=""' matcher to match user-defined alerts. Note This configuration does not apply if you have enabled a separate instance of Alertmanager dedicated to user-defined alerts. 7.9.2. Configuring alert routing for user-defined projects If you are a non-administrator user who has been given the alert-routing-edit cluster role, you can create or edit alert routing for user-defined projects. Prerequisites Alert routing has been enabled for user-defined projects. You are logged in as a user that has the alert-routing-edit cluster role for the project for which you want to create alert routing. You have installed the OpenShift CLI ( oc ). Procedure Create a YAML file for alert routing. The example in this procedure uses a file called example-app-alert-routing.yaml . Add an AlertmanagerConfig YAML definition to the file. For example: apiVersion: monitoring.coreos.com/v1beta1 kind: AlertmanagerConfig metadata: name: example-routing namespace: ns1 spec: route: receiver: default groupBy: [job] receivers: - name: default webhookConfigs: - url: https://example.org/post Save the file. Apply the resource to the cluster: USD oc apply -f example-app-alert-routing.yaml The configuration is automatically applied to the Alertmanager pods. 7.10. Configuring Alertmanager to send notifications You can configure Alertmanager to send notifications by editing the alertmanager-user-workload secret for user-defined alerts. Note All features of a supported version of upstream Alertmanager are also supported in an OpenShift Alertmanager configuration. To check all the configuration options of a supported version of upstream Alertmanager, see Alertmanager configuration . 7.10.1. Configuring alert routing for user-defined projects with the Alertmanager secret If you have enabled a separate instance of Alertmanager that is dedicated to user-defined alert routing, you can customize where and how the instance sends notifications by editing the alertmanager-user-workload secret in the openshift-user-workload-monitoring namespace. Note All features of a supported version of upstream Alertmanager are also supported in an Red Hat OpenShift Service on AWS Alertmanager configuration. To check all the configuration options of a supported version of upstream Alertmanager, see Alertmanager configuration (Prometheus documentation). Prerequisites You have access to the cluster as a user with the dedicated-admin role. You have installed the OpenShift CLI ( oc ). Procedure Print the currently active Alertmanager configuration into the file alertmanager.yaml : USD oc -n openshift-user-workload-monitoring get secret alertmanager-user-workload --template='{{ index .data "alertmanager.yaml" }}' | base64 --decode > alertmanager.yaml Edit the configuration in alertmanager.yaml : global: http_config: proxy_from_environment: true 1 route: receiver: Default group_by: - name: Default routes: - matchers: - "service = prometheus-example-monitor" 2 receiver: <receiver> 3 receivers: - name: Default - name: <receiver> <receiver_configuration> 4 1 If you configured an HTTP cluster-wide proxy, set the proxy_from_environment parameter to true to enable proxying for all alert receivers. 2 Specify labels to match your alerts. This example targets all alerts that have the service="prometheus-example-monitor" label. 3 Specify the name of the receiver to use for the alerts group. 4 Specify the receiver configuration. 
Apply the new configuration in the file: USD oc -n openshift-user-workload-monitoring create secret generic alertmanager-user-workload --from-file=alertmanager.yaml --dry-run=client -o=yaml | oc -n openshift-user-workload-monitoring replace secret --filename=- 7.11. Additional resources PagerDuty official site PagerDuty Prometheus Integration Guide Support version matrix for monitoring components
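As a closing illustration of the receiver split described in section 7.9.1, the following Alertmanager routing sketch sends default platform alerts and user-defined alerts to different receivers by matching on the openshift_io_alert_source label. The receiver names and webhook URLs are placeholders, not values from this document, and the sketch assumes the matcher syntax of a recent upstream Alertmanager release.

route:
  receiver: Default
  routes:
  - matchers:
    - 'openshift_io_alert_source = "platform"'
    receiver: platform-team
  - matchers:
    - 'openshift_io_alert_source != "platform"'
    receiver: application-teams
receivers:
- name: Default
- name: platform-team
  webhook_configs:
  - url: https://platform-oncall.example.com/alerts
- name: application-teams
  webhook_configs:
  - url: https://app-teams.example.com/alerts

This follows the same structure as the alertmanager.yaml example in section 7.10.1; only the route matchers and receiver definitions differ.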
[ "apiVersion: monitoring.coreos.com/v1 kind: PrometheusRule metadata: name: example-alert namespace: ns1 spec: groups: - name: example rules: - alert: VersionAlert 1 for: 1m 2 expr: version{job=\"prometheus-example-app\"} == 0 3 labels: severity: warning 4 annotations: message: This is an example alert. 5", "oc apply -f example-app-alerting-rule.yaml", "oc -n openshift-user-workload-monitoring edit configmap user-workload-monitoring-config", "apiVersion: v1 kind: ConfigMap metadata: name: user-workload-monitoring-config namespace: openshift-user-workload-monitoring data: config.yaml: | namespacesWithoutLabelEnforcement: [ <namespace> ] 1 #", "apiVersion: monitoring.coreos.com/v1 kind: PrometheusRule metadata: name: example-security namespace: ns1 1 spec: groups: - name: pod-security-policy rules: - alert: \"ProjectNotEnforcingRestrictedPolicy\" 2 for: 5m 3 expr: kube_namespace_labels{namespace!~\"(openshift|kube).*|default\",label_pod_security_kubernetes_io_enforce!=\"restricted\"} 4 annotations: message: \"Restricted policy not enforced. Project {{ USDlabels.namespace }} does not enforce the restricted pod security policy.\" 5 labels: severity: warning 6", "oc apply -f example-cross-project-alerting-rule.yaml", "oc -n <project> get prometheusrule", "oc -n <project> get prometheusrule <rule> -o yaml", "oc -n <namespace> delete prometheusrule <foo>", "oc -n openshift-monitoring edit configmap cluster-monitoring-config", "kind: ConfigMap apiVersion: v1 metadata: name: cluster-monitoring-config namespace: openshift-monitoring data: config.yaml: | userWorkload: rulesWithoutLabelEnforcementAllowed: false #", "apiVersion: monitoring.coreos.com/v1beta1 kind: AlertmanagerConfig metadata: name: example-routing namespace: ns1 spec: route: receiver: default groupBy: [job] receivers: - name: default webhookConfigs: - url: https://example.org/post", "oc apply -f example-app-alert-routing.yaml", "oc -n openshift-user-workload-monitoring get secret alertmanager-user-workload --template='{{ index .data \"alertmanager.yaml\" }}' | base64 --decode > alertmanager.yaml", "global: http_config: proxy_from_environment: true 1 route: receiver: Default group_by: - name: Default routes: - matchers: - \"service = prometheus-example-monitor\" 2 receiver: <receiver> 3 receivers: - name: Default - name: <receiver> <receiver_configuration> 4", "oc -n openshift-user-workload-monitoring create secret generic alertmanager-user-workload --from-file=alertmanager.yaml --dry-run=client -o=yaml | oc -n openshift-user-workload-monitoring replace secret --filename=-" ]
https://docs.redhat.com/en/documentation/red_hat_openshift_service_on_aws/4/html/monitoring/managing-alerts
Chapter 5. Creating Multus networks
Chapter 5. Creating Multus networks OpenShift Container Platform uses the Multus CNI plug-in to allow chaining of CNI plug-ins. You can configure your default pod network during cluster installation. The default network handles all ordinary network traffic for the cluster. You can define an additional network based on the available CNI plug-ins and attach one or more of these networks to your pods. To attach additional network interfaces to a pod, you must create configurations that define how the interfaces are attached. You specify each interface by using a NetworkAttachmentDefinition (NAD) custom resource (CR). A CNI configuration inside each NetworkAttachmentDefinition defines how that interface is created. OpenShift Data Foundation uses the CNI plug-in called macvlan. Creating a macvlan-based additional network allows pods on a host to communicate with other hosts and pods on those hosts using a physical network interface. Each pod that is attached to a macvlan-based additional network is provided a unique MAC address. 5.1. Creating network attachment definitions To utilize Multus, an already working cluster with the correct networking configuration is required; see Requirements for Multus configuration . You can select the newly created NetworkAttachmentDefinition (NAD) during the Storage Cluster installation, which is why you must create the NAD before you create the Storage Cluster. Note Network attachment definitions can only use the whereabouts IP address management (IPAM) and must specify the range field. ipRanges and plugin chaining are not supported. As detailed in the Planning Guide, the Multus networks you create depend on the number of available network interfaces you have for OpenShift Data Foundation traffic. It is possible to separate all of the storage traffic onto one of the two interfaces (one interface used for default OpenShift SDN) or to further segregate storage traffic into client storage traffic (public) and storage replication traffic (private or cluster). The following is an example NetworkAttachmentDefinition for all the storage traffic, public and cluster, on the same interface. It requires one additional interface on all schedulable nodes (OpenShift default SDN on separate network interface): Note All network interface names must be the same on all the nodes attached to the Multus network (that is, ens2 for ocs-public-cluster ). The following is an example NetworkAttachmentDefinition for storage traffic on separate Multus networks: public, for client storage traffic, and cluster, for replication traffic. It requires two additional interfaces on OpenShift nodes hosting object storage device (OSD) pods and one additional interface on all other schedulable nodes (OpenShift default SDN on separate network interface): Example NetworkAttachmentDefinition : Note All network interface names must be the same on all the nodes attached to the Multus networks (that is, ens2 for ocs-public , and ens3 for ocs-cluster ).
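Once a NetworkAttachmentDefinition exists, a pod can request the additional interface through the k8s.v1.cni.cncf.io/networks annotation, which is the general Multus attachment mechanism. The following pod definition is an illustrative sketch only; the pod name, image, and the ocs-public NAD name are assumptions based on the examples referenced above.

apiVersion: v1
kind: Pod
metadata:
  name: multus-example-pod
  namespace: openshift-storage
  annotations:
    k8s.v1.cni.cncf.io/networks: ocs-public
spec:
  containers:
  - name: example
    image: registry.access.redhat.com/ubi8/ubi-minimal
    command: ["sleep", "infinity"]

In an OpenShift Data Foundation deployment the storage pods are attached to these networks by the operator when the NADs are selected during Storage Cluster installation, so a manual annotation like this is typically only needed for your own test pods.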
[ "apiVersion: \"k8s.cni.cncf.io/v1\" kind: NetworkAttachmentDefinition metadata: name: ceph-multus-net namespace: openshift-storage spec: config: '{ \"cniVersion\": \"0.3.1\", \"type\": \"macvlan\", \"master\": \"eth0\", \"mode\": \"bridge\", \"ipam\": { \"type\": \"whereabouts\", \"range\": \"192.168.200.0/24\", \"routes\": [ {\"dst\": \"NODE_IP_CIDR\"} ] } }'", "apiVersion: \"k8s.cni.cncf.io/v1\" kind: NetworkAttachmentDefinition metadata: name: ocs-public namespace: openshift-storage spec: config: '{ \"cniVersion\": \"0.3.1\", \"type\": \"macvlan\", \"master\": \"ens2\", \"mode\": \"bridge\", \"ipam\": { \"type\": \"whereabouts\", \"range\": \"192.168.1.0/24\" } }'", "apiVersion: \"k8s.cni.cncf.io/v1\" kind: NetworkAttachmentDefinition metadata: name: ocs-cluster namespace: openshift-storage spec: config: '{ \"cniVersion\": \"0.3.1\", \"type\": \"macvlan\", \"master\": \"ens3\", \"mode\": \"bridge\", \"ipam\": { \"type\": \"whereabouts\", \"range\": \"192.168.2.0/24\" } }'" ]
https://docs.redhat.com/en/documentation/red_hat_openshift_data_foundation/4.17/html/managing_and_allocating_storage_resources/creating-multus-networks_rhodf
2.7. DDL Commands
2.7. DDL Commands 2.7.1. DDL Commands JBoss Data Virtualization supports a subset of DDL to create/drop temporary tables and to manipulate procedure and view definitions at runtime. It is not currently possible to arbitrarily drop/create non-temporary metadata entries. See Section 11.1, "DDL Metadata" for DDL usage to define schemas within a VDB. Note A MetadataRepository must be configured to make a non-temporary metadata update persistent. See Runtime Metadata Updates in Red Hat JBoss Data Virtualization Development Guide: Server Development for more information. 2.7.2. Local and Global Temporary Tables Red Hat JBoss Data Virtualization supports creating temporary tables. Temporary tables are dynamically created, but are treated as any other physical table. 2.7.2.1. Local Temporary Tables Local temporary tables can be defined implicitly by referencing them in an INSERT statement or explicitly with a CREATE TABLE statement. Implicitly created temporary tables must have a name that starts with '#'. Creation syntax: Local temporary tables can be defined explicitly with a CREATE TABLE statement: Use the SERIAL data type to specify a NOT NULL and auto-incrementing INTEGER column. The starting value of a SERIAL column is 1. Local temporary tables can be defined implicitly by referencing them in an INSERT statement. Note If #name does not exist, it will be defined using the given column names and types from the value expressions. Note If #name does not exist, it will be defined using the target column names and the types from the query derived columns. If target columns are not supplied, the column names will match the derived column names from the query. Drop syntax: DROP TABLE name The following example is a series of statements that loads a temporary table with data from two sources and a manually inserted record, and then uses that temporary table in a subsequent query. 2.7.2.2. Global Temporary Tables You can create global temporary tables in Teiid Designer or through the metadata you supply at deploy time. Unlike local temporary tables, you cannot create them at runtime. Your global temporary tables share a common definition through a schema entry. However, a new instance of the temporary table is created in each session. The table is then dropped when the session ends. (There is no explicit drop support.) A common use for a global temporary table is to pass results into and out of procedures. If you use the SERIAL data type, then each session's instance of the global temporary table will have its own sequence. You must explicitly specify UPDATABLE if you want to update the temporary table. 2.7.2.3. Common Features Here are the features of global and local temporary tables: Primary Key Support All key columns must be comparable. If you use a primary key, it will create a clustered index that supports search improvements for comparison , in , like , and order by . You can use Null as a primary key value, but there must only be one row that has an all-null key. Transaction Support There is a READ_UNCOMMITTED transaction isolation level. There are no locking mechanisms available to support higher isolation levels and the result of a rollback may be inconsistent across multiple transactions. If concurrent transactions are not associated with the same local temporary table or session, then the transaction isolation level is effectively serializable. If you want full consistency with local temporary tables, then only use a connection with one transaction at a time.
This mode of operation is ensured by connection pooling that tracks connections by transaction. Limitations With the CREATE TABLE syntax only basic table definition (column name and type information) and an optional primary key are supported. For global temporary tables, additional metadata in the create statement is effectively ignored when creating the temporary table instance, but may still be utilized by planning similar to any other table entry. You can use ON COMMIT PRESERVE ROWS . No other ON COMMIT clause is supported. You cannot use the "drop behavior" option in the drop statement. Temporary tables are not fail-over safe. Non-inlined LOB values (XML, CLOB, BLOB) are tracked by reference rather than by value in a temporary table. If you insert LOB values from external sources in your temporary table, they may become unreadable when the associated statement or connection is closed. 2.7.3. Foreign Temporary Tables Unlike a local temporary table, a foreign temporary table is a reference to an actual source table that is created at runtime rather than during the metadata load. A foreign temporary table requires explicit creation syntax: Where the table creation body syntax is the same as a standard CREATE FOREIGN TABLE DDL statement (see Section 11.1, "DDL Metadata" ). In general, usage of DDL OPTION clauses may be required to properly access the source table, including setting the name in source, updatability, native types, and so on. The schema name must specify an existing schema/model in the VDB. The table will be accessed as if it is on that source; however, within JBoss Data Virtualization the temporary table will still be scoped the same as a non-foreign temporary table. This means that the foreign temporary table will not belong to a JBoss Data Virtualization schema and will be scoped to the session or procedure block where it is created. The DROP syntax for a foreign temporary table is the same as for a non-foreign temporary table. Neither a CREATE nor a corresponding DROP of a foreign temporary table issues a pushdown command; rather, this mechanism simply exposes a source table for use within JBoss Data Virtualization on a temporary basis. There are two usage scenarios for a FOREIGN TEMPORARY TABLE. The first is to dynamically access additional tables on the source. The other is to replace the usage of a JBoss Data Virtualization local temporary table for performance reasons. The usage pattern for the latter case would look like: Note the usage of the native procedure to pass source-specific CREATE DDL to the source. JBoss Data Virtualization does not currently attempt to push down a source creation of a temporary table based upon the CREATE statement. Some other mechanism, such as the native procedure shown above, must be used to first create the table. Also note the table is explicitly marked as updatable, since DDL-defined tables are not updatable by default. The source's handling of temporary tables must also be understood to make this work as intended. Sources that use the same GLOBAL table definition for all sessions while scoping the data to be session specific (such as Oracle), or sources that support session-scoped temporary tables (such as PostgreSQL), will work if accessed under a transaction. A transaction is necessary because: the source on commit behavior (most likely DELETE ROWS or DROP) will ensure clean-up.
Keep in mind that a JBoss Data Virtualization DROP does not issue a source command and is not guaranteed to occur (in some exception cases, loss of DB connectivity, hard shutdown, etc.). The source pool, when using track connections by transaction, will ensure that multiple uses of that source by JBoss Data Virtualization will use the same connection/session and thus the same temporary table and data. Note Since the ON COMMIT clause is not yet supported by JBoss Data Virtualization, it is important to consider that the source table ON COMMIT behavior will likely be different from the default, PRESERVE ROWS, for JBoss Data Virtualization local temporary tables. 2.7.4. Alter View Usage: Syntax Rules: The alter query expression may be prefixed with a cache hint for materialized view definitions. The hint will take effect the next time the materialized view table is loaded. 2.7.5. Alter Procedure Usage: Syntax Rules: The alter block should not include 'CREATE VIRTUAL PROCEDURE'. The alter block may be prefixed with a cache hint for cached procedures. 2.7.6. Create Trigger Usage: Syntax Rules: The target, name, must be an updatable view. An INSTEAD OF TRIGGER must not yet exist for the given event. Triggers are not yet true schema objects. They are scoped only to their view and have no name. Limitations: There is no corresponding DROP operation. See Section 2.7.7, "Alter Trigger" for enabling/disabling an existing trigger. 2.7.7. Alter Trigger Usage: Syntax Rules: The target, name, must be an updatable view. Triggers are not yet true schema objects. They are scoped only to their view and have no name. Update Procedures must already exist for the given trigger event. See Section 2.10.6, "Update Procedures" . Note If the default inherent update is chosen in Teiid Designer, any SQL associated with the update (shown in a greyed-out text box) is not part of the VDB and cannot be enabled with an ALTER TRIGGER statement.
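To make the ALTER VIEW and CREATE TRIGGER usage above concrete, here is a brief hypothetical sketch. The view name, columns, and trigger body are invented for illustration and are not part of this reference; they assume an updatable virtual view over Orders and Customers tables.

-- redefine an existing virtual view (hypothetical names)
ALTER VIEW CustomerOrders AS
  SELECT o.custid, o.total, c.name FROM Orders o JOIN Customers c ON o.custid = c.id;

-- add an INSTEAD OF INSERT trigger on the updatable view
CREATE TRIGGER ON CustomerOrders INSTEAD OF INSERT AS
FOR EACH ROW
BEGIN ATOMIC
  INSERT INTO Orders (custid, total) VALUES (NEW.custid, NEW.total);
END;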
[ "CREATE LOCAL TEMPORARY TABLE name (column type [NOT NULL], ... [PRIMARY KEY (column, ...)])", "INSERT INTO #name (column, ...) VALUES (value, ...)", "INSERT INTO #name [(column, ...)] select c1, c2 from t", "CREATE LOCAL TEMPORARY TABLE TEMP (a integer, b integer, c integer); INSERT * INTO temp FROM Src1; INSERT * INTO temp FROM Src2; INSERT INTO temp VALUES (1,2,3); SELECT a,b,c FROM Src3, temp WHERE Src3.a = temp.b;", "CREATE GLOBAL TEMPORARY TABLE name (column type [NOT NULL], ... [PRIMARY KEY (column, ...)]) OPTIONS (UPDATABLE 'true')", "CREATE FOREIGN TEMPORARY TABLE name ... ON schema", "//- create the source table call source.native(\"CREATE GLOBAL TEMPORARY TABLE name IF NOT EXISTS ON COMMIT DELETE ROWS\"); //- bring the table into JBoss Data Virtualization CREATE FOREIGN TEMPORARY TABLE name ... OPTIONS (UPDATABLE true) //- use the table //- forget the table DROP TABLE name", "ALTER VIEW name AS queryExpression", "ALTER PROCEDURE name AS block", "CREATE TRIGGER ON name INSTEAD OF INSERT|UPDATE|DELETE AS FOR EACH ROW block", "ALTER TRIGGER ON name INSTEAD OF INSERT|UPDATE|DELETE (AS FOR EACH ROW block) | (ENABLED|DISABLED)" ]
https://docs.redhat.com/en/documentation/red_hat_jboss_data_virtualization/6.4/html/development_guide_volume_3_reference_material/sect-DDL_Commands
function::ns_euid
function::ns_euid Name function::ns_euid - Returns the effective user ID of a target process as seen in a user namespace Synopsis Arguments None Description This function returns the effective user ID of the target process as seen in the target user namespace if provided, or the stap process namespace.
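A minimal SystemTap one-liner showing a typical call site for this function; the probe point (syscall.openat) and output format are chosen only for illustration and are not part of this reference.

stap -e 'probe syscall.openat { printf("%s ns_euid=%d\n", execname(), ns_euid()) }'

As described above, the returned ID is resolved in the target user namespace when one is provided to stap, and in the stap process namespace otherwise.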
[ "ns_euid:long()" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/systemtap_tapset_reference/api-ns-euid
function::u64_arg
function::u64_arg Name function::u64_arg - Return function argument as unsigned 64-bit value Synopsis Arguments n index of argument to return Description Return the unsigned 64-bit value of argument n, same as ulonglong_arg.
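As a sketch only (the probed kernel function and the argument position are assumptions, and register-based argument access depends on the architecture ABI), u64_arg can be used from a dwarfless kprobe like this:

# Hypothetical example: read the third argument (the byte count) passed to vfs_write.
stap -e 'probe kprobe.function("vfs_write") { printf("count = %d\n", u64_arg(3)) }'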
[ "u64_arg:long(n:long)" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/systemtap_tapset_reference/api-u64-arg
Part III. Using standalone perspectives in Business Central
Part III. Using standalone perspectives in Business Central As a business rules developer, you can embed standalone perspectives from Business Central in your web application and then use them to edit rules, processes, decision tables, and other assets. Prerequisites Business Central is deployed and is running on a web/application server. You are logged in to Business Central.
null
https://docs.redhat.com/en/documentation/red_hat_decision_manager/7.13/html/managing_red_hat_decision_manager_and_kie_server_settings/assembly-using-standalone-perspectives
Chapter 3. Understanding Windows container workloads
Chapter 3. Understanding Windows container workloads Red Hat OpenShift support for Windows Containers provides built-in support for running Microsoft Windows Server containers on OpenShift Container Platform. For those that administer heterogeneous environments with a mix of Linux and Windows workloads, OpenShift Container Platform allows you to deploy Windows workloads running on Windows Server containers while also providing traditional Linux workloads hosted on Red Hat Enterprise Linux CoreOS (RHCOS) or Red Hat Enterprise Linux (RHEL). Note Multi-tenancy for clusters that have Windows nodes is not supported. Hostile multi-tenant usage introduces security concerns in all Kubernetes environments. Additional security features like pod security policies , or more fine-grained role-based access control (RBAC) for nodes, make exploits more difficult. However, if you choose to run hostile multi-tenant workloads, a hypervisor is the only security option you should use. The security domain for Kubernetes encompasses the entire cluster, not an individual node. For these types of hostile multi-tenant workloads, you should use physically isolated clusters. Windows Server Containers provide resource isolation using a shared kernel but are not intended to be used in hostile multitenancy scenarios. Scenarios that involve hostile multitenancy should use Hyper-V Isolated Containers to strongly isolate tenants. 3.1. Windows Machine Config Operator prerequisites The following information details the supported cloud provider versions, Windows Server versions, and networking configurations for the Windows Machine Config Operator. See the vSphere documentation for any information that is relevant to only that platform. 3.1.1. Supported cloud providers based on OpenShift Container Platform and WMCO versions Cloud provider Supported OpenShift Container Platform version Supported WMCO version Amazon Web Services (AWS) 4.6+ WMCO 1.0+ Microsoft Azure 4.6+ WMCO 1.0+ VMware vSphere 4.7+ WMCO 2.0+ 3.1.2. Supported Windows Server versions The following table lists the supported Windows Server version based on the applicable cloud provider. Any unlisted Windows Server version is not supported and will cause errors. To prevent these errors, only use the appropriate version according to the cloud provider in use. Cloud provider Supported Windows Server version Amazon Web Services (AWS) Windows Server Long-Term Servicing Channel (LTSC): Windows Server 2019 Microsoft Azure Windows Server Long-Term Servicing Channel (LTSC): Windows Server 2019 VMware vSphere Windows Server Semi-Annual Channel (SAC): Windows Server 20H2 3.1.3. Supported networking Hybrid networking with OVN-Kubernetes is the only supported networking configuration. See the additional resources below for more information on this functionality. The following tables outline the type of networking configuration and Windows Server versions to use based on your cloud provider. You must specify the network configuration when you install the cluster. Be aware that OpenShift SDN networking is the default network for OpenShift Container Platform clusters. However, OpenShift SDN is not supported by WMCO. Table 3.1. Cloud provider networking support Cloud provider Supported networking Amazon Web Services (AWS) Hybrid networking with OVN-Kubernetes Microsoft Azure Hybrid networking with OVN-Kubernetes VMware vSphere Hybrid networking with OVN-Kubernetes with a custom VXLAN port Table 3.2. 
Hybrid OVN-Kubernetes Windows Server support Hybrid networking with OVN-Kubernetes Supported Windows Server version Default VXLAN port Windows Server Long-Term Servicing Channel (LTSC): Windows Server 2019 Custom VXLAN port Windows Server Semi-Annual Channel (SAC): Windows Server 20H2 3.1.4. Supported installation method The installer-provisioned infrastructure installation method is the only supported installation method. This is consistent across all supported cloud providers. User-provisioned infrastructure installation method is not supported. Additional resources See Configuring hybrid networking with OVN-Kubernetes 3.2. Windows workload management To run Windows workloads in your cluster, you must first install the Windows Machine Config Operator (WMCO). The WMCO is a Linux-based Operator that runs on Linux-based control plane and compute nodes. The WMCO orchestrates the process of deploying and managing Windows workloads on a cluster. Figure 3.1. WMCO design Before deploying Windows workloads, you must create a Windows compute node and have it join the cluster. The Windows node hosts the Windows workloads in a cluster, and can run alongside other Linux-based compute nodes. You can create a Windows compute node by creating a Windows machine set to host Windows Server compute machines. You must apply a Windows-specific label to the machine set that specifies a Windows OS image that has the Docker-formatted container runtime add-on enabled. Important Currently, the Docker-formatted container runtime is used in Windows nodes. Kubernetes is deprecating Docker as a container runtime; you can reference the Kubernetes documentation for more information in Docker deprecation . Containerd will be the new supported container runtime for Windows nodes in a future release of Kubernetes. The WMCO watches for machines with the Windows label. After a Windows machine set is detected and its respective machines are provisioned, the WMCO configures the underlying Windows virtual machine (VM) so that it can join the cluster as a compute node. Figure 3.2. Mixed Windows and Linux workloads The WMCO expects a predetermined secret in its namespace containing a private key that is used to interact with the Windows instance. WMCO checks for this secret during boot up time and creates a user data secret which you must reference in the Windows MachineSet object that you created. Then the WMCO populates the user data secret with a public key that corresponds to the private key. With this data in place, the cluster can connect to the Windows VM using an SSH connection. After the cluster establishes a connection with the Windows VM, you can manage the Windows node using similar practices as you would a Linux-based node. Note The OpenShift Container Platform web console provides most of the same monitoring capabilities for Windows nodes that are available for Linux nodes. However, the ability to monitor workload graphs for pods running on Windows nodes is not available at this time. Scheduling Windows workloads to a Windows node can be done with typical pod scheduling practices like taints, tolerations, and node selectors; alternatively, you can differentiate your Windows workloads from Linux workloads and other Windows-versioned workloads by using a RuntimeClass object. 3.3. Windows node services The following Windows-specific services are installed on each Windows node: Service Description kubelet Registers the Windows node and manages its status. 
Container Network Interface (CNI) plug-ins Exposes networking for Windows nodes. Windows Machine Config Bootstrapper (WMCB) Configures the kubelet and CNI plug-ins. Windows Exporter Exports Prometheus metrics from Windows nodes hybrid-overlay Creates the OpenShift Container Platform Host Network Service (HNS) . kube-proxy Maintains network rules on nodes allowing outside communication. 3.4. Known limitations Note the following limitations when working with Windows nodes managed by the WMCO (Windows nodes): The following OpenShift Container Platform features are not supported on Windows nodes: Red Hat OpenShift Developer CLI (odo) Image builds OpenShift Pipelines OpenShift Service Mesh OpenShift monitoring of user-defined projects OpenShift Serverless Horizontal Pod Autoscaling Vertical Pod Autoscaling The following Red Hat features are not supported on Windows nodes: Red Hat cost management Red Hat OpenShift Local Windows nodes do not support pulling container images from private registries. You can use images from public registries or pre-pull the images. Windows nodes do not support workloads created by using deployment configs. You can use a deployment or other method to deploy workloads. Windows nodes are not supported in clusters that use a cluster-wide proxy. This is because the WMCO is not able to route traffic through the proxy connection for the workloads. Windows nodes are not supported in clusters that are in a disconnected environment. Red Hat OpenShift support for Windows Containers supports only in-tree storage drivers for all cloud providers. Kubernetes has identified the following node feature limitations : Huge pages are not supported for Windows containers. Privileged containers are not supported for Windows containers. Pod termination grace periods require the containerd container runtime to be installed on the Windows node. Kubernetes has identified several API compatibility issues .
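Assuming the standard Kubernetes OS node label and the usual WMCO deployment name and namespace (the latter two are assumptions to verify in your cluster), you can inspect Windows nodes and the Operator from the CLI:

# List the Windows compute nodes that have joined the cluster.
oc get nodes -l kubernetes.io/os=windows

# List the machine sets, including any Windows machine set you created.
oc get machinesets -n openshift-machine-api

# Follow the WMCO logs (deployment and namespace names are assumed here).
oc logs -f deployment/windows-machine-config-operator -n openshift-windows-machine-config-operator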
null
https://docs.redhat.com/en/documentation/openshift_container_platform/4.7/html/windows_container_support_for_openshift/understanding-windows-container-workloads
Chapter 8. Triggering updates on image stream changes
Chapter 8. Triggering updates on image stream changes When an image stream tag is updated to point to a new image, OpenShift Container Platform can automatically take action to roll the new image out to resources that were using the old image. You configure this behavior in different ways depending on the type of resource that references the image stream tag. 8.1. OpenShift Container Platform resources OpenShift Container Platform deployment configurations and build configurations can be automatically triggered by changes to image stream tags. The triggered action can be run using the new value of the image referenced by the updated image stream tag. 8.2. Triggering Kubernetes resources Kubernetes resources do not have fields for triggering, unlike deployment and build configurations, which include as part of their API definition a set of fields for controlling triggers. Instead, you can use annotations in OpenShift Container Platform to request triggering. The annotation is defined as follows: apiVersion: v1 kind: Pod metadata: annotations: image.openshift.io/triggers: [ { "from": { "kind": "ImageStreamTag", 1 "name": "example:latest", 2 "namespace": "myapp" 3 }, "fieldPath": "spec.template.spec.containers[?(@.name==\"web\")].image", 4 "paused": false 5 }, # ... ] # ... 1 Required: kind is the resource to trigger from must be ImageStreamTag . 2 Required: name must be the name of an image stream tag. 3 Optional: namespace defaults to the namespace of the object. 4 Required: fieldPath is the JSON path to change. This field is limited and accepts only a JSON path expression that precisely matches a container by ID or index. For pods, the JSON path is spec.containers[?(@.name='web')].image . 5 Optional: paused is whether or not the trigger is paused, and the default value is false . Set paused to true to temporarily disable this trigger. When one of the core Kubernetes resources contains both a pod template and this annotation, OpenShift Container Platform attempts to update the object by using the image currently associated with the image stream tag that is referenced by trigger. The update is performed against the fieldPath specified. Examples of core Kubernetes resources that can contain both a pod template and annotation include: CronJobs Deployments StatefulSets DaemonSets Jobs ReplicationControllers Pods 8.3. Setting the image trigger on Kubernetes resources When adding an image trigger to deployments, you can use the oc set triggers command. For example, the sample command in this procedure adds an image change trigger to the deployment named example so that when the example:latest image stream tag is updated, the web container inside the deployment updates with the new image value. This command sets the correct image.openshift.io/triggers annotation on the deployment resource. Procedure Trigger Kubernetes resources by entering the oc set triggers command: USD oc set triggers deploy/example --from-image=example:latest -c web Example deployment with trigger annotation apiVersion: apps/v1 kind: Deployment metadata: annotations: image.openshift.io/triggers: '[{"from":{"kind":"ImageStreamTag","name":"example:latest"},"fieldPath":"spec.template.spec.containers[?(@.name==\"container\")].image"}]' # ... Unless the deployment is paused, this pod template update automatically causes a deployment to occur with the new image value.
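To confirm that the trigger was recorded on the deployment, you can list its triggers and print the annotation directly; the jsonpath query below is a generic sketch rather than text from this chapter:

# List the triggers currently set on the deployment.
oc set triggers deploy/example

# Print the raw trigger annotation (dots in the annotation key are escaped for jsonpath).
oc get deploy example -o jsonpath='{.metadata.annotations.image\.openshift\.io/triggers}'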
[ "apiVersion: v1 kind: Pod metadata: annotations: image.openshift.io/triggers: [ { \"from\": { \"kind\": \"ImageStreamTag\", 1 \"name\": \"example:latest\", 2 \"namespace\": \"myapp\" 3 }, \"fieldPath\": \"spec.template.spec.containers[?(@.name==\\\"web\\\")].image\", 4 \"paused\": false 5 }, # ]", "oc set triggers deploy/example --from-image=example:latest -c web", "apiVersion: apps/v1 kind: Deployment metadata: annotations: image.openshift.io/triggers: '[{\"from\":{\"kind\":\"ImageStreamTag\",\"name\":\"example:latest\"},\"fieldPath\":\"spec.template.spec.containers[?(@.name==\\\"container\\\")].image\"}]'" ]
https://docs.redhat.com/en/documentation/openshift_container_platform/4.12/html/images/triggering-updates-on-imagestream-changes
20.2.4. Using a Prepared FCP-attached SCSI Disk
20.2.4. Using a Prepared FCP-attached SCSI Disk Double-click Load . In the dialog box that follows, select SCSI as the Load type . As Load address fill in the device number of the FCP channel connected with the SCSI disk. As World wide port name fill in the WWPN of the storage system containing the disk as a 16-digit hexadecimal number. As Logical unit number fill in the LUN of the disk as a 16-digit hexadecimal number. As Boot program selector fill in the number corresponding to the zipl boot menu entry that you prepared for booting the Red Hat Enterprise Linux installer. Leave the Boot record logical block address as 0 and the Operating system specific load parameters empty. Click the OK button.
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/installation_guide/s1-s390-steps-boot-Installing_in_an_LPAR-SCSI
23.5. Synchronizing the Clocks
23.5. Synchronizing the Clocks The phc2sys program is used to synchronize the system clock to the PTP hardware clock ( PHC ) on the NIC. The phc2sys service is configured in the /etc/sysconfig/phc2sys configuration file. The default setting in the /etc/sysconfig/phc2sys file is as follows: OPTIONS="-a -r" The -a option causes phc2sys to read the clocks to be synchronized from the ptp4l application. It will follow changes in the PTP port states, adjusting the synchronization between the NIC hardware clocks accordingly. The system clock is not synchronized, unless the -r option is also specified. If you want the system clock to be eligible to become a time source, specify the -r option twice. After making changes to /etc/sysconfig/phc2sys , restart the phc2sys service from the command line by issuing a command as root : Under normal circumstances, use service commands to start, stop, and restart the phc2sys service. When you do not want to start phc2sys as a service, you can start it from the command line. For example, enter the following command as root : The -a option causes phc2sys to read the clocks to be synchronized from the ptp4l application. If you want the system clock to be eligible to become a time source, specify the -r option twice. Alternately, use the -s option to synchronize the system clock to a specific interface's PTP hardware clock. For example: The -w option waits for the running ptp4l application to synchronize the PTP clock and then retrieves the TAI to UTC offset from ptp4l . Normally, PTP operates in the International Atomic Time ( TAI ) timescale, while the system clock is kept in Coordinated Universal Time ( UTC ). The current offset between the TAI and UTC timescales is 36 seconds. The offset changes when leap seconds are inserted or deleted, which typically happens every few years. The -O option needs to be used to set this offset manually when the -w is not used, as follows: Once the phc2sys servo is in a locked state, the clock will not be stepped, unless the -S option is used. This means that the phc2sys program should be started after the ptp4l program has synchronized the PTP hardware clock. However, with -w , it is not necessary to start phc2sys after ptp4l as it will wait for it to synchronize the clock. The phc2sys program can also be started as a service by running: When running as a service, options are specified in the /etc/sysconfig/phc2sys file. More information on the different phc2sys options can be found in the phc2sys(8) man page. Note that the examples in this section assume the command is run on a slave system or slave port.
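For instance, a typical sequence on a slave host is to start ptp4l, then phc2sys, and then confirm from the system log that the reported offset is converging; the service ordering and log location below are assumptions for illustration:

# Start ptp4l first so that phc2sys (with the default -a option) can query it
# for the clocks to synchronize.
service ptp4l start
service phc2sys start

# Check the most recent phc2sys offset reports.
grep phc2sys /var/log/messages | tail -n 5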
[ "~]# service phc2sys restart", "~]# phc2sys -a -r", "~]# phc2sys -s eth3 -w", "~]# phc2sys -s eth3 -O -36", "~]# service phc2sys start" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/deployment_guide/s1-synchronizing_the_clocks
Chapter 112. AclRuleGroupResource schema reference
Chapter 112. AclRuleGroupResource schema reference Used in: AclRule The type property is a discriminator that distinguishes use of the AclRuleGroupResource type from AclRuleTopicResource , AclRuleClusterResource , AclRuleTransactionalIdResource . It must have the value group for the type AclRuleGroupResource . Property Property type Description type string Must be group . name string Name of resource for which given ACL rule applies. Can be combined with patternType field to use prefix pattern. patternType string (one of [prefix, literal]) Describes the pattern used in the resource field. The supported types are literal and prefix . With literal pattern type, the resource field will be used as a definition of a full topic name. With prefix pattern type, the resource name will be used only as a prefix. Default value is literal .
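As a non-authoritative sketch, an ACL of this type is typically embedded in a KafkaUser resource; the user name, cluster label, and group prefix are placeholders, and the surrounding KafkaUser fields are assumptions based on the v1beta2 API rather than part of this schema reference:

oc apply -f - <<'EOF'
apiVersion: kafka.strimzi.io/v1beta2
kind: KafkaUser
metadata:
  name: my-user                     # placeholder user name
  labels:
    strimzi.io/cluster: my-cluster  # placeholder cluster label
spec:
  authentication:
    type: tls
  authorization:
    type: simple
    acls:
      - resource:
          type: group               # AclRuleGroupResource
          name: my-group-           # treated as a prefix because of patternType
          patternType: prefix
        operations:
          - Read
        host: "*"
EOF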
null
https://docs.redhat.com/en/documentation/red_hat_streams_for_apache_kafka/2.7/html/streams_for_apache_kafka_api_reference/type-aclrulegroupresource-reference
Chapter 3. Installing with the Assisted Installer UI
Chapter 3. Installing with the Assisted Installer UI After you ensure the cluster nodes and network requirements are met, you can begin installing the cluster. 3.1. Pre-installation considerations Before installing OpenShift Container Platform with the Assisted Installer, you must consider the following configuration choices: Which base domain to use Which OpenShift Container Platform product version to install Whether to install a full cluster or single-node OpenShift Whether to use a DHCP server or a static network configuration Whether to use IPv4 or dual-stack networking Whether to install OpenShift Virtualization Whether to install Red Hat OpenShift Data Foundation Whether to install Multicluster Engine Whether to integrate with the platform when installing on vSphere or Nutanix Whether to install a mixed-cluster architecture Important If you intend to install any of the Operators, refer to the relevant hardware and storage requirements in Optional: Installing Operators . 3.2. Setting the cluster details To create a cluster with the Assisted Installer web user interface, use the following procedure. Procedure Log in to the Red Hat Hybrid Cloud Console . In the Red Hat OpenShift tile, click Scale your applications . In the menu, click Clusters . Click Create cluster . Click the Datacenter tab. Under Assisted Installer , click Create cluster . Enter a name for the cluster in the Cluster name field. Enter a base domain for the cluster in the Base domain field. All subdomains for the cluster will use this base domain. Note The base domain must be a valid DNS name. You must not have a wild card domain set up for the base domain. Select the version of OpenShift Container Platform to install. Important For IBM Power and IBM zSystems platforms, only OpenShift Container Platform version 4.13 and later is supported. For a mixed-architecture cluster installation, select OpenShift Container Platform version 4.12 or later, and use the -multi option. For instructions on installing a mixed-architecture cluster, see Additional resources . Optional: Select Install single node OpenShift (SNO) if you want to install OpenShift Container Platform on a single node. Note Currently, SNO is not supported on IBM zSystems and IBM Power platforms. Optional: The Assisted Installer already has the pull secret associated to your account. If you want to use a different pull secret, select Edit pull secret . Optional: If you are installing OpenShift Container Platform on a third-party platform, select the platform from the Integrate with external partner platforms list. Valid values are Nutanix , vSphere or Oracle Cloud Infrastructure . Assisted Installer defaults to having no platform integration. Note For details on each of the external partner integrations, see Additional Resources . Important Assisted Installer supports Oracle Cloud Infrastructure (OCI) integration from OpenShift Container Platform 4.14 and later. For OpenShift Container Platform 4.14, the OCI integration is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features - Scope of Support .
Optional: Assisted Installer defaults to using x86_64 CPU architecture. If you are installing OpenShift Container Platform on a different architecture select the respective architecture to use. Valid values are arm64 , ppc64le , and s390x . Keep in mind, some features are not available with arm64 , ppc64le , and s390x CPU architectures. Important For a mixed-architecture cluster installation, use the default x86_64 architecture. For instructions on installing a mixed-architecture cluster, see Additional resources . Optional: The Assisted Installer defaults to DHCP networking. If you are using a static IP configuration, bridges or bonds for the cluster nodes instead of DHCP reservations, select Static IP, bridges, and bonds . Note A Static IP configuration is not supported for OpenShift Container Platform installations on Oracle Cloud Infrastructure (OCI). Optional: If you want to enable encryption of the installation disks, under Enable encryption of installation disks you can select Control plane node, worker for single-node OpenShift. For multi-node clusters, you can select Control plane nodes to encrypt the control plane node installation disks and select Workers to encrypt worker node installation disks. Important You cannot change the base domain, the SNO checkbox, the CPU architecture, the host's network configuration, or the disk-encryption after installation begins. Additional resources Optional: Installing on Nutanix Optional: Installing on vSphere 3.3. Optional: Configuring static networks The Assisted Installer supports IPv4 networking with SDN and OVN, and supports IPv6 and dual stack networking with OVN only. The Assisted Installer supports configuring the network with static network interfaces with IP address/MAC address mapping. The Assisted Installer also supports configuring host network interfaces with the NMState library, a declarative network manager API for hosts. You can use NMState to deploy hosts with static IP addressing, bonds, VLANs and other advanced networking features. First, you must set network-wide configurations. Then, you must create a host-specific configuration for each host. Note For installations on IBM Z with z/VM, ensure that the z/VM nodes and vSwitches are properly configured for static networks and NMState. Also, the z/VM nodes must have a fixed MAC address assigned as the pool MAC addresses might cause issues with NMState. Procedure Select the internet protocol version. Valid options are IPv4 and Dual stack . If the cluster hosts are on a shared VLAN, enter the VLAN ID. Enter the network-wide IP addresses. If you selected Dual stack networking, you must enter both IPv4 and IPv6 addresses. Enter the cluster network's IP address range in CIDR notation. Enter the default gateway IP address. Enter the DNS server IP address. Enter the host-specific configuration. If you are only setting a static IP address that uses a single network interface, use the form view to enter the IP address and the MAC address for each host. If you use multiple interfaces, bonding, or other advanced networking features, use the YAML view and enter the desired network state for each host that uses NMState syntax. Then, add the MAC address and interface name for each host interface used in your network configuration. Additional resources NMState version 2.1.4 3.4. Configuring Operators The Assisted Installer can install with certain Operators configured. 
The Operators include: OpenShift Virtualization Multicluster Engine (MCE) for Kubernetes OpenShift Data Foundation Logical Volume Manager (LVM) Storage Important For a detailed description of each of the Operators, together with hardware requirements, storage considerations, interdependencies, and additional installation instructions, see Additional Resources . This step is optional. You can complete the installation without selecting an Operator. Procedure To install OpenShift Virtualization, select Install OpenShift Virtualization . To install Multicluster Engine (MCE), select Install multicluster engine . To install OpenShift Data Foundation, select Install OpenShift Data Foundation . To install Logical Volume Manager, select Install Logical Volume Manager . Click Next to proceed to the next step. Additional resources Installing the OpenShift Virtualization Operator Installing the Multicluster Engine (MCE) Operator Installing the OpenShift Data Foundation Operator 3.5. Adding hosts to the cluster You must add one or more hosts to the cluster. Adding a host to the cluster involves generating a discovery ISO. The discovery ISO runs Red Hat Enterprise Linux CoreOS (RHCOS) in-memory with an agent. Perform the following procedure for each host on the cluster. Procedure Click the Add hosts button and select the provisioning type. Select Minimal image file: Provision with virtual media to download a smaller image that will fetch the data needed to boot. The nodes must have virtual media capability. This is the recommended method for x86_64 and arm64 architectures. Select Full image file: Provision with physical media to download the larger full image. This is the recommended method for the ppc64le architecture and for the s390x architecture when installing with RHEL KVM. Select iPXE: Provision from your network server to boot the hosts using iPXE. Note If you install on RHEL KVM, in some circumstances, the VMs on the KVM host are not rebooted on first boot and need to be restarted manually. If you install OpenShift Container Platform on Oracle Cloud Infrastructure, select Minimal image file: Provision with virtual media only. Optional: If the cluster hosts are behind a firewall that requires the use of a proxy, select Configure cluster-wide proxy settings . Enter the username, password, IP address and port for the HTTP and HTTPS URLs of the proxy server. Optional: Add an SSH public key so that you can connect to the cluster nodes as the core user. Having a login to the cluster nodes can provide you with debugging information during the installation. Important Do not skip this procedure in production environments, where disaster recovery and debugging is required. If you do not have an existing SSH key pair on your local machine, follow the steps in Generating a key pair for cluster node SSH access . In the SSH public key field, click Browse to upload the id_rsa.pub file containing the SSH public key. Alternatively, drag and drop the file into the field from the file manager. To see the file in the file manager, select Show hidden files in the menu. Optional: If the cluster hosts are in a network with a re-encrypting man-in-the-middle (MITM) proxy, or if the cluster needs to trust certificates for other purposes such as container image registries, select Configure cluster-wide trusted certificates . Add additional certificates in X.509 format. Configure the discovery image if needed.
Optional: If you are installing on a platform and want to integrate with the platform, select Integrate with your virtualization platform . You must boot all hosts and ensure they appear in the host inventory. All the hosts must be on the same platform. Click Generate Discovery ISO or Generate Script File . Download the discovery ISO or iPXE script. Boot the host(s) with the discovery image or iPXE script. Additional resources Configuring the discovery image for additional details. Booting hosts with the discovery image for additional details. Red Hat Enterprise Linux 9 - Configuring and managing virtualization for additional details. How to configure a VIOS Media Repository/Virtual Media Library for additional details. Adding hosts on Nutanix with the UI Adding hosts on vSphere 3.6. Configuring hosts After booting the hosts with the discovery ISO, the hosts will appear in the table at the bottom of the page. You can optionally configure the hostname and role for each host. You can also delete a host if necessary. Procedure From the Options (...) menu for a host, select Change hostname . If necessary, enter a new name for the host and click Change . You must ensure that each host has a valid and unique hostname. Alternatively, from the Actions list, select Change hostname to rename multiple selected hosts. In the Change Hostname dialog, type the new name and include {{n}} to make each hostname unique. Then click Change . Note You can see the new names appearing in the Preview pane as you type. The name will be identical for all selected hosts, with the exception of a single-digit increment per host. From the Options (...) menu, you can select Delete host to delete a host. Click Delete to confirm the deletion. Alternatively, from the Actions list, select Delete to delete multiple selected hosts at the same time. Then click Delete hosts . Note In a regular deployment, a cluster can have three or more hosts, and three of these must be control plane hosts. If you delete a host that is also a control plane, or if you are left with only two hosts, you will get a message saying that the system is not ready. To restore a host, you will need to reboot it from the discovery ISO. From the Options (...) menu for the host, optionally select View host events . The events in the list are presented chronologically. For multi-host clusters, in the Role column next to the host name, you can click on the menu to change the role of the host. If you do not select a role, the Assisted Installer will assign the role automatically. The minimum hardware requirements for control plane nodes exceed that of worker nodes. If you assign a role to a host, ensure that you assign the control plane role to hosts that meet the minimum hardware requirements. Click the Status link to view hardware, network and operator validations for the host. Click the arrow to the left of a host name to expand the host details. Once all cluster hosts appear with a status of Ready , proceed to the next step. 3.7. Configuring storage disks After discovering and configuring the cluster hosts, you can optionally configure the storage disks for each host. Any host configurations possible here are discussed in the Configuring Hosts section. See the additional resources below for the link. Procedure To the left of the checkbox next to a host name, click to display the storage disks for that host. If there are multiple storage disks for a host, you can select a different disk to act as the installation disk.
Click the Role dropdown list for the disk, and then select Installation disk . The role of the previous installation disk changes to None . All bootable disks are marked for reformatting during the installation by default, with the exception of read-only disks such as CDROMs. Deselect the Format checkbox to prevent a disk from being reformatted. The installation disk must be reformatted. Back up any sensitive data before proceeding. Once all disk drives appear with a status of Ready , proceed to the next step. Additional resources Configuring hosts 3.8. Configuring networking Before installing OpenShift Container Platform, you must configure the cluster network. Procedure In the Networking page, select one of the following if it is not already selected for you: Cluster-Managed Networking: Selecting cluster-managed networking means that the Assisted Installer will configure a standard network topology, including keepalived and Virtual Router Redundancy Protocol (VRRP) for managing the API and Ingress VIP addresses. Note Currently, Cluster-Managed Networking is not supported on IBM zSystems and IBM Power in OpenShift Container Platform version 4.13. Oracle Cloud Infrastructure (OCI) is available for OpenShift Container Platform 4.14 with a user-managed networking configuration only. User-Managed Networking : Selecting user-managed networking allows you to deploy OpenShift Container Platform with a non-standard network topology. For example, if you want to deploy with an external load balancer instead of keepalived and VRRP, or if you intend to deploy the cluster nodes across many distinct L2 network segments. For cluster-managed networking, configure the following settings: Define the Machine network . You can use the default network or select a subnet. Define an API virtual IP . An API virtual IP provides an endpoint for all users to interact with, and configure the platform. Define an Ingress virtual IP . An Ingress virtual IP provides an endpoint for application traffic flowing from outside the cluster. For user-managed networking, configure the following settings: Select your Networking stack type : IPv4 : Select this type when your hosts are only using IPv4. Dual-stack : You can select dual-stack when your hosts are using IPv4 together with IPv6. Define the Machine network . You can use the default network or select a subnet. Define an API virtual IP . An API virtual IP provides an endpoint for all users to interact with, and configure the platform. Define an Ingress virtual IP . An Ingress virtual IP provides an endpoint for application traffic flowing from outside the cluster. Optional: You can select Allocate IPs via DHCP server to automatically allocate the API IP and Ingress IP using the DHCP server. Optional: Select Use advanced networking to configure the following advanced networking properties: Cluster network CIDR : Define an IP address block from which Pod IP addresses are allocated. Cluster network host prefix : Define a subnet prefix length to assign to each node. Service network CIDR : Define an IP address to use for service IP addresses. Network type : Select either Software-Defined Networking (SDN) for standard networking or Open Virtual Networking (OVN) for IPv6, dual-stack networking, and telco features. In OpenShift Container Platform 4.12 and later releases, OVN is the default Container Network Interface (CNI). Additional resources Network configuration 3.9.
Pre-installation validation The Assisted Installer ensures the cluster meets the prerequisites before installation, because it eliminates complex post-installation troubleshooting, thereby saving significant amounts of time and effort. Before installing the cluster, ensure the cluster and each host pass pre-installation validation. Additional resources Pre-installation validation 3.10. Installing the cluster After you have completed the configuration and all the nodes are Ready , you can begin installation. The installation process takes a considerable amount of time, and you can monitor the installation from the Assisted Installer web console. Nodes will reboot during the installation, and they will initialize after installation. Procedure Press Begin installation . Click on the link in the Status column of the Host Inventory list to see the installation status of a particular host. 3.11. Completing the installation After the cluster is installed and initialized, the Assisted Installer indicates that the installation is finished. The Assisted Installer provides the console URL, the kubeadmin username and password, and the kubeconfig file. Additionally, the Assisted Installer provides cluster details including the OpenShift Container Platform version, base domain, CPU architecture, API and Ingress IP addresses, and the cluster and service network IP addresses. Prerequisites You have installed the oc CLI tool. Procedure Make a copy of the kubeadmin username and password. Download the kubeconfig file and copy it to the auth directory under your working directory: USD mkdir -p <working_directory>/auth USD cp kubeconfig <working_directory>/auth Note The kubeconfig file is available for download for 24 hours after completing the installation. Add the kubeconfig file to your environment: USD export KUBECONFIG=<your working directory>/auth/kubeconfig Log in with the oc CLI tool: USD oc login -u kubeadmin -p <password> Replace <password> with the password of the kubeadmin user. Click on the web console URL or click Launch OpenShift Console to open the console. Enter the kubeadmin username and password. Follow the instructions in the OpenShift Container Platform console to configure an identity provider and configure alert receivers. Add a bookmark of the OpenShift Container Platform console. Complete any post-installation platform integration steps. Additional resources Nutanix post-installation configuration vSphere post-installation configuration
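After logging in, a few standard oc checks (generic commands, not steps specific to the Assisted Installer) can confirm that the cluster is healthy:

# Confirm all nodes are Ready.
oc get nodes

# Confirm the cluster Operators have finished rolling out.
oc get clusteroperators

# Print the web console URL for bookmarking.
oc whoami --show-console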
[ "mkdir -p <working_directory>/auth", "cp kubeconfig <working_directory>/auth", "export KUBECONFIG=<your working directory>/auth/kubeconfig", "oc login -u kubeadmin -p <password>" ]
https://docs.redhat.com/en/documentation/openshift_container_platform/4.13/html/assisted_installer_for_openshift_container_platform/installing-with-ui
7.118. luci
7.118. luci 7.118.1. RHBA-2015:1454 - luci bug fix and enhancement update Updated luci packages that fix several bugs and add two enhancements are now available for Red Hat Enterprise Linux 6. The luci package provides a web-based high-availability cluster configuration application built on the TurboGears 2 framework. Bug Fixes BZ# 1136456 When editing the cluster configuration, if an error occurred while attempting to set the new configuration on one or more nodes, luci still attempted to activate the new configuration version. As a consequence, the cluster could fall out of sync. With this update, luci no longer activates a new cluster configuration in the described situation. BZ# 1010400 A new attribute, "cmd_prompt", has been added to the fence_apc fence agent, but luci did not previously support it. Consequently, users could not view and change this new attribute. The fence_apc form has been updated to include support for viewing and setting "cmd_prompt". BZ# 1111249 The "stop" action semantics differ from the "disable" action semantics in the rgmanager utility. Previously, after clicking the "stop" button in the GUI, luci always issued a command that caused the "disable" action to be issued in rgmanager. As a consequence, luci could not issue a command that would cause the rgmanager "stop" action to be issued for a service. This update adds a "stop" action in addition to the "disable" action that is accessible only in expert mode. BZ# 886526 After selecting "add resource" for a service group, a cancel button was missing from the dialog, which created a dead-end in the GUI. As a consequence, users had to reload the page if they clicked the button accidentally or wanted to change their choice after clicking it. This update adds a cancel button to the "add resource" dialog for service groups. BZ# 1100831 Previously, luci did not allow VM resources to have children resources, and after adding a VM to a service group, the "add resource" button was removed so that no further resources could be added. However, the GUI could handle configurations that contained resources with children. As a consequence, even though luci supported the aforementioned configurations, the "add resource" button was removed after adding a VM resource. With this update, the "add resource" button is no longer removed when adding a VM resource to a service group. BZ# 917781 The luci tool allowed setting the "shutdown_wait" attribute for postgres-8 resources, but the resource agent ignored the attribute. Consequently, it was not clear that "shutdown_wait" no longer had any effect. This update adds text for clusters running Red Hat Enterprise Linux 6.2 and later to indicate that the "shutdown_wait" parameter is ignored. BZ# 1204910 Starting with Red Hat Enterprise Linux 6.7, fence_virt is fully supported. Previously, fence_virt was included as a Technology Preview, which was indicated by a label in the GUI. Also, certain labels and text regarding fence_xvm and fence_virt were inconsistent. With this update, the GUI text reflects the current support status for fence_virt and the text is consistent. BZ# 1112297 When making changes to certain resources, service groups, and fence agents while not in expert mode, attributes that could be set with luci only in expert mode could be lost. As a consequence, some configuration parameters could be erroneously removed. With this update, luci no longer removes expert-mode-only attributes. Enhancements BZ# 1210683 Support for configuring the fence_emerson and fence_mpath fence devices has been added to luci.
BZ# 919223 With this update, users can collapse and expand parts of service groups when viewing or editing service groups in luci, which improves the usability, as the configuration screen could previously become too cluttered. Users of luci are advised to upgrade to these updated packages, which fix these bugs and add these enhancements.
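The closing advice to upgrade maps to a short sequence on a Red Hat Enterprise Linux 6 management node; this is a generic sketch rather than text taken from the errata:

# Update luci and restart it to pick up the new version.
yum update luci
service luci restart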
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/6.7_technical_notes/package-luci
function::user_int
function::user_int Name function::user_int - Retrieves an int value stored in user space. Synopsis Arguments addr The user space address to retrieve the int from. General Syntax user_int:long(addr:long) Description Returns the int value from a given user space address. Returns zero when user space data is not accessible.
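A sketch of typical usage, with the probe point and the buf_uaddr convenience variable assumed from the syscall tapset rather than stated in this entry:

# Hypothetical example: interpret the first 4 bytes written by the target
# command as an int (prints 0 if the address is not accessible).
stap -e 'probe syscall.write { if (pid() == target()) printf("user_int = %d\n", user_int(buf_uaddr)) }' -c 'echo hello'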
[ "function user_int:long(addr:long)" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/systemtap_tapset_reference/api-user-int
Chapter 36. Real-time verification and validation of guided decision tables
Chapter 36. Real-time verification and validation of guided decision tables Business Central provides a real-time verification and validation feature for guided decision tables to ensure that your tables are complete and error free. Guided decision tables are validated after each cell change. If a problem in logic is detected, an error notification appears and describes the problem. 36.1. Types of problems in guided decision tables The validation and verification feature detects the following types of problems: Redundancy Redundancy occurs when two rows in a decision table execute the same consequences for the same set of facts. For example, two rows checking a client's birthday and providing a birthday discount may result in a double discount. Subsumption Subsumption is similar to redundancy and occurs when two rules execute the same consequences, but one executes on a subset of facts of the other. For example, consider these two rules: when Person age > 10 then Increase Counter when Person age > 20 then Increase Counter In this case, if a person is 15 years old, only one rule fires and if a person is older than 20, both rules fire. Such cases cause similar trouble during runtime as redundancy. Conflicts A conflicting situation occurs when two similar conditions have different consequences. Conflicts can occur between two rows (rules) or two cells in a decision table. The following example illustrates conflict between two rows in a decision table: when Deposit > 20000 then Approve Loan when Deposit > 20000 then Refuse Loan In this case, there is no way to know if the loan will be approved or not. The following example illustrates conflict between two cells in a decision table: when Age > 25 when Age < 25 A row with conflicting cells never executes. Broken Unique Hit Policy When the Unique Hit policy is applied to a decision table, only one row at a time can be executed and each row must be unique, with no overlap of conditions being met. If more than one row is executed, then the verification report identifies the broken hit policy. For example, consider the following conditions in a table that determines eligibility for a price discount: when Is Student = true when Is Military = true If a customer is both a student and in the military, both conditions apply and break the Unique Hit policy. Rows in this type of table must therefore be created in a way that does not allow multiple rules to fire at one time. For details about hit policies, see Chapter 28, Hit policies for guided decision tables . Deficiency Deficiency is similar to a conflict and occurs when the logic of a rule in a decision table is incomplete. For example, consider the following two deficient rules: when Age > 20 then Approve Loan when Deposit < 20000 then Refuse Loan These two rules may lead to confusion for a person who is over 20 years old and has deposited less than 20000. You can add more constraints to avoid the conflict. Missing Columns When deleted columns result in incomplete or incorrect logic, rules cannot fire properly. This is detected so that you can address the missing columns, or adjust the logic to not rely on intentionally deleted conditions or actions. Incomplete Ranges Ranges of field values are incomplete if a table contains constraints against possible field values but does not define all possible values. The verification report identifies any incomplete ranges provided.
For example, if your table has a check for if an application is approved, the verification report reminds you to make sure you also handle situations where the application was not approved. 36.2. Types of notifications The verification and validation feature uses three types of notifications: Error: A serious problem that may lead to the guided decision table failing to work as designed at run time. Conflicts, for example, are reported as errors. Warning: Likely a serious problem that may not prevent the guided decision table from working but requires attention. Subsumptions, for example, are reported as warnings. Information: A moderate or minor problem that may not prevent the guided decision table from working but requires attention. Missing columns, for example, are reported as information. Note Business Central verification and validation does not prevent you from saving an incorrect change. The feature only reports issues while editing and you can still continue to overlook those and save your changes. 36.3. Disabling verification and validation of guided decision tables The decision table verification and validation feature of Business Central is enabled by default. This feature helps you validate your guided decision tables, but with complex guided decision tables, this feature can hinder decision engine performance. You can disable this feature by setting the org.kie.verification.disable-dtable-realtime-verification system property value to true in your Red Hat Decision Manager distribution. Procedure Navigate to ~/standalone-full.xml and add the following system property: For example, on Red Hat JBoss EAP, you add this system property in USDEAP_HOME/standalone/configuration/standalone-full.xml .
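Alternatively, as a sketch that assumes a default Red Hat JBoss EAP layout, the same property can be passed on the command line when starting the server instead of editing standalone-full.xml:

# Start EAP with real-time decision table verification disabled.
$EAP_HOME/bin/standalone.sh -c standalone-full.xml \
  -Dorg.kie.verification.disable-dtable-realtime-verification=true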
[ "<property name=\"org.kie.verification.disable-dtable-realtime-verification\" value=\"true\"/>" ]
https://docs.redhat.com/en/documentation/red_hat_decision_manager/7.13/html/developing_decision_services_in_red_hat_decision_manager/guided-decision-tables-validation-con
Chapter 4. Deploy standalone Multicloud Object Gateway
Chapter 4. Deploy standalone Multicloud Object Gateway Deploying only the Multicloud Object Gateway component with OpenShift Data Foundation provides flexibility in deployment and helps to reduce resource consumption. After deploying the MCG component, you can create and manage buckets using MCG object browser. For more information, see Creating and managing buckets using MCG object browser . Use this section to deploy only the standalone Multicloud Object Gateway component, which involves the following steps: Installing the Local Storage Operator. Installing Red Hat OpenShift Data Foundation Operator Creating standalone Multicloud Object Gateway Important The Multicloud Object Gateway only has a single copy of the database (NooBaa DB). This means that if the NooBaa DB PVC gets corrupted and cannot be recovered, it can result in total loss of the applicative data residing on the Multicloud Object Gateway. Because of this, Red Hat recommends taking a backup of the NooBaa DB PVC regularly. If NooBaa DB fails and cannot be recovered, then you can revert to the latest backed-up version. For instructions on backing up your NooBaa DB, follow the steps in this knowledgebase article . 4.1. Installing Local Storage Operator Use this procedure to install the Local Storage Operator from the Operator Hub before creating OpenShift Data Foundation clusters on local storage devices. Procedure Log in to the OpenShift Web Console. Click Operators OperatorHub . Type local storage in the Filter by keyword... box to find the Local Storage Operator from the list of operators and click on it. Set the following options on the Install Operator page: Update channel as stable . Installation Mode as A specific namespace on the cluster . Installed Namespace as Operator recommended namespace openshift-local-storage . Approval Strategy as Automatic . Click Install . Verification steps Verify that the Local Storage Operator shows a green tick indicating successful installation. 4.2. Installing Red Hat OpenShift Data Foundation Operator You can install Red Hat OpenShift Data Foundation Operator using the Red Hat OpenShift Container Platform Operator Hub. For information about the hardware and software requirements, see Planning your deployment . Prerequisites Access to an OpenShift Container Platform cluster using an account with cluster-admin and Operator installation permissions. You must have at least three worker nodes in the Red Hat OpenShift Container Platform cluster. Important When you need to override the cluster-wide default node selector for OpenShift Data Foundation, you can use the following command in the command line interface to specify a blank node selector for the openshift-storage namespace (create openshift-storage namespace in this case): Taint a node as infra to ensure only Red Hat OpenShift Data Foundation resources are scheduled on that node. This helps you save on subscription costs. For more information, see How to use dedicated worker nodes for Red Hat OpenShift Data Foundation chapter in the Managing and Allocating Storage Resources guide. Procedure Log in to the OpenShift Web Console. Click Operators OperatorHub . Scroll or type OpenShift Data Foundation into the Filter by keyword box to find the OpenShift Data Foundation Operator. Click Install . Set the following options on the Install Operator page: Update Channel as stable-4.18 . Installation Mode as A specific namespace on the cluster . Installed Namespace as Operator recommended namespace openshift-storage .
If Namespace openshift-storage does not exist, it is created during the operator installation. Select Approval Strategy as Automatic or Manual . If you select Automatic updates, then the Operator Lifecycle Manager (OLM) automatically upgrades the running instance of your Operator without any intervention. If you select Manual updates, then the OLM creates an update request. As a cluster administrator, you must then manually approve that update request to update the Operator to a newer version. Ensure that the Enable option is selected for the Console plugin . Click Install . Verification steps Verify that the OpenShift Data Foundation Operator shows a green tick indicating successful installation. After the operator is successfully installed, a pop-up with a message, Web console update is available appears on the user interface. Click Refresh web console from this pop-up for the console changes to reflect. In the Web Console, navigate to Storage and verify if Data Foundation is available. 4.3. Creating standalone Multicloud Object Gateway on IBM Power You can create only the standalone Multicloud Object Gateway component while deploying OpenShift Data Foundation. Prerequisites Ensure that the OpenShift Data Foundation Operator is installed. (For deploying using local storage devices only) Ensure that Local Storage Operator is installed. To identify storage devices on each node, refer to Finding available storage devices . Procedure Log into the OpenShift Web Console. In openshift-local-storage namespace, click Operators Installed Operators to view the installed operators. Click the Local Storage installed operator. On the Operator Details page, click the Local Volume link. Click Create Local Volume . Click on YAML view for configuring Local Volume. Define a LocalVolume custom resource for filesystem PVs using the following YAML. The above definition selects sda local device from the worker-0 , worker-1 and worker-2 nodes. The localblock storage class is created and persistent volumes are provisioned from sda . Important Specify appropriate values of nodeSelector as per your environment. The device name should be same on all the worker nodes. You can also specify more than one devicePaths. Click Create . In the OpenShift Web Console, click Operators Installed Operators to view all the installed operators. Ensure that the Project selected is openshift-storage . Click OpenShift Data Foundation operator and then click Create StorageSystem . In the Backing storage page, select Multicloud Object Gateway for Deployment type . Select the Use an existing StorageClass option for Backing storage type . Select the Storage Class that you used while installing LocalVolume. Click . Optional: In the Security page, select the Connect to an external key management service checkbox. This is optional for cluster-wide encryption. From the Key Management Service Provider drop down list, either select Vault or Thales CipherTrust Manager (using KMIP) . If you selected Vault , go to the step. If you selected Thales CipherTrust Manager (using KMIP) , go to step iii. Select an Authentication Method . Using Token authentication method Enter a unique Connection Name , host Address of the Vault server ('https://<hostname or ip>'), Port number and Token . Expand Advanced Settings to enter additional settings and certificate details based on your Vault configuration: Enter the Key Value secret path in Backend Path that is dedicated and unique to OpenShift Data Foundation. 
Optional: Enter TLS Server Name and Vault Enterprise Namespace . Upload the respective PEM encoded certificate file to provide the CA Certificate , Client Certificate and Client Private Key . Click Save and skip to step iv. Using Kubernetes authentication method Enter a unique Vault Connection Name , host Address of the Vault server ('https://<hostname or ip>'), Port number and Role name. Expand Advanced Settings to enter additional settings and certificate details based on your Vault configuration: Enter the Key Value secret path in the Backend Path that is dedicated and unique to OpenShift Data Foundation. Optional: Enter TLS Server Name and Authentication Path if applicable. Upload the respective PEM encoded certificate file to provide the CA Certificate , Client Certificate , and Client Private Key . Click Save and skip to step iv. To use Thales CipherTrust Manager (using KMIP) as the KMS provider, follow the steps below: Enter a unique Connection Name for the Key Management service within the project. In the Address and Port sections, enter the IP of Thales CipherTrust Manager and the port where the KMIP interface is enabled. For example: Address : 123.34.3.2 Port : 5696 Upload the Client Certificate , CA certificate , and Client Private Key . If StorageClass encryption is enabled, enter the Unique Identifier to be used for encryption and decryption generated above. The TLS Server field is optional and used when there is no DNS entry for the KMIP endpoint. For example, kmip_all_<port>.ciphertrustmanager.local . Select a Network . Click . In the Review and create page, review the configuration details: To modify any configuration settings, click Back . Click Create StorageSystem . Verification steps Verifying that the OpenShift Data Foundation cluster is healthy In the OpenShift Web Console, click Storage Data Foundation . Click the Storage Systems tab and then click on ocs-storagecluster-storagesystem . In the Status card of the Object tab, verify that both Object Service and Data Resiliency have a green tick. In the Details card, verify that the MCG information is displayed. Verifying the state of the pods Click Workloads Pods from the OpenShift Web Console. Select openshift-storage from the Project drop-down list and verify that the following pods are in Running state. Note If the Show default projects option is disabled, use the toggle button to list all the default projects. Component Corresponding pods OpenShift Data Foundation Operator ocs-operator-* (1 pod on any storage node) ocs-metrics-exporter-* (1 pod on any storage node) odf-operator-controller-manager-* (1 pod on any storage node) odf-console-* (1 pod on any storage node) csi-addons-controller-manager-* (1 pod on any storage node) Rook-ceph Operator rook-ceph-operator-* (1 pod on any storage node) Multicloud Object Gateway noobaa-operator-* (1 pod on any storage node) noobaa-core-* (1 pod on any storage node) noobaa-db-pg-* (1 pod on any storage node) noobaa-endpoint-* (1 pod on any storage node) noobaa-default-backing-store-noobaa-pod-* (1 pod on any storage node)
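You can also confirm the same state from the CLI; these are generic checks (resource names and the noobaa pod naming are assumed) rather than steps from this procedure:

# All Multicloud Object Gateway pods should be Running.
oc get pods -n openshift-storage | grep noobaa

# The NooBaa system and the storage cluster should report a Ready phase.
oc get noobaa,storagecluster -n openshift-storage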
[ "oc annotate namespace openshift-storage openshift.io/node-selector=", "apiVersion: local.storage.openshift.io/v1 kind: LocalVolume metadata: name: localblock namespace: openshift-local-storage spec: logLevel: Normal managementState: Managed nodeSelector: nodeSelectorTerms: - matchExpressions: - key: kubernetes.io/hostname operator: In values: - worker-0 - worker-1 - worker-2 storageClassDevices: - devicePaths: - /dev/sda storageClassName: localblock volumeMode: Filesystem" ]
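For reference, a minimal verification sketch, assuming the oc CLI is logged in to the cluster and the example names above are used unchanged (the localblock storage class and the openshift-local-storage and openshift-storage namespaces); substitute your own names where they differ.

# Confirm that the localblock storage class exists and that PVs were provisioned from /dev/sda
oc get storageclass localblock
oc get pv -o wide | grep localblock

# Confirm the Local Storage Operator pods in the openshift-local-storage namespace
oc get pods -n openshift-local-storage

# After the StorageSystem is created, confirm the Multicloud Object Gateway pods are Running
oc get pods -n openshift-storage | grep noobaa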
https://docs.redhat.com/en/documentation/red_hat_openshift_data_foundation/4.18/html/deploying_openshift_data_foundation_using_ibm_power/deploy-standalone-multicloud-object-gateway-ibm-power
Chapter 6. Migrating application workloads
Chapter 6. Migrating application workloads You can migrate application workloads from the internal mode storage classes to the external mode storage classes by using the Migration Toolkit for Containers, with the same cluster acting as both the source and the target.
null
https://docs.redhat.com/en/documentation/red_hat_openshift_data_foundation/4.18/html/deploying_multiple_openshift_data_foundation_storage_clusters/proc_migrating-application-workloads_rhodf
Chapter 1. Fixed issues
Chapter 1. Fixed issues 1.1. AMQ JMS ENTMQCL-2337 - Failure decoding large messages with special characters In earlier releases of the product, the client failed to decode messages containing multi-byte characters if the characters spanned transfer frames. In this release, large messages with multi-byte characters are correctly decoded. ENTMQCL-2339 - The client can ignore transactions right after failover In earlier releases of the product, the client sometimes failed to mark a message transfer as belonging to a transaction directly after connection failover. In this release, transactions are correctly recovered after failover. 1.2. AMQ .NET ENTMQCL-2241 - Client example sends empty begin frames In earlier releases of the product, the ReconnectSender example program sent an AMQP begin frame lacking required fields. In this release, the example correctly provides the mandatory fields. 1.3. AMQ C++ ENTMQCL-1863 - On receiving a forced close, send a corresponding protocol close and close the socket In earlier releases of the product, when the client received a forced AMQP connection close from the remote peer, it then closed its local TCP socket without sending a matching AMQP close in return. In this release, the client sends an AMQP close before closing the socket.
null
https://docs.redhat.com/en/documentation/red_hat_amq/2021.q1/html/amq_clients_2.9_release_notes/fixed_issues
Chapter 18. Creating DRL rules in Business Central
Chapter 18. Creating DRL rules in Business Central You can create and manage DRL rules for your project in Business Central. In each DRL rule file, you define rule conditions, actions, and other components related to the rule, based on the data objects you create or import in the package. Procedure In Business Central, go to Menu Design Projects and click the project name. Click Add Asset DRL file . Enter an informative DRL file name and select the appropriate Package . The package that you specify must be the same package where the required data objects have been assigned or will be assigned. You can also select Show declared DSL sentences if any domain specific language (DSL) assets have been defined in your project. These DSL assets will then become usable objects for conditions and actions that you define in the DRL designer. Click Ok to create the rule asset. The new DRL file is now listed in the DRL panel of the Project Explorer , or in the DSLR panel if you selected the Show declared DSL sentences option. The package to which you assigned this DRL file is listed at the top of the file. In the Fact types list in the left panel of the DRL designer, confirm that all data objects and data object fields (expand each) required for your rules are listed. If not, you can either import relevant data objects from other packages by using import statements in the DRL file, or create data objects within your package. After all data objects are in place, return to the Model tab of the DRL designer and define the DRL file with any of the following components: Components in a DRL file package : (automatic) This was defined for you when you created the DRL file and selected the package. import : Use this to identify the data objects from either this package or another package that you want to use in the DRL file. Specify the package and data object in the format packageName.objectName , with multiple imports on separate lines. Importing data objects function : (optional) Use this to include a function to be used by rules in the DRL file. Functions in DRL files put semantic code in your rule source file instead of in Java classes. Functions are especially useful if an action ( then ) part of a rule is used repeatedly and only the parameters differ for each rule. Above the rules in the DRL file, you can declare the function or import a static method from a helper class as a function, and then use the function by name in an action ( then ) part of the rule. Declaring and using a function with a rule (option 1) Importing and using the function with a rule (option 2) query : (optional) Use this to search the decision engine for facts related to the rules in the DRL file. You add the query definitions in DRL files and then obtain the matching results in your application code. Queries search for a set of defined conditions and do not require when or then specifications. Query names are global to the KIE base and therefore must be unique among all other rule queries in the project. To return the results of a query, construct a traditional QueryResults definition using ksession.getQueryResults("name") , where "name" is the query name. This returns a list of query results, which enable you to retrieve the objects that matched the query. Define the query and query results parameters above the rules in the DRL file. 
Example query definition in a DRL file Example application code to obtain query results QueryResults results = ksession.getQueryResults( "people under the age of 21" ); System.out.println( "we have " + results.size() + " people under the age of 21" ); declare : (optional) Use this to declare a new fact type to be used by rules in the DRL file. The default fact type in the java.lang package of Red Hat Decision Manager is Object , but you can declare other types in DRL files as needed. Declaring fact types in DRL files enables you to define a new fact model directly in the decision engine, without creating models in a lower-level language like Java. Declaring and using a new fact type global : (optional) Use this to include a global variable to be used by rules in the DRL file. Global variables typically provide data or services for the rules, such as application services used in rule consequences, and return data from rules, such as logs or values added in rule consequences. Set the global value in the working memory of the decision engine through a KIE session configuration or REST operation, declare the global variable above the rules in the DRL file, and then use it in an action ( then ) part of the rule. For multiple global variables, use separate lines in the DRL file. Setting the global list configuration for the decision engine Defining the global list in a rule Warning Do not use global variables to establish conditions in rules unless a global variable has a constant immutable value. Global variables are not inserted into the working memory of the decision engine, so the decision engine cannot track value changes of variables. Do not use global variables to share data between rules. Rules always reason and react to the working memory state, so if you want to pass data from rule to rule, assert the data as facts into the working memory of the decision engine. rule : Use this to define each rule in the DRL file. Rules consist of a rule name in the format rule "name" , followed by optional attributes that define rule behavior (such as salience or no-loop ), followed by when and then definitions. Each rule must have a unique name within the rule package. The when part of the rule contains the conditions that must be met to execute an action. For example, if a bank requires loan applicants to have over 21 years of age, then the when condition for an "Underage" rule would be Applicant( age < 21 ) . The then part of the rule contains the actions to be performed when the conditional part of the rule has been met. For example, when the loan applicant is under 21 years old, the then action would be setApproved( false ) , declining the loan because the applicant is under age. Rule for loan application age limit At a minimum, each DRL file must specify the package , import , and rule components. All other components are optional. The following is an example DRL file in a loan application decision service: Example DRL file for a loan application Figure 18.1. Example DRL file for a loan application in Business Central After you define all components of the rule, click Validate in the upper-right toolbar of the DRL designer to validate the DRL file. If the file validation fails, address any problems described in the error message, review all syntax and components in the DRL file, and try again to validate the file until the file passes. Click Save in the DRL designer to save your work. 18.1. 
Adding WHEN conditions in DRL rules The when part of the rule contains the conditions that must be met to execute an action. For example, if a bank requires loan applicants to have over 21 years of age, then the when condition of an "Underage" rule would be Applicant( age < 21 ) . Conditions consist of a series of stated patterns and constraints, with optional bindings and other supported DRL elements, based on the available data objects in the package. Prerequisites The package is defined at the top of the DRL file. This should have been done for you when you created the file. The import list of data objects used in the rule is defined below the package line of the DRL file. Data objects can be from this package or from another package in Business Central. The rule name is defined in the format rule "name" below the package , import , and other lines that apply to the entire DRL file. The same rule name cannot be used more than once in the same package. Optional rule attributes (such as salience or no-loop ) that define rule behavior are below the rule name, before the when section. Procedure In the DRL designer, enter when within the rule to begin adding condition statements. The when section consists of zero or more fact patterns that define conditions for the rule. If the when section is empty, then the conditions are considered to be true and the actions in the then section are executed the first time a fireAllRules() call is made in the decision engine. This is useful if you want to use rules to set up the decision engine state. Example rule without conditions Enter a pattern for the first condition to be met, with optional constraints, bindings, and other supported DRL elements. A basic pattern format is <patternBinding> : <patternType> ( <constraints> ) . Patterns are based on the available data objects in the package and define the conditions to be met in order to trigger actions in the then section. Simple pattern: A simple pattern with no constraints matches against a fact of the given type. For example, the following condition is only that the applicant exists. Pattern with constraints: A pattern with constraints matches against a fact of the given type and the additional restrictions in parentheses that are true or false. For example, the following condition is that the applicant is under the age of 21. Pattern with binding: A binding on a pattern is a shorthand reference that other components of the rule can use to refer back to the defined pattern. For example, the following binding a on LoanApplication is used in a related action for underage applicants. Continue defining all condition patterns that apply to this rule. The following are some of the keyword options for defining DRL conditions: and : Use this to group conditional components into a logical conjunction. Infix and prefix and are supported. By default, all listed patterns are combined with and when no conjunction is specified. or : Use this to group conditional components into a logical disjunction. Infix and prefix or are supported. exists : Use this to specify facts and constraints that must exist. This option is triggered on only the first match, not subsequent matches. If you use this element with multiple patterns, enclose the patterns with parentheses () . not : Use this to specify facts and constraints that must not exist. forall : Use this to verify whether all facts that match the first pattern match all the remaining patterns. When a forall construct is satisfied, the rule evaluates to true . 
from : Use this to specify a data source for a pattern. entry-point : Use this to define an Entry Point corresponding to a data source for the pattern. Typically used with from . collect : Use this to define a collection of objects that the rule can use as part of the condition. In the example, all pending applications in the decision engine for each given mortgage are grouped in a List . If three or more pending applications are found, the rule is executed. accumulate : Use this to iterate over a collection of objects, execute custom actions for each of the elements, and return one or more result objects (if the constraints evaluate to true ). This option is a more flexible and powerful form of collect . Use the format accumulate( <source pattern>; <functions> [;<constraints>] ) . In the example, min , max , and average are accumulate functions that calculate the minimum, maximum, and average temperature values over all the readings for each sensor. Other supported functions include count , sum , variance , standardDeviation , collectList , and collectSet . Note For more information about DRL rule conditions, see Section 16.8, "Rule conditions in DRL (WHEN)" . After you define all condition components of the rule, click Validate in the upper-right toolbar of the DRL designer to validate the DRL file. If the file validation fails, address any problems described in the error message, review all syntax and components in the DRL file, and try again to validate the file until the file passes. Click Save in the DRL designer to save your work. 18.2. Adding THEN actions in DRL rules The then part of the rule contains the actions to be performed when the conditional part of the rule has been met. For example, when a loan applicant is under 21 years old, the then action of an "Underage" rule would be setApproved( false ) , declining the loan because the applicant is under age. Actions consist of one or more methods that execute consequences based on the rule conditions and on available data objects in the package. The main purpose of rule actions is to insert, delete, or modify data in the working memory of the decision engine. Prerequisites The package is defined at the top of the DRL file. This should have been done for you when you created the file. The import list of data objects used in the rule is defined below the package line of the DRL file. Data objects can be from this package or from another package in Business Central. The rule name is defined in the format rule "name" below the package , import , and other lines that apply to the entire DRL file. The same rule name cannot be used more than once in the same package. Optional rule attributes (such as salience or no-loop ) that define rule behavior are below the rule name, before the when section. Procedure In the DRL designer, enter then after the when section of the rule to begin adding action statements. Enter one or more actions to be executed on fact patterns based on the conditions for the rule. The following are some of the keyword options for defining DRL actions: set : Use this to set the value of a field. modify : Use this to specify fields to be modified for a fact and to notify the decision engine of the change. This method provides a structured approach to fact updates. It combines the update operation with setter calls to change object fields. update : Use this to specify fields and the entire related fact to be updated and to notify the decision engine of the change. 
After a fact has changed, you must call update before changing another fact that might be affected by the updated values. To avoid this added step, use the modify method instead. insert : Use this to insert a new fact into the decision engine. insertLogical : Use this to insert a new fact logically into the decision engine. The decision engine is responsible for logical decisions on insertions and retractions of facts. After regular or stated insertions, facts must be retracted explicitly. After logical insertions, the facts that were inserted are automatically retracted when the conditions in the rules that inserted the facts are no longer true. delete : Use this to remove an object from the decision engine. The keyword retract is also supported in DRL and executes the same action, but delete is typically preferred in DRL code for consistency with the keyword insert . Note For more information about DRL rule actions, see Section 16.9, "Rule actions in DRL (THEN)" . After you define all action components of the rule, click Validate in the upper-right toolbar of the DRL designer to validate the DRL file. If the file validation fails, address any problems described in the error message, review all syntax and components in the DRL file, and try again to validate the file until the file passes. Click Save in the DRL designer to save your work.
[ "package import function // Optional query // Optional declare // Optional global // Optional rule \"rule name\" // Attributes when // Conditions then // Actions end rule \"rule2 name\"", "import org.mortgages.LoanApplication;", "function String hello(String applicantName) { return \"Hello \" + applicantName + \"!\"; } rule \"Using a function\" when // Empty then System.out.println( hello( \"James\" ) ); end", "import function my.package.applicant.hello; rule \"Using a function\" when // Empty then System.out.println( hello( \"James\" ) ); end", "query \"people under the age of 21\" USDperson : Person( age < 21 ) end", "QueryResults results = ksession.getQueryResults( \"people under the age of 21\" ); System.out.println( \"we have \" + results.size() + \" people under the age of 21\" );", "declare Person name : String dateOfBirth : java.util.Date address : Address end rule \"Using a declared type\" when USDp : Person( name == \"James\" ) then // Insert Mark, who is a customer of James. Person mark = new Person(); mark.setName( \"Mark\" ); insert( mark ); end", "List<String> list = new ArrayList<>(); KieSession kieSession = kiebase.newKieSession(); kieSession.setGlobal( \"myGlobalList\", list );", "global java.util.List myGlobalList; rule \"Using a global\" when // Empty then myGlobalList.add( \"My global list\" ); end", "rule \"Underage\" salience 15 when USDapplication : LoanApplication() Applicant( age < 21 ) then USDapplication.setApproved( false ); USDapplication.setExplanation( \"Underage\" ); end", "package org.mortgages; import org.mortgages.LoanApplication; import org.mortgages.Bankruptcy; import org.mortgages.Applicant; rule \"Bankruptcy history\" salience 10 when USDa : LoanApplication() exists (Bankruptcy( yearOfOccurrence > 1990 || amountOwed > 10000 )) then USDa.setApproved( false ); USDa.setExplanation( \"has been bankrupt\" ); delete( USDa ); end rule \"Underage\" salience 15 when USDapplication : LoanApplication() Applicant( age < 21 ) then USDapplication.setApproved( false ); USDapplication.setExplanation( \"Underage\" ); delete( USDapplication ); end", "rule \"Always insert applicant\" when // Empty then // Actions to be executed once insert( new Applicant() ); end // The rule is internally rewritten in the following way: rule \"Always insert applicant\" when eval( true ) then insert( new Applicant() ); end", "when Applicant()", "when Applicant( age < 21 )", "when USDa : LoanApplication() Applicant( age < 21 ) then USDa.setApproved( false ); USDa.setExplanation( \"Underage\" )", "// All of the following examples are interpreted the same way: USDa : LoanApplication() and Applicant( age < 21 ) USDa : LoanApplication() and Applicant( age < 21 ) USDa : LoanApplication() Applicant( age < 21 ) (and USDa : LoanApplication() Applicant( age < 21 ))", "// All of the following examples are interpreted the same way: Bankruptcy( amountOwed == 100000 ) or IncomeSource( amount == 20000 ) Bankruptcy( amountOwed == 100000 ) or IncomeSource( amount == 20000 ) (or Bankruptcy( amountOwed == 100000 ) IncomeSource( amount == 20000 ))", "exists ( Bankruptcy( yearOfOccurrence > 1990 || amountOwed > 10000 ) )", "not ( Applicant( age < 21 ) )", "forall( USDapp : Applicant( age < 21 ) Applicant( this == USDapp, status = 'underage' ) )", "Applicant( ApplicantAddress : address ) Address( zipcode == \"23920W\" ) from ApplicantAddress", "Applicant() from entry-point \"LoanApplication\"", "USDm : Mortgage() USDa : List( size >= 3 ) from collect( LoanApplication( Mortgage == USDm, status == 'pending' ) 
)", "USDs : Sensor() accumulate( Reading( sensor == USDs, USDtemp : temperature ); USDmin : min( USDtemp ), USDmax : max( USDtemp ), USDavg : average( USDtemp ); USDmin < 20, USDavg > 70 )", "USDapplication.setApproved ( false ); USDapplication.setExplanation( \"has been bankrupt\" );", "modify( LoanApplication ) { setAmount( 100 ), setApproved ( true ) }", "LoanApplication.setAmount( 100 ); update( LoanApplication );", "insert( new Applicant() );", "insertLogical( new Applicant() );", "delete( Applicant );" ]
https://docs.redhat.com/en/documentation/red_hat_decision_manager/7.13/html/developing_decision_services_in_red_hat_decision_manager/drl-rules-central-create-proc
Chapter 1. Overview
Chapter 1. Overview Red Hat Quay includes the following features: High availability Geo-replication Repository mirroring Docker v2, schema 2 (multi-arch) support Continuous integration Security scanning with Clair Custom log rotation Zero downtime garbage collection 24/7 support Red Hat Quay provides support for the following: Multiple authentication and access methods Multiple storage backends Custom certificates for Quay, Clair, and storage backends Application registries Different container image types 1.1. Architecture Red Hat Quay includes several core components, both internal and external. 1.1.1. Internal components Red Hat Quay includes the following internal components: Quay (container registry) . Runs the Quay container as a service, consisting of several components in the pod. Clair . Scans container images for vulnerabilities and suggests fixes. 1.1.2. External components Red Hat Quay includes the following external components: Database . Used by Red Hat Quay as its primary metadata storage. Note that this is not for image storage. Redis (key-value store) . Stores live builder logs and the Red Hat Quay tutorial. Also includes the locking mechanism that is required for garbage collection. Cloud storage . For supported deployments, one of the following storage types must be used: Public cloud storage . In public cloud environments, you should use the cloud provider's object storage, such as Amazon Web Services's Amazon S3 or Google Cloud's Google Cloud Storage. Private cloud storage . In private clouds, an S3 or Swift compliant Object Store is needed, such as Ceph RADOS, or OpenStack Swift. Warning Do not use "Locally mounted directory" Storage Engine for any production configurations. Mounted NFS volumes are not supported. Local storage is meant for Red Hat Quay test-only installations.
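As an illustration of the external components, the following is a minimal proof-of-concept sketch that runs the database and Redis with podman; the image references, credentials, and published ports are assumptions for demonstration and are not mandated by this overview.

# Hypothetical PostgreSQL metadata database for a test deployment
podman run -d --name quay-postgres \
  -e POSTGRESQL_USER=quayuser \
  -e POSTGRESQL_PASSWORD=quaypass \
  -e POSTGRESQL_DATABASE=quay \
  -p 5432:5432 \
  registry.redhat.io/rhel8/postgresql-13

# Hypothetical Redis key-value store for builder logs and the tutorial
podman run -d --name quay-redis \
  -e REDIS_PASSWORD=strongpassword \
  -p 6379:6379 \
  registry.redhat.io/rhel8/redis-6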
null
https://docs.redhat.com/en/documentation/red_hat_quay/3.9/html/deploy_red_hat_quay_for_proof-of-concept_non-production_purposes/poc-overview
Chapter 3. Clustering
Chapter 3. Clustering The PCS cluster stop operation now completes successfully when cluster nodes include resources that require DLM When stopping the cluster on all nodes by running pcs cluster stop --all , resources that require the Distributed Lock Manager (DLM), such as gfs2 or clustered logical volumes, in some cases lost quorum before they were able to shut down. As a consequence, the stop operation became unresponsive. With this update, pcs cluster stop --all stops the cman service on all nodes only after Pacemaker has stopped those nodes. As a result, quorum is maintained while all resources are stopping, and the operation is thus able to complete successfully. (BZ# 1322595 , BZ#1353738) The rgmanager daemon can now correctly start clustered services on surviving nodes when quorum is regained With central processing mode enabled, when quorum was dissolved and regained, the rgmanager daemon stopped working on a surviving cluster node. With this update, the configuration tree is repopulated after quorum is regained. As a result, clustered services start up on the surviving cluster node as expected in the described scenario. (BZ#1084053) Short time between the start of rgmanager and clustat no longer leads to rgmanager crashing When the clustat utility was run shortly after the rgmanager daemon started but before it completely finished initializing, rgmanager was susceptible to unexpected termination. This bug has been fixed and rgmanager now starts without crashing in this scenario. (BZ#1228170) rgmanager exits without problems after cman is stopped When the cman service was stopped before the rgmanager daemon, rgmanager in some cases exited unexpectedly on cluster nodes. With this update, the cpg_lock() function has been fixed and rgmanager exits gracefully in the described scenario. (BZ#1342825) Time-related values of cluster resource configuration are now evaluated properly Previously, time-related resource values in actual use could differ from the values configured in the cluster.conf file, especially at the initial configuration load. This could cause the rgmanager daemon to behave unpredictably. With this fix, rgmanager behaves exactly as configured with regards to resources and respective time-related values. (BZ# 1414139 )
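For context, a minimal sketch of the operations exercised by these fixes on a configured RHEL 6 High Availability node; it assumes the pcs and clustat tools are installed and a cluster is already defined.

# Stop the cluster on all nodes; with the fix, cman stops only after Pacemaker,
# so DLM-backed resources (gfs2, clustered logical volumes) retain quorum while stopping
pcs cluster stop --all

# Query cluster membership and rgmanager service state; with the fix, running this
# shortly after rgmanager starts no longer risks crashing the daemon
clustat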
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/6.9_technical_notes/bug_fixes_clustering
Providing feedback on Red Hat documentation
Providing feedback on Red Hat documentation We appreciate your feedback on our documentation. Let us know how we can improve it. Submitting feedback through Jira (account required) Log in to the Jira website. Click Create in the top navigation bar. Enter a descriptive title in the Summary field. Enter your suggestion for improvement in the Description field. Include links to the relevant parts of the documentation. Click Create at the bottom of the dialogue.
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/9/html/9.2_release_notes/proc_providing-feedback-on-red-hat-documentation_release-notes
Chapter 14. code
Chapter 14. code This chapter describes the commands under the code command. 14.1. code source content show Show workflow definition. Usage: Table 14.1. Positional arguments Value Summary identifier Code source id or name. Table 14.2. Command arguments Value Summary -h, --help Show this help message and exit --namespace [NAMESPACE] Namespace to get the code source from. 14.2. code source create Create new code source. Usage: Table 14.3. Positional arguments Value Summary name Code source name. content Code source content file. Table 14.4. Command arguments Value Summary -h, --help Show this help message and exit --namespace [NAMESPACE] Namespace to create the code source within. --public With this flag the code source will be marked as "public". Table 14.5. Output formatter options Value Summary -f {json,shell,table,value,yaml}, --format {json,shell,table,value,yaml} The output format, defaults to table -c COLUMN, --column COLUMN Specify the column(s) to include, can be repeated to show multiple columns Table 14.6. JSON formatter options Value Summary --noindent Whether to disable indenting the json Table 14.7. Shell formatter options Value Summary --prefix PREFIX Add a prefix to all variable names Table 14.8. Table formatter options Value Summary --max-width <integer> Maximum display width, <1 to disable. you can also use the CLIFF_MAX_TERM_WIDTH environment variable, but the parameter takes precedence. --fit-width Fit the table to the display width. implied if --max- width greater than 0. Set the environment variable CLIFF_FIT_WIDTH=1 to always enable --print-empty Print empty table if there is no data to show. 14.3. code source delete Delete workflow. Usage: Table 14.9. Positional arguments Value Summary identifier Code source name or id (can be repeated multiple times). Table 14.10. Command arguments Value Summary -h, --help Show this help message and exit --namespace [NAMESPACE] Namespace to delete the code source(s) from. 14.4. code source list List all workflows. Usage: Table 14.11. Command arguments Value Summary -h, --help Show this help message and exit --marker [MARKER] The last execution uuid of the page, displays list of executions after "marker". --limit [LIMIT] Maximum number of entries to return in a single result. --sort_keys [SORT_KEYS] Comma-separated list of sort keys to sort results by. Default: created_at. Example: mistral execution-list --sort_keys=id,description --sort_dirs [SORT_DIRS] Comma-separated list of sort directions. default: asc. Example: mistral execution-list --sort_keys=id,description --sort_dirs=asc,desc --filter FILTERS Filters. can be repeated. Table 14.12. Output formatter options Value Summary -f {csv,json,table,value,yaml}, --format {csv,json,table,value,yaml} The output format, defaults to table -c COLUMN, --column COLUMN Specify the column(s) to include, can be repeated to show multiple columns --sort-column SORT_COLUMN Specify the column(s) to sort the data (columns specified first have a priority, non-existing columns are ignored), can be repeated --sort-ascending Sort the column(s) in ascending order --sort-descending Sort the column(s) in descending order Table 14.13. CSV formatter options Value Summary --quote {all,minimal,none,nonnumeric} When to include quotes, defaults to nonnumeric Table 14.14. JSON formatter options Value Summary --noindent Whether to disable indenting the json Table 14.15. Table formatter options Value Summary --max-width <integer> Maximum display width, <1 to disable. 
you can also use the CLIFF_MAX_TERM_WIDTH environment variable, but the parameter takes precedence. --fit-width Fit the table to the display width. implied if --max- width greater than 0. Set the environment variable CLIFF_FIT_WIDTH=1 to always enable --print-empty Print empty table if there is no data to show. 14.5. code source show Show specific code source. Usage: Table 14.16. Positional arguments Value Summary identifier Code source id or name. Table 14.17. Command arguments Value Summary -h, --help Show this help message and exit --namespace [NAMESPACE] Namespace to get the code source from. Table 14.18. Output formatter options Value Summary -f {json,shell,table,value,yaml}, --format {json,shell,table,value,yaml} The output format, defaults to table -c COLUMN, --column COLUMN Specify the column(s) to include, can be repeated to show multiple columns Table 14.19. JSON formatter options Value Summary --noindent Whether to disable indenting the json Table 14.20. Shell formatter options Value Summary --prefix PREFIX Add a prefix to all variable names Table 14.21. Table formatter options Value Summary --max-width <integer> Maximum display width, <1 to disable. you can also use the CLIFF_MAX_TERM_WIDTH environment variable, but the parameter takes precedence. --fit-width Fit the table to the display width. implied if --max- width greater than 0. Set the environment variable CLIFF_FIT_WIDTH=1 to always enable --print-empty Print empty table if there is no data to show. 14.6. code source update Update workflow. Usage: Table 14.22. Positional arguments Value Summary identifier Code source identifier (name or id). content Code source content Table 14.23. Command arguments Value Summary -h, --help Show this help message and exit --id ID Workflow id. --namespace [NAMESPACE] Namespace of the workflow. --public With this flag workflow will be marked as "public". Table 14.24. Output formatter options Value Summary -f {json,shell,table,value,yaml}, --format {json,shell,table,value,yaml} The output format, defaults to table -c COLUMN, --column COLUMN Specify the column(s) to include, can be repeated to show multiple columns Table 14.25. JSON formatter options Value Summary --noindent Whether to disable indenting the json Table 14.26. Shell formatter options Value Summary --prefix PREFIX Add a prefix to all variable names Table 14.27. Table formatter options Value Summary --max-width <integer> Maximum display width, <1 to disable. you can also use the CLIFF_MAX_TERM_WIDTH environment variable, but the parameter takes precedence. --fit-width Fit the table to the display width. implied if --max- width greater than 0. Set the environment variable CLIFF_FIT_WIDTH=1 to always enable --print-empty Print empty table if there is no data to show.
[ "openstack code source content show [-h] [--namespace [NAMESPACE]] identifier", "openstack code source create [-h] [-f {json,shell,table,value,yaml}] [-c COLUMN] [--noindent] [--prefix PREFIX] [--max-width <integer>] [--fit-width] [--print-empty] [--namespace [NAMESPACE]] [--public] name content", "openstack code source delete [-h] [--namespace [NAMESPACE]] identifier [identifier ...]", "openstack code source list [-h] [-f {csv,json,table,value,yaml}] [-c COLUMN] [--quote {all,minimal,none,nonnumeric}] [--noindent] [--max-width <integer>] [--fit-width] [--print-empty] [--sort-column SORT_COLUMN] [--sort-ascending | --sort-descending] [--marker [MARKER]] [--limit [LIMIT]] [--sort_keys [SORT_KEYS]] [--sort_dirs [SORT_DIRS]] [--filter FILTERS]", "openstack code source show [-h] [-f {json,shell,table,value,yaml}] [-c COLUMN] [--noindent] [--prefix PREFIX] [--max-width <integer>] [--fit-width] [--print-empty] [--namespace [NAMESPACE]] identifier", "openstack code source update [-h] [-f {json,shell,table,value,yaml}] [-c COLUMN] [--noindent] [--prefix PREFIX] [--max-width <integer>] [--fit-width] [--print-empty] [--id ID] [--namespace [NAMESPACE]] [--public] identifier content" ]
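For illustration, a hypothetical end-to-end sequence built from the usage shown above; the code source name, namespace, and content file paths are placeholders rather than values defined by this reference.

# Create a code source from a local content file in a chosen namespace
openstack code source create --namespace my-namespace my_module /path/to/my_module.py

# List code sources, sorted by creation time in ascending order
openstack code source list --sort_keys created_at --sort_dirs asc

# Show the stored content, update it from a new file, then delete it
openstack code source content show --namespace my-namespace my_module
openstack code source update --namespace my-namespace my_module /path/to/my_module_v2.py
openstack code source delete --namespace my-namespace my_module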
https://docs.redhat.com/en/documentation/red_hat_openstack_platform/17.1/html/command_line_interface_reference/code
Chapter 6. GenericKafkaListener schema reference
Chapter 6. GenericKafkaListener schema reference Used in: KafkaClusterSpec Full list of GenericKafkaListener schema properties Configures listeners to connect to Kafka brokers within and outside OpenShift. You configure the listeners in the Kafka resource. Example Kafka resource showing listener configuration apiVersion: kafka.strimzi.io/v1beta2 kind: Kafka metadata: name: my-cluster spec: kafka: #... listeners: - name: plain port: 9092 type: internal tls: false - name: tls port: 9093 type: internal tls: true authentication: type: tls - name: external1 port: 9094 type: route tls: true - name: external2 port: 9095 type: ingress tls: true authentication: type: tls configuration: bootstrap: host: bootstrap.myingress.com brokers: - broker: 0 host: broker-0.myingress.com - broker: 1 host: broker-1.myingress.com - broker: 2 host: broker-2.myingress.com #... 6.1. listeners You configure Kafka broker listeners using the listeners property in the Kafka resource. Listeners are defined as an array. Example listener configuration listeners: - name: plain port: 9092 type: internal tls: false The name and port must be unique within the Kafka cluster. By specifying a unique name and port for each listener, you can configure multiple listeners. The name can be up to 25 characters long, comprising lower-case letters and numbers. 6.2. port The port number is the port used in the Kafka cluster, which might not be the same port used for access by a client. loadbalancer listeners use the specified port number, as do internal and cluster-ip listeners ingress and route listeners use port 443 for access nodeport listeners use the port number assigned by OpenShift For client connection, use the address and port for the bootstrap service of the listener. You can retrieve this from the status of the Kafka resource. Example command to retrieve the address and port for client connection oc get kafka <kafka_cluster_name> -o=jsonpath='{.status.listeners[?(@.name==" <listener_name> ")].bootstrapServers}{"\n"}' Important When configuring listeners for client access to brokers, you can use port 9092 or higher (9093, 9094, and so on), but with a few exceptions. The listeners cannot be configured to use the ports reserved for interbroker communication (9090 and 9091), Prometheus metrics (9404), and JMX (Java Management Extensions) monitoring (9999). 6.3. type The type is set as internal , or for external listeners, as route , loadbalancer , nodeport , ingress or cluster-ip . You can also configure a cluster-ip listener, a type of internal listener you can use to build custom access mechanisms. internal You can configure internal listeners with or without encryption using the tls property. Example internal listener configuration #... spec: kafka: #... listeners: #... - name: plain port: 9092 type: internal tls: false - name: tls port: 9093 type: internal tls: true authentication: type: tls #... route Configures an external listener to expose Kafka using OpenShift Routes and the HAProxy router. A dedicated Route is created for every Kafka broker pod. An additional Route is created to serve as a Kafka bootstrap address. Kafka clients can use these Routes to connect to Kafka on port 443. The client connects on port 443, the default router port, but traffic is then routed to the port you configure, which is 9094 in this example. Example route listener configuration #... spec: kafka: #... listeners: #... - name: external1 port: 9094 type: route tls: true #... 
ingress Configures an external listener to expose Kafka using Kubernetes Ingress and the Ingress NGINX Controller for Kubernetes . A dedicated Ingress resource is created for every Kafka broker pod. An additional Ingress resource is created to serve as a Kafka bootstrap address. Kafka clients can use these Ingress resources to connect to Kafka on port 443. The client connects on port 443, the default controller port, but traffic is then routed to the port you configure, which is 9095 in the following example. You must specify the hostnames used by the bootstrap and per-broker services using GenericKafkaListenerConfigurationBootstrap and GenericKafkaListenerConfigurationBroker properties. Example ingress listener configuration #... spec: kafka: #... listeners: #... - name: external2 port: 9095 type: ingress tls: true authentication: type: tls configuration: bootstrap: host: bootstrap.myingress.com brokers: - broker: 0 host: broker-0.myingress.com - broker: 1 host: broker-1.myingress.com - broker: 2 host: broker-2.myingress.com #... Note External listeners using Ingress are currently only tested with the Ingress NGINX Controller for Kubernetes . loadbalancer Configures an external listener to expose Kafka using a Loadbalancer type Service . A new loadbalancer service is created for every Kafka broker pod. An additional loadbalancer is created to serve as a Kafka bootstrap address. Loadbalancers listen to the specified port number, which is port 9094 in the following example. You can use the loadBalancerSourceRanges property to configure source ranges to restrict access to the specified IP addresses. Example loadbalancer listener configuration #... spec: kafka: #... listeners: - name: external3 port: 9094 type: loadbalancer tls: true configuration: loadBalancerSourceRanges: - 10.0.0.0/8 - 88.208.76.87/32 #... nodeport Configures an external listener to expose Kafka using a NodePort type Service . Kafka clients connect directly to the nodes of OpenShift. An additional NodePort type of service is created to serve as a Kafka bootstrap address. When configuring the advertised addresses for the Kafka broker pods, AMQ Streams uses the address of the node on which the given pod is running. You can use preferredNodePortAddressType property to configure the first address type checked as the node address . Example nodeport listener configuration #... spec: kafka: #... listeners: #... - name: external4 port: 9095 type: nodeport tls: false configuration: preferredNodePortAddressType: InternalDNS #... Note TLS hostname verification is not currently supported when exposing Kafka clusters using node ports. cluster-ip Configures an internal listener to expose Kafka using a per-broker ClusterIP type Service . The listener does not use a headless service and its DNS names to route traffic to Kafka brokers. You can use this type of listener to expose a Kafka cluster when using the headless service is unsuitable. You might use it with a custom access mechanism, such as one that uses a specific Ingress controller or the OpenShift Gateway API. A new ClusterIP service is created for each Kafka broker pod. The service is assigned a ClusterIP address to serve as a Kafka bootstrap address with a per-broker port number. For example, you can configure the listener to expose a Kafka cluster over an Nginx Ingress Controller with TCP port configuration. Example cluster-ip listener configuration #... spec: kafka: #... listeners: - name: external-cluster-ip type: cluster-ip tls: false port: 9096 #... 6.4. 
tls The TLS property is required. By default, TLS encryption is not enabled. To enable it, set the tls property to true . For route and ingress type listeners, TLS encryption must be enabled. 6.5. authentication Authentication for the listener can be specified as: mTLS ( tls ) SCRAM-SHA-512 ( scram-sha-512 ) Token-based OAuth 2.0 ( oauth ) Custom ( custom ) 6.6. networkPolicyPeers Use networkPolicyPeers to configure network policies that restrict access to a listener at the network level. The following example shows a networkPolicyPeers configuration for a plain and a tls listener. In the following example: Only application pods matching the labels app: kafka-sasl-consumer and app: kafka-sasl-producer can connect to the plain listener. The application pods must be running in the same namespace as the Kafka broker. Only application pods running in namespaces matching the labels project: myproject and project: myproject2 can connect to the tls listener. The syntax of the networkPolicyPeers property is the same as the from property in NetworkPolicy resources. Example network policy configuration listeners: #... - name: plain port: 9092 type: internal tls: true authentication: type: scram-sha-512 networkPolicyPeers: - podSelector: matchLabels: app: kafka-sasl-consumer - podSelector: matchLabels: app: kafka-sasl-producer - name: tls port: 9093 type: internal tls: true authentication: type: tls networkPolicyPeers: - namespaceSelector: matchLabels: project: myproject - namespaceSelector: matchLabels: project: myproject2 # ... 6.7. GenericKafkaListener schema properties Property Description name Name of the listener. The name will be used to identify the listener and the related OpenShift objects. The name has to be unique within given a Kafka cluster. The name can consist of lowercase characters and numbers and be up to 11 characters long. string port Port number used by the listener inside Kafka. The port number has to be unique within a given Kafka cluster. Allowed port numbers are 9092 and higher with the exception of ports 9404 and 9999, which are already used for Prometheus and JMX. Depending on the listener type, the port number might not be the same as the port number that connects Kafka clients. integer type Type of the listener. Currently the supported types are internal , route , loadbalancer , nodeport and ingress . internal type exposes Kafka internally only within the OpenShift cluster. route type uses OpenShift Routes to expose Kafka. loadbalancer type uses LoadBalancer type services to expose Kafka. nodeport type uses NodePort type services to expose Kafka. ingress type uses OpenShift Nginx Ingress to expose Kafka with TLS passthrough. cluster-ip type uses a per-broker ClusterIP service. string (one of [ingress, internal, route, loadbalancer, cluster-ip, nodeport]) tls Enables TLS encryption on the listener. This is a required property. boolean authentication Authentication configuration for this listener. The type depends on the value of the authentication.type property within the given object, which must be one of [tls, scram-sha-512, oauth, custom]. KafkaListenerAuthenticationTls , KafkaListenerAuthenticationScramSha512 , KafkaListenerAuthenticationOAuth , KafkaListenerAuthenticationCustom configuration Additional listener configuration. GenericKafkaListenerConfiguration networkPolicyPeers List of peers which should be able to connect to this listener. Peers in this list are combined using a logical OR operation. 
If this field is empty or missing, all connections will be allowed for this listener. If this field is present and contains at least one item, the listener only allows the traffic which matches at least one item in this list. For more information, see the external documentation for networking.k8s.io/v1 networkpolicypeer . NetworkPolicyPeer array
[ "apiVersion: kafka.strimzi.io/v1beta2 kind: Kafka metadata: name: my-cluster spec: kafka: # listeners: - name: plain port: 9092 type: internal tls: false - name: tls port: 9093 type: internal tls: true authentication: type: tls - name: external1 port: 9094 type: route tls: true - name: external2 port: 9095 type: ingress tls: true authentication: type: tls configuration: bootstrap: host: bootstrap.myingress.com brokers: - broker: 0 host: broker-0.myingress.com - broker: 1 host: broker-1.myingress.com - broker: 2 host: broker-2.myingress.com #", "listeners: - name: plain port: 9092 type: internal tls: false", "get kafka <kafka_cluster_name> -o=jsonpath='{.status.listeners[?(@.name==\" <listener_name> \")].bootstrapServers}{\"\\n\"}'", "# spec: kafka: # listeners: # - name: plain port: 9092 type: internal tls: false - name: tls port: 9093 type: internal tls: true authentication: type: tls #", "# spec: kafka: # listeners: # - name: external1 port: 9094 type: route tls: true #", "# spec: kafka: # listeners: # - name: external2 port: 9095 type: ingress tls: true authentication: type: tls configuration: bootstrap: host: bootstrap.myingress.com brokers: - broker: 0 host: broker-0.myingress.com - broker: 1 host: broker-1.myingress.com - broker: 2 host: broker-2.myingress.com #", "# spec: kafka: # listeners: - name: external3 port: 9094 type: loadbalancer tls: true configuration: loadBalancerSourceRanges: - 10.0.0.0/8 - 88.208.76.87/32 #", "# spec: kafka: # listeners: # - name: external4 port: 9095 type: nodeport tls: false configuration: preferredNodePortAddressType: InternalDNS #", "# spec: kafka: # listeners: - name: external-cluster-ip type: cluster-ip tls: false port: 9096 #", "listeners: # - name: plain port: 9092 type: internal tls: true authentication: type: scram-sha-512 networkPolicyPeers: - podSelector: matchLabels: app: kafka-sasl-consumer - podSelector: matchLabels: app: kafka-sasl-producer - name: tls port: 9093 type: internal tls: true authentication: type: tls networkPolicyPeers: - namespaceSelector: matchLabels: project: myproject - namespaceSelector: matchLabels: project: myproject2" ]
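As a client-side verification sketch, assuming the example cluster name my-cluster and the listener names shown above, and that the oc CLI is logged in to the namespace running the cluster; the strimzi.io/cluster label and the <cluster>-kafka-bootstrap service name follow the operator's usual naming conventions and are stated here as assumptions.

# List the bootstrap and per-broker routes created for a route-type listener
oc get routes -l strimzi.io/cluster=my-cluster

# Retrieve the advertised bootstrap address of the external1 listener
oc get kafka my-cluster -o=jsonpath='{.status.listeners[?(@.name=="external1")].bootstrapServers}{"\n"}'

# Inspect the bootstrap service backing an internal listener (port 9092 in the example)
oc get service my-cluster-kafka-bootstrap -o wide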
https://docs.redhat.com/en/documentation/red_hat_streams_for_apache_kafka/2.5/html/amq_streams_api_reference/type-generickafkalistener-reference
8.121. libservicelog
8.121. libservicelog 8.121.1. RHBA-2014:1430 - libservicelog bug fix and enhancement update Updated libservicelog packages that fix several bugs and add various enhancements are now available for Red Hat Enterprise Linux 6. The libservicelog packages provide a library for logging service-related events to the servicelog database, and a number of command-line utilities for viewing the contents of the database. Note The libservicelog packages have been upgraded to upstream version 1.1.13, which provides a number of bug fixes and enhancements over the previous version, including support for SQL insert command input strings. (BZ# 739120 ) Users of libservicelog are advised to upgrade to these updated packages, which fix these bugs and add these enhancements.
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/6.6_technical_notes/libservicelog
Chapter 2. Learn more about OpenShift Container Platform
Chapter 2. Learn more about OpenShift Container Platform Use the following sections to find content to help you learn about and use OpenShift Container Platform. 2.1. Architect Learn about OpenShift Container Platform Plan an OpenShift Container Platform deployment Additional resources Enterprise Kubernetes with OpenShift Tested platforms OpenShift blog Architecture Security and compliance What's new in OpenShift Container Platform Networking OpenShift Container Platform life cycle Backup and restore 2.2. Cluster Administrator Learn about OpenShift Container Platform Deploy OpenShift Container Platform Manage OpenShift Container Platform Additional resources Enterprise Kubernetes with OpenShift Installing OpenShift Container Platform Using Insights to identify issues with your cluster Getting Support Architecture Post installation configuration Logging OpenShift Knowledgebase articles OpenShift Interactive Learning Portal Networking Monitoring OpenShift Container Platform Life Cycle Storage Backup and restore Updating a cluster 2.3. Application Site Reliability Engineer (App SRE) Learn about OpenShift Container Platform Deploy and manage applications Additional resources OpenShift Interactive Learning Portal Projects Getting Support Architecture Operators OpenShift Knowledgebase articles Logging OpenShift Container Platform Life Cycle Blogs about logging Monitoring 2.4. Developer Learn about application development in OpenShift Container Platform Deploy applications Getting started with OpenShift for developers (interactive tutorial) Creating applications Red Hat Developers site Builds Red Hat CodeReady Workspaces Operators Images Developer-focused CLI
null
https://docs.redhat.com/en/documentation/openshift_container_platform/4.7/html/about/learn_more_about_openshift
Chapter 6. UserOAuthAccessToken [oauth.openshift.io/v1]
Chapter 6. UserOAuthAccessToken [oauth.openshift.io/v1] Description UserOAuthAccessToken is a virtual resource to mirror OAuthAccessTokens to the user the access token was issued for Type object 6.1. Specification Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources authorizeToken string AuthorizeToken contains the token that authorized this token clientName string ClientName references the client that created this token. expiresIn integer ExpiresIn is the seconds from CreationTime before this token expires. inactivityTimeoutSeconds integer InactivityTimeoutSeconds is the value in seconds, from the CreationTimestamp, after which this token can no longer be used. The value is automatically incremented when the token is used. kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ObjectMeta metadata is the standard object's metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata redirectURI string RedirectURI is the redirection associated with the token. refreshToken string RefreshToken is the value by which this token can be renewed. Can be blank. scopes array (string) Scopes is an array of the requested scopes. userName string UserName is the user name associated with this token userUID string UserUID is the unique UID associated with this token 6.2. API endpoints The following API endpoints are available: /apis/oauth.openshift.io/v1/useroauthaccesstokens GET : list or watch objects of kind UserOAuthAccessToken /apis/oauth.openshift.io/v1/watch/useroauthaccesstokens GET : watch individual changes to a list of UserOAuthAccessToken. deprecated: use the 'watch' parameter with a list operation instead. /apis/oauth.openshift.io/v1/useroauthaccesstokens/{name} DELETE : delete an UserOAuthAccessToken GET : read the specified UserOAuthAccessToken /apis/oauth.openshift.io/v1/watch/useroauthaccesstokens/{name} GET : watch changes to an object of kind UserOAuthAccessToken. deprecated: use the 'watch' parameter with a list operation instead, filtered to a single item with the 'fieldSelector' parameter. 6.2.1. /apis/oauth.openshift.io/v1/useroauthaccesstokens Table 6.1. Global query parameters Parameter Type Description allowWatchBookmarks boolean allowWatchBookmarks requests watch events with type "BOOKMARK". Servers that do not implement bookmarks may ignore this flag and bookmarks are sent at the server's discretion. Clients should not assume bookmarks are returned at any specific interval, nor may they assume the server will send any BOOKMARK event during a session. If this is not a watch, this field is ignored. continue string The continue option should be set when retrieving more results from the server. Since this value is server defined, clients may only use the continue value from a query result with identical query parameters (except for the value of continue) and the server may reject a continue value it does not recognize. 
If the specified continue value is no longer valid whether due to expiration (generally five to fifteen minutes) or a configuration change on the server, the server will respond with a 410 ResourceExpired error together with a continue token. If the client needs a consistent list, it must restart their list without the continue field. Otherwise, the client may send another list request with the token received with the 410 error, the server will respond with a list starting from the key, but from the latest snapshot, which is inconsistent from the list results - objects that are created, modified, or deleted after the first list request will be included in the response, as long as their keys are after the " key". This field is not supported when watch is true. Clients may start a watch from the last resourceVersion value returned by the server and not miss any modifications. fieldSelector string A selector to restrict the list of returned objects by their fields. Defaults to everything. labelSelector string A selector to restrict the list of returned objects by their labels. Defaults to everything. limit integer limit is a maximum number of responses to return for a list call. If more items exist, the server will set the continue field on the list metadata to a value that can be used with the same initial query to retrieve the set of results. Setting a limit may return fewer than the requested amount of items (up to zero items) in the event all requested objects are filtered out and clients should only use the presence of the continue field to determine whether more results are available. Servers may choose not to support the limit argument and will return all of the available results. If limit is specified and the continue field is empty, clients may assume that no more results are available. This field is not supported if watch is true. The server guarantees that the objects returned when using continue will be identical to issuing a single list call without a limit - that is, no objects created, modified, or deleted after the first request is issued will be included in any subsequent continued requests. This is sometimes referred to as a consistent snapshot, and ensures that a client that is using limit to receive smaller chunks of a very large result can ensure they see all possible objects. If objects are updated during a chunked list the version of the object that was present at the time the first list result was calculated is returned. pretty string If 'true', then the output is pretty printed. resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset resourceVersionMatch string resourceVersionMatch determines how resourceVersion is applied to list calls. It is highly recommended that resourceVersionMatch be set for list calls where resourceVersion is set See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset sendInitialEvents boolean sendInitialEvents=true may be set together with watch=true . In that case, the watch stream will begin with synthetic events to produce the current state of objects in the collection. Once all such events have been sent, a synthetic "Bookmark" event will be sent. The bookmark will report the ResourceVersion (RV) corresponding to the set of objects, and be marked with "k8s.io/initial-events-end": "true" annotation. 
Afterwards, the watch stream will proceed as usual, sending watch events corresponding to changes (subsequent to the RV) to objects watched. When sendInitialEvents option is set, we require resourceVersionMatch option to also be set. The semantic of the watch request is as following: - resourceVersionMatch = NotOlderThan is interpreted as "data at least as new as the provided resourceVersion`" and the bookmark event is send when the state is synced to a `resourceVersion at least as fresh as the one provided by the ListOptions. If resourceVersion is unset, this is interpreted as "consistent read" and the bookmark event is send when the state is synced at least to the moment when request started being processed. - resourceVersionMatch set to any other value or unset Invalid error is returned. Defaults to true if resourceVersion="" or resourceVersion="0" (for backward compatibility reasons) and to false otherwise. timeoutSeconds integer Timeout for the list/watch call. This limits the duration of the call, regardless of any activity or inactivity. watch boolean Watch for changes to the described resources and return them as a stream of add, update, and remove notifications. Specify resourceVersion. HTTP method GET Description list or watch objects of kind UserOAuthAccessToken Table 6.2. HTTP responses HTTP code Reponse body 200 - OK UserOAuthAccessTokenList schema 401 - Unauthorized Empty 6.2.2. /apis/oauth.openshift.io/v1/watch/useroauthaccesstokens Table 6.3. Global query parameters Parameter Type Description allowWatchBookmarks boolean allowWatchBookmarks requests watch events with type "BOOKMARK". Servers that do not implement bookmarks may ignore this flag and bookmarks are sent at the server's discretion. Clients should not assume bookmarks are returned at any specific interval, nor may they assume the server will send any BOOKMARK event during a session. If this is not a watch, this field is ignored. continue string The continue option should be set when retrieving more results from the server. Since this value is server defined, clients may only use the continue value from a query result with identical query parameters (except for the value of continue) and the server may reject a continue value it does not recognize. If the specified continue value is no longer valid whether due to expiration (generally five to fifteen minutes) or a configuration change on the server, the server will respond with a 410 ResourceExpired error together with a continue token. If the client needs a consistent list, it must restart their list without the continue field. Otherwise, the client may send another list request with the token received with the 410 error, the server will respond with a list starting from the key, but from the latest snapshot, which is inconsistent from the list results - objects that are created, modified, or deleted after the first list request will be included in the response, as long as their keys are after the " key". This field is not supported when watch is true. Clients may start a watch from the last resourceVersion value returned by the server and not miss any modifications. fieldSelector string A selector to restrict the list of returned objects by their fields. Defaults to everything. labelSelector string A selector to restrict the list of returned objects by their labels. Defaults to everything. limit integer limit is a maximum number of responses to return for a list call. 
If more items exist, the server will set the continue field on the list metadata to a value that can be used with the same initial query to retrieve the set of results. Setting a limit may return fewer than the requested amount of items (up to zero items) in the event all requested objects are filtered out and clients should only use the presence of the continue field to determine whether more results are available. Servers may choose not to support the limit argument and will return all of the available results. If limit is specified and the continue field is empty, clients may assume that no more results are available. This field is not supported if watch is true. The server guarantees that the objects returned when using continue will be identical to issuing a single list call without a limit - that is, no objects created, modified, or deleted after the first request is issued will be included in any subsequent continued requests. This is sometimes referred to as a consistent snapshot, and ensures that a client that is using limit to receive smaller chunks of a very large result can ensure they see all possible objects. If objects are updated during a chunked list the version of the object that was present at the time the first list result was calculated is returned. pretty string If 'true', then the output is pretty printed. resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset resourceVersionMatch string resourceVersionMatch determines how resourceVersion is applied to list calls. It is highly recommended that resourceVersionMatch be set for list calls where resourceVersion is set See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset sendInitialEvents boolean sendInitialEvents=true may be set together with watch=true . In that case, the watch stream will begin with synthetic events to produce the current state of objects in the collection. Once all such events have been sent, a synthetic "Bookmark" event will be sent. The bookmark will report the ResourceVersion (RV) corresponding to the set of objects, and be marked with "k8s.io/initial-events-end": "true" annotation. Afterwards, the watch stream will proceed as usual, sending watch events corresponding to changes (subsequent to the RV) to objects watched. When sendInitialEvents option is set, we require resourceVersionMatch option to also be set. The semantic of the watch request is as following: - resourceVersionMatch = NotOlderThan is interpreted as "data at least as new as the provided resourceVersion`" and the bookmark event is send when the state is synced to a `resourceVersion at least as fresh as the one provided by the ListOptions. If resourceVersion is unset, this is interpreted as "consistent read" and the bookmark event is send when the state is synced at least to the moment when request started being processed. - resourceVersionMatch set to any other value or unset Invalid error is returned. Defaults to true if resourceVersion="" or resourceVersion="0" (for backward compatibility reasons) and to false otherwise. timeoutSeconds integer Timeout for the list/watch call. This limits the duration of the call, regardless of any activity or inactivity. watch boolean Watch for changes to the described resources and return them as a stream of add, update, and remove notifications. 
Specify resourceVersion. HTTP method GET Description watch individual changes to a list of UserOAuthAccessToken. deprecated: use the 'watch' parameter with a list operation instead. Table 6.4. HTTP responses HTTP code Reponse body 200 - OK WatchEvent schema 401 - Unauthorized Empty 6.2.3. /apis/oauth.openshift.io/v1/useroauthaccesstokens/{name} Table 6.5. Global path parameters Parameter Type Description name string name of the UserOAuthAccessToken Table 6.6. Global query parameters Parameter Type Description pretty string If 'true', then the output is pretty printed. HTTP method DELETE Description delete an UserOAuthAccessToken Table 6.7. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed gracePeriodSeconds integer The duration in seconds before the object should be deleted. Value must be non-negative integer. The value zero indicates delete immediately. If this value is nil, the default grace period for the specified type will be used. Defaults to a per object value if not specified. zero means delete immediately. orphanDependents boolean Deprecated: please use the PropagationPolicy, this field will be deprecated in 1.7. Should the dependent objects be orphaned. If true/false, the "orphan" finalizer will be added to/removed from the object's finalizers list. Either this field or PropagationPolicy may be set, but not both. propagationPolicy string Whether and how garbage collection will be performed. Either this field or OrphanDependents may be set, but not both. The default policy is decided by the existing finalizer set in the metadata.finalizers and the resource-specific default policy. Acceptable values are: 'Orphan' - orphan the dependents; 'Background' - allow the garbage collector to delete the dependents in the background; 'Foreground' - a cascading policy that deletes all dependents in the foreground. Table 6.8. Body parameters Parameter Type Description body DeleteOptions schema Table 6.9. HTTP responses HTTP code Reponse body 200 - OK Status schema 202 - Accepted Status schema 401 - Unauthorized Empty HTTP method GET Description read the specified UserOAuthAccessToken Table 6.10. HTTP responses HTTP code Reponse body 200 - OK UserOAuthAccessToken schema 401 - Unauthorized Empty 6.2.4. /apis/oauth.openshift.io/v1/watch/useroauthaccesstokens/{name} Table 6.11. Global path parameters Parameter Type Description name string name of the UserOAuthAccessToken Table 6.12. Global query parameters Parameter Type Description allowWatchBookmarks boolean allowWatchBookmarks requests watch events with type "BOOKMARK". Servers that do not implement bookmarks may ignore this flag and bookmarks are sent at the server's discretion. Clients should not assume bookmarks are returned at any specific interval, nor may they assume the server will send any BOOKMARK event during a session. If this is not a watch, this field is ignored. continue string The continue option should be set when retrieving more results from the server. Since this value is server defined, clients may only use the continue value from a query result with identical query parameters (except for the value of continue) and the server may reject a continue value it does not recognize. 
If the specified continue value is no longer valid whether due to expiration (generally five to fifteen minutes) or a configuration change on the server, the server will respond with a 410 ResourceExpired error together with a continue token. If the client needs a consistent list, it must restart their list without the continue field. Otherwise, the client may send another list request with the token received with the 410 error, the server will respond with a list starting from the key, but from the latest snapshot, which is inconsistent from the list results - objects that are created, modified, or deleted after the first list request will be included in the response, as long as their keys are after the " key". This field is not supported when watch is true. Clients may start a watch from the last resourceVersion value returned by the server and not miss any modifications. fieldSelector string A selector to restrict the list of returned objects by their fields. Defaults to everything. labelSelector string A selector to restrict the list of returned objects by their labels. Defaults to everything. limit integer limit is a maximum number of responses to return for a list call. If more items exist, the server will set the continue field on the list metadata to a value that can be used with the same initial query to retrieve the set of results. Setting a limit may return fewer than the requested amount of items (up to zero items) in the event all requested objects are filtered out and clients should only use the presence of the continue field to determine whether more results are available. Servers may choose not to support the limit argument and will return all of the available results. If limit is specified and the continue field is empty, clients may assume that no more results are available. This field is not supported if watch is true. The server guarantees that the objects returned when using continue will be identical to issuing a single list call without a limit - that is, no objects created, modified, or deleted after the first request is issued will be included in any subsequent continued requests. This is sometimes referred to as a consistent snapshot, and ensures that a client that is using limit to receive smaller chunks of a very large result can ensure they see all possible objects. If objects are updated during a chunked list the version of the object that was present at the time the first list result was calculated is returned. pretty string If 'true', then the output is pretty printed. resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset resourceVersionMatch string resourceVersionMatch determines how resourceVersion is applied to list calls. It is highly recommended that resourceVersionMatch be set for list calls where resourceVersion is set See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset sendInitialEvents boolean sendInitialEvents=true may be set together with watch=true . In that case, the watch stream will begin with synthetic events to produce the current state of objects in the collection. Once all such events have been sent, a synthetic "Bookmark" event will be sent. The bookmark will report the ResourceVersion (RV) corresponding to the set of objects, and be marked with "k8s.io/initial-events-end": "true" annotation. 
Afterwards, the watch stream will proceed as usual, sending watch events corresponding to changes (subsequent to the RV) to objects watched. When sendInitialEvents option is set, we require resourceVersionMatch option to also be set. The semantic of the watch request is as following: - resourceVersionMatch = NotOlderThan is interpreted as "data at least as new as the provided resourceVersion`" and the bookmark event is send when the state is synced to a `resourceVersion at least as fresh as the one provided by the ListOptions. If resourceVersion is unset, this is interpreted as "consistent read" and the bookmark event is send when the state is synced at least to the moment when request started being processed. - resourceVersionMatch set to any other value or unset Invalid error is returned. Defaults to true if resourceVersion="" or resourceVersion="0" (for backward compatibility reasons) and to false otherwise. timeoutSeconds integer Timeout for the list/watch call. This limits the duration of the call, regardless of any activity or inactivity. watch boolean Watch for changes to the described resources and return them as a stream of add, update, and remove notifications. Specify resourceVersion. HTTP method GET Description watch changes to an object of kind UserOAuthAccessToken. deprecated: use the 'watch' parameter with a list operation instead, filtered to a single item with the 'fieldSelector' parameter. Table 6.13. HTTP responses HTTP code Reponse body 200 - OK WatchEvent schema 401 - Unauthorized Empty
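The collection endpoint in Section 6.2.1 can be exercised directly with the credentials of the logged-in user. The following is a minimal sketch rather than part of the API reference: it assumes an active oc session, reuses its bearer token, and uses the -k flag to skip certificate verification for brevity only.

# Illustrative only: list UserOAuthAccessToken objects visible to the current user.
API_SERVER=$(oc whoami --show-server)
TOKEN=$(oc whoami -t)

curl -sk \
  -H "Authorization: Bearer ${TOKEN}" \
  -H "Accept: application/json" \
  "${API_SERVER}/apis/oauth.openshift.io/v1/useroauthaccesstokens"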
null
https://docs.redhat.com/en/documentation/openshift_container_platform/4.14/html/oauth_apis/useroauthaccesstoken-oauth-openshift-io-v1
Chapter 8. HTTP configuration
Chapter 8. HTTP configuration 8.1. Global HTTPS redirection HTTPS redirection provides redirection for incoming HTTP requests. These redirected HTTP requests are encrypted. You can enable HTTPS redirection for all services on the cluster by configuring the httpProtocol spec for the KnativeServing custom resource (CR). 8.1.1. HTTPS redirection global settings Example KnativeServing CR that enables HTTPS redirection apiVersion: operator.knative.dev/v1beta1 kind: KnativeServing metadata: name: knative-serving spec: config: network: httpProtocol: "redirected" ... 8.2. HTTPS redirection per service You can enable or disable HTTPS redirection for a service by configuring the networking.knative.dev/http-option annotation. 8.2.1. Redirecting HTTPS for a service The following example shows how you can use this annotation in a Knative Service YAML object: apiVersion: serving.knative.dev/v1 kind: Service metadata: name: example namespace: default annotations: networking.knative.dev/http-protocol: "redirected" spec: ... 8.3. Full duplex support for HTTP/1 You can enable HTTP/1 full duplex support for a service by configuring the features.knative.dev/http-full-duplex annotation. Note Verify your HTTP clients before enabling this feature, because older clients might not support HTTP/1 full duplex. The following example shows how you can use this annotation in a Knative Service YAML object at the revision spec level: Example Knative Service that provides full duplex support for HTTP/1 apiVersion: serving.knative.dev/v1 kind: Service metadata: name: example-service namespace: default spec: template: spec: annotations: features.knative.dev/http-full-duplex: "Enabled" ...
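The global setting takes effect after the KnativeServing CR is updated. A minimal verification sketch, assuming the CR above is saved as knative-serving.yaml, that the CR lives in the knative-serving namespace (the usual location for OpenShift Serverless), and that <service-route-host> is replaced with the host of an existing Knative service route:

# Apply the updated KnativeServing CR.
oc apply -f knative-serving.yaml -n knative-serving

# A plain HTTP request to a Knative service route should now be answered
# with a redirect status code rather than the service response.
curl -sI http://<service-route-host>/ | head -n 1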
[ "apiVersion: operator.knative.dev/v1beta1 kind: KnativeServing metadata: name: knative-serving spec: config: network: httpProtocol: \"redirected\"", "apiVersion: serving.knative.dev/v1 kind: Service metadata: name: example namespace: default annotations: networking.knative.dev/http-protocol: \"redirected\" spec:", "apiVersion: serving.knative.dev/v1 kind: Service metadata: name: example-service namespace: default spec: template: spec: annotations: features.knative.dev/http-full-duplex: \"Enabled\"" ]
https://docs.redhat.com/en/documentation/red_hat_openshift_serverless/1.33/html/serving/http-configuration
Chapter 13. AWS S3 Source
Chapter 13. AWS S3 Source Receive data from AWS S3. 13.1. Configuration Options The following table summarizes the configuration options available for the aws-s3-source Kamelet: Property Name Description Type Default Example accessKey * Access Key The access key obtained from AWS string bucketNameOrArn * Bucket Name The S3 Bucket name or ARN string region * AWS Region The AWS region to connect to string "eu-west-1" secretKey * Secret Key The secret key obtained from AWS string autoCreateBucket Autocreate Bucket Setting the autocreation of the S3 bucket bucketName. boolean false deleteAfterRead Auto-delete Objects Delete objects after consuming them boolean true Note Fields marked with an asterisk (*) are mandatory. 13.2. Dependencies At runtime, the aws-s3-source Kamelet relies upon the presence of the following dependencies: camel:kamelet camel:aws2-s3 13.3. Usage This section describes how you can use the aws-s3-source . 13.3.1. Knative Source You can use the aws-s3-source Kamelet as a Knative source by binding it to a Knative object. aws-s3-source-binding.yaml apiVersion: camel.apache.org/v1alpha1 kind: KameletBinding metadata: name: aws-s3-source-binding spec: source: ref: kind: Kamelet apiVersion: camel.apache.org/v1alpha1 name: aws-s3-source properties: accessKey: "The Access Key" bucketNameOrArn: "The Bucket Name" region: "eu-west-1" secretKey: "The Secret Key" sink: ref: kind: Channel apiVersion: messaging.knative.dev/v1 name: mychannel 13.3.1.1. Prerequisite Make sure you have "Red Hat Integration - Camel K" installed into the OpenShift cluster you're connected to. 13.3.1.2. Procedure for using the cluster CLI Save the aws-s3-source-binding.yaml file to your local drive, and then edit it as needed for your configuration. Run the source by using the following command: oc apply -f aws-s3-source-binding.yaml 13.3.1.3. Procedure for using the Kamel CLI Configure and run the source by using the following command: kamel bind aws-s3-source -p "source.accessKey=The Access Key" -p "source.bucketNameOrArn=The Bucket Name" -p "source.region=eu-west-1" -p "source.secretKey=The Secret Key" channel:mychannel This command creates the KameletBinding in the current namespace on the cluster. 13.3.2. Kafka Source You can use the aws-s3-source Kamelet as a Kafka source by binding it to a Kafka topic. aws-s3-source-binding.yaml apiVersion: camel.apache.org/v1alpha1 kind: KameletBinding metadata: name: aws-s3-source-binding spec: source: ref: kind: Kamelet apiVersion: camel.apache.org/v1alpha1 name: aws-s3-source properties: accessKey: "The Access Key" bucketNameOrArn: "The Bucket Name" region: "eu-west-1" secretKey: "The Secret Key" sink: ref: kind: KafkaTopic apiVersion: kafka.strimzi.io/v1beta1 name: my-topic 13.3.2.1. Prerequisites Ensure that you've installed the AMQ Streams operator in your OpenShift cluster and created a topic named my-topic in the current namespace. Make also sure you have "Red Hat Integration - Camel K" installed into the OpenShift cluster you're connected to. 13.3.2.2. Procedure for using the cluster CLI Save the aws-s3-source-binding.yaml file to your local drive, and then edit it as needed for your configuration. Run the source by using the following command: oc apply -f aws-s3-source-binding.yaml 13.3.2.3. 
Procedure for using the Kamel CLI Configure and run the source by using the following command: kamel bind aws-s3-source -p "source.accessKey=The Access Key" -p "source.bucketNameOrArn=The Bucket Name" -p "source.region=eu-west-1" -p "source.secretKey=The Secret Key" kafka.strimzi.io/v1beta1:KafkaTopic:my-topic This command creates the KameletBinding in the current namespace on the cluster. 13.4. Kamelet source file https://github.com/openshift-integration/kamelet-catalog/aws-s3-source.kamelet.yaml
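The kamel bind examples above pass only the mandatory properties. The optional properties from Section 13.1 can be supplied in the same way; the following is an illustrative variant only, and the property values shown are examples rather than recommendations.

# Sketch: bind the source to a Knative channel with the optional properties set.
kamel bind aws-s3-source \
  -p "source.accessKey=The Access Key" \
  -p "source.bucketNameOrArn=The Bucket Name" \
  -p "source.region=eu-west-1" \
  -p "source.secretKey=The Secret Key" \
  -p "source.autoCreateBucket=true" \
  -p "source.deleteAfterRead=false" \
  channel:mychannel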
[ "apiVersion: camel.apache.org/v1alpha1 kind: KameletBinding metadata: name: aws-s3-source-binding spec: source: ref: kind: Kamelet apiVersion: camel.apache.org/v1alpha1 name: aws-s3-source properties: accessKey: \"The Access Key\" bucketNameOrArn: \"The Bucket Name\" region: \"eu-west-1\" secretKey: \"The Secret Key\" sink: ref: kind: Channel apiVersion: messaging.knative.dev/v1 name: mychannel", "apply -f aws-s3-source-binding.yaml", "kamel bind aws-s3-source -p \"source.accessKey=The Access Key\" -p \"source.bucketNameOrArn=The Bucket Name\" -p \"source.region=eu-west-1\" -p \"source.secretKey=The Secret Key\" channel:mychannel", "apiVersion: camel.apache.org/v1alpha1 kind: KameletBinding metadata: name: aws-s3-source-binding spec: source: ref: kind: Kamelet apiVersion: camel.apache.org/v1alpha1 name: aws-s3-source properties: accessKey: \"The Access Key\" bucketNameOrArn: \"The Bucket Name\" region: \"eu-west-1\" secretKey: \"The Secret Key\" sink: ref: kind: KafkaTopic apiVersion: kafka.strimzi.io/v1beta1 name: my-topic", "apply -f aws-s3-source-binding.yaml", "kamel bind aws-s3-source -p \"source.accessKey=The Access Key\" -p \"source.bucketNameOrArn=The Bucket Name\" -p \"source.region=eu-west-1\" -p \"source.secretKey=The Secret Key\" kafka.strimzi.io/v1beta1:KafkaTopic:my-topic" ]
https://docs.redhat.com/en/documentation/red_hat_build_of_apache_camel_k/1.10.7/html/kamelets_reference/aws-s3-source
Appendix A. Example of Setting Up Apache HTTP Server
Appendix A. Example of Setting Up Apache HTTP Server This appendix provides an example of setting up a highly available Apache HTTP Server on a Red Hat Cluster. The example describes how to set up a service to fail over an Apache HTTP Server. Variables in the example apply to this example only; they are provided to assist in setting up a service that suits your requirements. Note This example uses the Cluster Configuration Tool ( system-config-cluster ). You can use comparable Conga functions to make an Apache HTTP Server highly available on a Red Hat Cluster. A.1. Apache HTTP Server Setup Overview First, configure Apache HTTP Server on all nodes in the cluster. If using a failover domain, assign the service to all cluster nodes configured to run the Apache HTTP Server. Refer to Section 5.6, "Configuring a Failover Domain" for instructions. The cluster software ensures that only one cluster system runs the Apache HTTP Server at a time. The example configuration consists of installing the httpd RPM package on all cluster nodes (or on nodes in the failover domain, if used) and configuring a shared GFS resource for the Web content. When installing the Apache HTTP Server on the cluster systems, run the following command to ensure that the cluster nodes do not automatically start the service when the system boots: Rather than having the system init scripts spawn the httpd daemon, the cluster infrastructure initializes the service on the active cluster node. This ensures that the corresponding IP address and file system mounts are active on only one cluster node at a time. When adding an httpd service, a floating IP address must be assigned to the service so that the IP address will transfer from one cluster node to another in the event of failover or service relocation. The cluster infrastructure binds this IP address to the network interface on the cluster system that is currently running the Apache HTTP Server. This IP address ensures that the cluster node running httpd is transparent to the clients accessing the service. The file systems that contain the Web content cannot be automatically mounted on the shared storage resource when the cluster nodes boot. Instead, the cluster software must mount and unmount the file system as the httpd service is started and stopped. This prevents the cluster systems from accessing the same data simultaneously, which may result in data corruption. Therefore, do not include the file systems in the /etc/fstab file.
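The chkconfig command referenced above is reproduced in the listing that follows this overview. As a small sketch of the intended result, run it on every cluster node (or every node in the failover domain) and optionally confirm that init no longer manages the service; the check is illustrative and its exact output message varies by release.

# Remove httpd from the init run levels so that the cluster software,
# not init, starts the service on the active node.
chkconfig --del httpd

# Optional check: httpd should no longer be referenced in any run level.
chkconfig --list httpd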
[ "chkconfig --del httpd" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/4/html/cluster_administration/ap-httpd-service-ca
Chapter 21. Configuring emails in task notification
Chapter 21. Configuring emails in task notification Previously, you could send notifications only to users or groups of users in Business Central. Now you can also add email addresses directly. Prerequisites You have created a project in Business Central. Procedure Create a business process. For more information about creating a business process in Business Central, see Chapter 5, Creating a business process in Business Central . Create a user task. For more information about creating a user task in Business Central, see Section 5.4, "Creating user tasks" . In the upper-right corner of the screen, click the Properties icon. Expand Implementation/Execution and click Notifications to open the Notifications window. Click Add . In the Notifications window, enter an email address in the To: email(s) field to set the recipients of the task notification emails. You can add multiple email addresses separated by commas. Enter the subject and body of the email. Click Ok . You can see the added email addresses in the To: email(s) column in the Notifications window. Click Ok .
null
https://docs.redhat.com/en/documentation/red_hat_process_automation_manager/7.13/html/developing_process_services_in_red_hat_process_automation_manager/configuring-emails-in-task-notification-proc
Chapter 4. API requests in different languages
Chapter 4. API requests in different languages This chapter outlines sending API requests to Red Hat Satellite with curl, Ruby, and Python and provides examples. 4.1. API requests with curl This section outlines how to use curl with the Satellite API to perform various tasks. Red Hat Satellite requires the use of HTTPS, and by default a certificate for host identification. If you have not added the Satellite Server certificate as described in Section 3.1, "SSL authentication overview" , then you can use the --insecure option to bypass certificate checks. For user authentication, you can use the --user option to provide Satellite user credentials in the form --user username:password or, if you do not include the password, the command prompts you to enter it. To reduce security risks, do not include the password as part of the command, because it then becomes part of your shell history. Examples in this section include the password only for the sake of simplicity. Be aware that if you use the --silent option, curl does not display a progress meter or any error messages. Examples in this chapter use the Python json.tool module to format the output. 4.1.1. Passing JSON data to the API request You can pass data to Satellite Server with the API request. The data must be in JSON format. When specifying JSON data with the --data option, you must set the following HTTP headers with the --header option: Use one of the following options to include data with the --data option: The quoted JSON formatted data enclosed in curly braces {} . When passing a value for a JSON type parameter, you must escape quotation marks " with backslashes \ . For example, within curly braces, you must format "Example JSON Variable" as \"Example JSON Variable\" : The unquoted JSON formatted data enclosed in a file and specified by the @ sign and the filename. For example: Using external files for JSON formatted data has the following advantages: You can use your favorite text editor. You can use syntax checker to find and avoid mistakes. You can use tools to check the validity of JSON data or to reformat it. Validating a JSON file Use the json_verify tool to check the validity of a JSON file: 4.1.2. Retrieving a list of resources This section outlines how to use curl with the Satellite 6 API to request information from your Satellite deployment. These examples include both requests and responses. Expect different results for each deployment. Listing users This example is a basic request that returns a list of Satellite resources, Satellite users in this case. Such requests return a list of data wrapped in metadata, while other request types only return the actual object. Example request: Example response: 4.1.3. Creating and modifying resources This section outlines how to use curl with the Satellite 6 API to manipulate resources on the Satellite Server. These API calls require that you pass data in json format with the API call. For more information, see Section 4.1.1, "Passing JSON data to the API request" . Creating a user This example creates a user using --data option to provide required information. Example request: Modifying a user This example modifies first name and login of the test_user that was created in Creating a user . Example request: 4.2. API requests with Ruby This section outlines how to use Ruby with the Satellite API to perform various tasks. Important These are example scripts and commands. 
Ensure you review these scripts carefully before use, and replace any variables, user names, passwords, and other information to suit your own deployment. 4.2.1. Creating objects using Ruby This script connects to the Red Hat Satellite 6 API and creates an organization, and then creates three environments in the organization. If the organization already exists, the script uses that organization. If any of the environments already exist in the organization, the script raises an error and quits. #!/usr/bin/ruby require 'rest-client' require 'json' url = 'https://satellite.example.com/api/v2/' katello_url = "#{url}/katello/api/v2/" USDusername = 'admin' USDpassword = 'changeme' org_name = "MyOrg" environments = [ "Development", "Testing", "Production" ] # Performs a GET using the passed URL location def get_json(location) response = RestClient::Request.new( :method => :get, :url => location, :user => USDusername, :password => USDpassword, :headers => { :accept => :json, :content_type => :json } ).execute JSON.parse(response.to_str) end # Performs a POST and passes the data to the URL location def post_json(location, json_data) response = RestClient::Request.new( :method => :post, :url => location, :user => USDusername, :password => USDpassword, :headers => { :accept => :json, :content_type => :json}, :payload => json_data ).execute JSON.parse(response.to_str) end # Creates a hash with ids mapping to names for an array of records def id_name_map(records) records.inject({}) do |map, record| map.update(record['id'] => record['name']) end end # Get list of existing organizations orgs = get_json("#{katello_url}/organizations") org_list = id_name_map(orgs['results']) if !org_list.has_value?(org_name) # If our organization is not found, create it puts "Creating organization: \t#{org_name}" org_id = post_json("#{katello_url}/organizations", JSON.generate({"name"=> org_name}))["id"] else # Our organization exists, so let's grab it org_id = org_list.key(org_name) puts "Organization \"#{org_name}\" exists" end # Get list of organization's lifecycle environments envs = get_json("#{katello_url}/organizations/#{org_id}/environments") env_list = id_name_map(envs['results']) prior_env_id = env_list.key("Library") # Exit the script if at least one life cycle environment already exists environments.each do |e| if env_list.has_value?(e) puts "ERROR: One of the Environments is not unique to organization" exit end end # Create life cycle environments environments.each do |environment| puts "Creating environment: \t#{environment}" prior_env_id = post_json("#{katello_url}/organizations/#{org_id}/environments", JSON.generate({"name" => environment, "organization_id" => org_id, "prior_id" => prior_env_id}))["id"] end 4.2.2. Using apipie bindings with Ruby Apipie bindings are the Ruby bindings for apipie documented API calls. They fetch and cache the API definition from Satellite and then generate API calls on demand. This example creates an organization, and then creates three environments in the organization. If the organization already exists, the script uses that organization. If any of the environments already exist in the organization, the script raises an error and quits. 
#!/usr/bin/tfm-ruby require 'apipie-bindings' org_name = "MyOrg" environments = [ "Development", "Testing", "Production" ] # Create an instance of apipie bindings @api = ApipieBindings::API.new({ :uri => 'https://satellite.example.com/', :username => 'admin', :password => 'changeme', :api_version => 2 }) # Performs an API call with default options def call_api(resource_name, action_name, params = {}) http_headers = {} apipie_options = { :skip_validation => true } @api.resource(resource_name).call(action_name, params, http_headers, apipie_options) end # Creates a hash with IDs mapping to names for an array of records def id_name_map(records) records.inject({}) do |map, record| map.update(record['id'] => record['name']) end end # Get list of existing organizations orgs = call_api(:organizations, :index) org_list = id_name_map(orgs['results']) if !org_list.has_value?(org_name) # If our organization is not found, create it puts "Creating organization: \t#{org_name}" org_id = call_api(:organizations, :create, {'organization' => { :name => org_name }})['id'] else # Our organization exists, so let's grab it org_id = org_list.key(org_name) puts "Organization \"#{org_name}\" exists" end # Get list of organization's life cycle environments envs = call_api(:lifecycle_environments, :index, {'organization_id' => org_id}) env_list = id_name_map(envs['results']) prior_env_id = env_list.key("Library") # Exit the script if at least one life cycle environment already exists environments.each do |e| if env_list.has_value?(e) puts "ERROR: One of the Environments is not unique to organization" exit end end # Create life cycle environments environments.each do |environment| puts "Creating environment: \t#{environment}" prior_env_id = call_api(:lifecycle_environments, :create, {"name" => environment, "organization_id" => org_id, "prior_id" => prior_env_id })['id'] end 4.3. API requests with Python This section outlines how to use Python with the Satellite API to perform various tasks. Important These are example scripts and commands. Ensure you review these scripts carefully before use, and replace any variables, user names, passwords, and other information to suit your own deployment. Example scripts in this section do not use SSL verification for interacting with the REST API. 4.3.1. Creating objects using Python This script connects to the Red Hat Satellite 6 API and creates an organization, and then creates three environments in the organization. If the organization already exists, the script uses that organization. If any of the environments already exist in the organization, the script raises an error and quits. Python 2 example #!/usr/bin/python import json import sys try: import requests except ImportError: print "Please install the python-requests module." 
sys.exit(-1) # URL to your Satellite 6 server URL = "https://satellite.example.com" # URL for the API to your deployed Satellite 6 server SAT_API = "%s/katello/api/v2/" % URL # Katello-specific API KATELLO_API = "%s/katello/api/" % URL POST_HEADERS = {'content-type': 'application/json'} # Default credentials to login to Satellite 6 USERNAME = "admin" PASSWORD = "changeme" # Ignore SSL for now SSL_VERIFY = False # Name of the organization to be either created or used ORG_NAME = "MyOrg" # Name for life cycle environments to be either created or used ENVIRONMENTS = ["Development", "Testing", "Production"] def get_json(location): """ Performs a GET using the passed URL location """ r = requests.get(location, auth=(USERNAME, PASSWORD), verify=SSL_VERIFY) return r.json() def post_json(location, json_data): """ Performs a POST and passes the data to the URL location """ result = requests.post( location, data=json_data, auth=(USERNAME, PASSWORD), verify=SSL_VERIFY, headers=POST_HEADERS) return result.json() def main(): """ Main routine that creates or re-uses an organization and life cycle environments. If life cycle environments already exist, exit out. """ # Check if our organization already exists org = get_json(SAT_API + "organizations/" + ORG_NAME) # If our organization is not found, create it if org.get('error', None): org_id = post_json( SAT_API + "organizations/", json.dumps({"name": ORG_NAME}))["id"] print "Creating organization: \t" + ORG_NAME else: # Our organization exists, so let's grab it org_id = org['id'] print "Organization '%s' exists." % ORG_NAME # Now, let's fetch all available life cycle environments for this org... envs = get_json( SAT_API + "organizations/" + str(org_id) + "/environments/") # ... and add them to a dictionary, with respective 'Prior' environment prior_env_id = 0 env_list = {} for env in envs['results']: env_list[env['id']] = env['name'] prior_env_id = env['id'] if env['name'] == "Library" else prior_env_id # Exit the script if at least one life cycle environment already exists if all(environment in env_list.values() for environment in ENVIRONMENTS): print "ERROR: One of the Environments is not unique to organization" sys.exit(-1) # Create life cycle environments for environment in ENVIRONMENTS: new_env_id = post_json( SAT_API + "organizations/" + str(org_id) + "/environments/", json.dumps( { "name": environment, "organization_id": org_id, "prior": prior_env_id} ))["id"] print "Creating environment: \t" + environment prior_env_id = new_env_id if __name__ == "__main__": main() 4.3.2. Requesting information from the API using Python This is an example script that uses Python for various API requests. Python 2 example #!/usr/bin/python import json import sys try: import requests except ImportError: print "Please install the python-requests module." 
sys.exit(-1) SAT_API = 'https://satellite.example.com/api/v2/' USERNAME = "admin" PASSWORD = "password" SSL_VERIFY = False # Ignore SSL for now def get_json(url): # Performs a GET using the passed URL location r = requests.get(url, auth=(USERNAME, PASSWORD), verify=SSL_VERIFY) return r.json() def get_results(url): jsn = get_json(url) if jsn.get('error'): print "Error: " + jsn['error']['message'] else: if jsn.get('results'): return jsn['results'] elif 'results' not in jsn: return jsn else: print "No results found" return None def display_all_results(url): results = get_results(url) if results: print json.dumps(results, indent=4, sort_keys=True) def display_info_for_hosts(url): hosts = get_results(url) if hosts: for host in hosts: print "ID: %-10d Name: %-30s IP: %-20s OS: %-30s" % (host['id'], host['name'], host['ip'], host['operatingsystem_name']) def main(): host = 'satellite.example.com' print "Displaying all info for host %s ..." % host display_all_results(SAT_API + 'hosts/' + host) print "Displaying all facts for host %s ..." % host display_all_results(SAT_API + 'hosts/%s/facts' % host) host_pattern = 'example' print "Displaying basic info for hosts matching pattern '%s'..." % host_pattern display_info_for_hosts(SAT_API + 'hosts?search=' + host_pattern) environment = 'production' print "Displaying basic info for hosts in environment %s..." % environment display_info_for_hosts(SAT_API + 'hosts?search=environment=' + environment) model = 'RHEV Hypervisor' print "Displaying basic info for hosts with model name %s..." % model display_info_for_hosts(SAT_API + 'hosts?search=model="' + model + '"') if __name__ == "__main__": main() Python 3 example #!/usr/bin/env python3 import json import sys try: import requests except ImportError: print("Please install the python-requests module.") sys.exit(-1) SAT = "satellite.example.com" # URL for the API to your deployed Satellite 6 server SAT_API = f"https://{SAT}/api/" KATELLO_API = f"https://{SAT}/katello/api/v2/" POST_HEADERS = {'content-type': 'application/json'} # Default credentials to login to Satellite 6 USERNAME = "admin" PASSWORD = "password" # Ignore SSL for now SSL_VERIFY = False #SSL_VERIFY = "./path/to/CA-certificate.crt" # Put the path to your CA certificate here to allow SSL_VERIFY def get_json(url): # Performs a GET using the passed URL location r = requests.get(url, auth=(USERNAME, PASSWORD), verify=SSL_VERIFY) return r.json() def get_results(url): jsn = get_json(url) if jsn.get('error'): print("Error: " + jsn['error']['message']) else: if jsn.get('results'): return jsn['results'] elif 'results' not in jsn: return jsn else: print("No results found") return None def display_all_results(url): results = get_results(url) if results: print(json.dumps(results, indent=4, sort_keys=True)) def display_info_for_hosts(url): hosts = get_results(url) if hosts: print(f"{'ID':10}{'Name':40}{'IP':30}{'Operating System':30}") for host in hosts: print(f"{str(host['id']):10}{host['name']:40}{str(host['ip']):30}{str(host['operatingsystem_name']):30}") def display_info_for_subs(url): subs = get_results(url) if subs: print(f"{'ID':10}{'Name':90}{'Start Date':30}") for sub in subs: print(f"{str(sub['id']):10}{sub['name']:90}{str(sub['start_date']):30}") def main(): host = SAT print(f"Displaying all info for host {host} ...") display_all_results(SAT_API + 'hosts/' + host) print(f"Displaying all facts for host {host} ...") display_all_results(SAT_API + f'hosts/{host}/facts') host_pattern = 'example' print(f"Displaying basic info for hosts matching pattern 
'{host_pattern}'...") display_info_for_hosts(SAT_API + 'hosts?per_page=1&search=name~' + host_pattern) print(f"Displaying basic info for subscriptions") display_info_for_subs(KATELLO_API + 'subscriptions') environment = 'production' print(f"Displaying basic info for hosts in environment {environment}...") display_info_for_hosts(SAT_API + 'hosts?search=environment=' + environment) if __name__ == "__main__": main()
[ "--header \"Accept:application/json\" --header \"Content-Type:application/json\"", "--data {\"id\":44, \"smart_class_parameter\":{\"override\":\"true\", \"parameter_type\":\"json\", \"default_value\":\"{\\\"GRUB_CMDLINE_LINUX\\\": {\\\"audit\\\":\\\"1\\\",\\\"crashkernel\\\":\\\"true\\\"}}\"}}", "--data @ file .json", "json_verify < test_file .json", "curl --request GET --insecure --user sat_username:sat_password https:// satellite.example.com /api/users | python -m json.tool", "{ \"page\": 1, \"per_page\": 20, \"results\": [ { \"admin\": false, \"auth_source_id\": 1, \"auth_source_name\": \"Internal\", \"created_at\": \"2018-09-21 08:59:22 UTC\", \"default_location\": null, \"default_organization\": null, \"description\": \"\", \"effective_admin\": false, \"firstname\": \"\", \"id\": 5, \"last_login_on\": \"2018-09-21 09:03:25 UTC\", \"lastname\": \"\", \"locale\": null, \"locations\": [], \"login\": \"test\", \"mail\": \"[email protected]\", \"organizations\": [ { \"id\": 1, \"name\": \"Default Organization\" } ], \"ssh_keys\": [], \"timezone\": null, \"updated_at\": \"2018-09-21 09:04:45 UTC\" }, { \"admin\": true, \"auth_source_id\": 1, \"auth_source_name\": \"Internal\", \"created_at\": \"2018-09-20 07:09:41 UTC\", \"default_location\": null, \"default_organization\": { \"description\": null, \"id\": 1, \"name\": \"Default Organization\", \"title\": \"Default Organization\" }, \"description\": \"\", \"effective_admin\": true, \"firstname\": \"Admin\", \"id\": 4, \"last_login_on\": \"2018-12-07 07:31:09 UTC\", \"lastname\": \"User\", \"locale\": null, \"locations\": [ { \"id\": 2, \"name\": \"Default Location\" } ], \"login\": \"admin\", \"mail\": \"[email protected]\", \"organizations\": [ { \"id\": 1, \"name\": \"Default Organization\" } ], \"ssh_keys\": [], \"timezone\": null, \"updated_at\": \"2018-11-14 08:19:46 UTC\" } ], \"search\": null, \"sort\": { \"by\": null, \"order\": null }, \"subtotal\": 2, \"total\": 2 }", "curl --header \"Accept:application/json\" --header \"Content-Type:application/json\" --request POST --user sat_username:sat_password --insecure --data \"{\\\"firstname\\\":\\\" Test Name \\\",\\\"mail\\\":\\\" [email protected] \\\",\\\"login\\\":\\\" test_user \\\",\\\"password\\\":\\\" password123 \\\",\\\"auth_source_id\\\": 1 }\" https:// satellite.example.com /api/users | python -m json.tool", "curl --header \"Accept:application/json\" --header \"Content-Type:application/json\" --request PUT --user sat_username:sat_password --insecure --data \"{\\\"firstname\\\":\\\" New Test Name \\\",\\\"mail\\\":\\\" [email protected] \\\",\\\"login\\\":\\\" new_test_user \\\",\\\"password\\\":\\\" password123 \\\",\\\"auth_source_id\\\": 1 }\" https:// satellite.example.com /api/users/ 8 | python -m json.tool", "#!/usr/bin/ruby require 'rest-client' require 'json' url = 'https://satellite.example.com/api/v2/' katello_url = \"#{url}/katello/api/v2/\" USDusername = 'admin' USDpassword = 'changeme' org_name = \"MyOrg\" environments = [ \"Development\", \"Testing\", \"Production\" ] Performs a GET using the passed URL location def get_json(location) response = RestClient::Request.new( :method => :get, :url => location, :user => USDusername, :password => USDpassword, :headers => { :accept => :json, :content_type => :json } ).execute JSON.parse(response.to_str) end Performs a POST and passes the data to the URL location def post_json(location, json_data) response = RestClient::Request.new( :method => :post, :url => location, :user => USDusername, :password => USDpassword, 
:headers => { :accept => :json, :content_type => :json}, :payload => json_data ).execute JSON.parse(response.to_str) end Creates a hash with ids mapping to names for an array of records def id_name_map(records) records.inject({}) do |map, record| map.update(record['id'] => record['name']) end end Get list of existing organizations orgs = get_json(\"#{katello_url}/organizations\") org_list = id_name_map(orgs['results']) if !org_list.has_value?(org_name) # If our organization is not found, create it puts \"Creating organization: \\t#{org_name}\" org_id = post_json(\"#{katello_url}/organizations\", JSON.generate({\"name\"=> org_name}))[\"id\"] else # Our organization exists, so let's grab it org_id = org_list.key(org_name) puts \"Organization \\\"#{org_name}\\\" exists\" end Get list of organization's lifecycle environments envs = get_json(\"#{katello_url}/organizations/#{org_id}/environments\") env_list = id_name_map(envs['results']) prior_env_id = env_list.key(\"Library\") Exit the script if at least one life cycle environment already exists environments.each do |e| if env_list.has_value?(e) puts \"ERROR: One of the Environments is not unique to organization\" exit end end # Create life cycle environments environments.each do |environment| puts \"Creating environment: \\t#{environment}\" prior_env_id = post_json(\"#{katello_url}/organizations/#{org_id}/environments\", JSON.generate({\"name\" => environment, \"organization_id\" => org_id, \"prior_id\" => prior_env_id}))[\"id\"] end", "#!/usr/bin/tfm-ruby require 'apipie-bindings' org_name = \"MyOrg\" environments = [ \"Development\", \"Testing\", \"Production\" ] Create an instance of apipie bindings @api = ApipieBindings::API.new({ :uri => 'https://satellite.example.com/', :username => 'admin', :password => 'changeme', :api_version => 2 }) Performs an API call with default options def call_api(resource_name, action_name, params = {}) http_headers = {} apipie_options = { :skip_validation => true } @api.resource(resource_name).call(action_name, params, http_headers, apipie_options) end Creates a hash with IDs mapping to names for an array of records def id_name_map(records) records.inject({}) do |map, record| map.update(record['id'] => record['name']) end end Get list of existing organizations orgs = call_api(:organizations, :index) org_list = id_name_map(orgs['results']) if !org_list.has_value?(org_name) # If our organization is not found, create it puts \"Creating organization: \\t#{org_name}\" org_id = call_api(:organizations, :create, {'organization' => { :name => org_name }})['id'] else # Our organization exists, so let's grab it org_id = org_list.key(org_name) puts \"Organization \\\"#{org_name}\\\" exists\" end Get list of organization's life cycle environments envs = call_api(:lifecycle_environments, :index, {'organization_id' => org_id}) env_list = id_name_map(envs['results']) prior_env_id = env_list.key(\"Library\") Exit the script if at least one life cycle environment already exists environments.each do |e| if env_list.has_value?(e) puts \"ERROR: One of the Environments is not unique to organization\" exit end end # Create life cycle environments environments.each do |environment| puts \"Creating environment: \\t#{environment}\" prior_env_id = call_api(:lifecycle_environments, :create, {\"name\" => environment, \"organization_id\" => org_id, \"prior_id\" => prior_env_id })['id'] end", "#!/usr/bin/python import json import sys try: import requests except ImportError: print \"Please install the python-requests module.\" sys.exit(-1) 
URL to your Satellite 6 server URL = \"https://satellite.example.com\" URL for the API to your deployed Satellite 6 server SAT_API = \"%s/katello/api/v2/\" % URL Katello-specific API KATELLO_API = \"%s/katello/api/\" % URL POST_HEADERS = {'content-type': 'application/json'} Default credentials to login to Satellite 6 USERNAME = \"admin\" PASSWORD = \"changeme\" Ignore SSL for now SSL_VERIFY = False Name of the organization to be either created or used ORG_NAME = \"MyOrg\" Name for life cycle environments to be either created or used ENVIRONMENTS = [\"Development\", \"Testing\", \"Production\"] def get_json(location): \"\"\" Performs a GET using the passed URL location \"\"\" r = requests.get(location, auth=(USERNAME, PASSWORD), verify=SSL_VERIFY) return r.json() def post_json(location, json_data): \"\"\" Performs a POST and passes the data to the URL location \"\"\" result = requests.post( location, data=json_data, auth=(USERNAME, PASSWORD), verify=SSL_VERIFY, headers=POST_HEADERS) return result.json() def main(): \"\"\" Main routine that creates or re-uses an organization and life cycle environments. If life cycle environments already exist, exit out. \"\"\" # Check if our organization already exists org = get_json(SAT_API + \"organizations/\" + ORG_NAME) # If our organization is not found, create it if org.get('error', None): org_id = post_json( SAT_API + \"organizations/\", json.dumps({\"name\": ORG_NAME}))[\"id\"] print \"Creating organization: \\t\" + ORG_NAME else: # Our organization exists, so let's grab it org_id = org['id'] print \"Organization '%s' exists.\" % ORG_NAME # Now, let's fetch all available life cycle environments for this org envs = get_json( SAT_API + \"organizations/\" + str(org_id) + \"/environments/\") # ... and add them to a dictionary, with respective 'Prior' environment prior_env_id = 0 env_list = {} for env in envs['results']: env_list[env['id']] = env['name'] prior_env_id = env['id'] if env['name'] == \"Library\" else prior_env_id # Exit the script if at least one life cycle environment already exists if all(environment in env_list.values() for environment in ENVIRONMENTS): print \"ERROR: One of the Environments is not unique to organization\" sys.exit(-1) # Create life cycle environments for environment in ENVIRONMENTS: new_env_id = post_json( SAT_API + \"organizations/\" + str(org_id) + \"/environments/\", json.dumps( { \"name\": environment, \"organization_id\": org_id, \"prior\": prior_env_id} ))[\"id\"] print \"Creating environment: \\t\" + environment prior_env_id = new_env_id if __name__ == \"__main__\": main()", "#!/usr/bin/python import json import sys try: import requests except ImportError: print \"Please install the python-requests module.\" sys.exit(-1) SAT_API = 'https://satellite.example.com/api/v2/' USERNAME = \"admin\" PASSWORD = \"password\" SSL_VERIFY = False # Ignore SSL for now def get_json(url): # Performs a GET using the passed URL location r = requests.get(url, auth=(USERNAME, PASSWORD), verify=SSL_VERIFY) return r.json() def get_results(url): jsn = get_json(url) if jsn.get('error'): print \"Error: \" + jsn['error']['message'] else: if jsn.get('results'): return jsn['results'] elif 'results' not in jsn: return jsn else: print \"No results found\" return None def display_all_results(url): results = get_results(url) if results: print json.dumps(results, indent=4, sort_keys=True) def display_info_for_hosts(url): hosts = get_results(url) if hosts: for host in hosts: print \"ID: %-10d Name: %-30s IP: %-20s OS: %-30s\" % (host['id'], 
host['name'], host['ip'], host['operatingsystem_name']) def main(): host = 'satellite.example.com' print \"Displaying all info for host %s ...\" % host display_all_results(SAT_API + 'hosts/' + host) print \"Displaying all facts for host %s ...\" % host display_all_results(SAT_API + 'hosts/%s/facts' % host) host_pattern = 'example' print \"Displaying basic info for hosts matching pattern '%s'...\" % host_pattern display_info_for_hosts(SAT_API + 'hosts?search=' + host_pattern) environment = 'production' print \"Displaying basic info for hosts in environment %s...\" % environment display_info_for_hosts(SAT_API + 'hosts?search=environment=' + environment) model = 'RHEV Hypervisor' print \"Displaying basic info for hosts with model name %s...\" % model display_info_for_hosts(SAT_API + 'hosts?search=model=\"' + model + '\"') if __name__ == \"__main__\": main()", "#!/usr/bin/env python3 import json import sys try: import requests except ImportError: print(\"Please install the python-requests module.\") sys.exit(-1) SAT = \"satellite.example.com\" URL for the API to your deployed Satellite 6 server SAT_API = f\"https://{SAT}/api/\" KATELLO_API = f\"https://{SAT}/katello/api/v2/\" POST_HEADERS = {'content-type': 'application/json'} Default credentials to login to Satellite 6 USERNAME = \"admin\" PASSWORD = \"password\" Ignore SSL for now SSL_VERIFY = False #SSL_VERIFY = \"./path/to/CA-certificate.crt\" # Put the path to your CA certificate here to allow SSL_VERIFY def get_json(url): # Performs a GET using the passed URL location r = requests.get(url, auth=(USERNAME, PASSWORD), verify=SSL_VERIFY) return r.json() def get_results(url): jsn = get_json(url) if jsn.get('error'): print(\"Error: \" + jsn['error']['message']) else: if jsn.get('results'): return jsn['results'] elif 'results' not in jsn: return jsn else: print(\"No results found\") return None def display_all_results(url): results = get_results(url) if results: print(json.dumps(results, indent=4, sort_keys=True)) def display_info_for_hosts(url): hosts = get_results(url) if hosts: print(f\"{'ID':10}{'Name':40}{'IP':30}{'Operating System':30}\") for host in hosts: print(f\"{str(host['id']):10}{host['name']:40}{str(host['ip']):30}{str(host['operatingsystem_name']):30}\") def display_info_for_subs(url): subs = get_results(url) if subs: print(f\"{'ID':10}{'Name':90}{'Start Date':30}\") for sub in subs: print(f\"{str(sub['id']):10}{sub['name']:90}{str(sub['start_date']):30}\") def main(): host = SAT print(f\"Displaying all info for host {host} ...\") display_all_results(SAT_API + 'hosts/' + host) print(f\"Displaying all facts for host {host} ...\") display_all_results(SAT_API + f'hosts/{host}/facts') host_pattern = 'example' print(f\"Displaying basic info for hosts matching pattern '{host_pattern}'...\") display_info_for_hosts(SAT_API + 'hosts?per_page=1&search=name~' + host_pattern) print(f\"Displaying basic info for subscriptions\") display_info_for_subs(KATELLO_API + 'subscriptions') environment = 'production' print(f\"Displaying basic info for hosts in environment {environment}...\") display_info_for_hosts(SAT_API + 'hosts?search=environment=' + environment) if __name__ == \"__main__\": main()" ]
https://docs.redhat.com/en/documentation/red_hat_satellite/6.15/html/api_guide/chap-red_hat_satellite-api_guide-api_requests_in_different_languages
9.6. Starting and Stopping NFS
9.6. Starting and Stopping NFS To run an NFS server, the rpcbind [3] service must be running. To verify that rpcbind is active, use the following command: If the rpcbind service is running, then the nfs service can be started. To start an NFS server, use the following command: nfslock must also be started for both the NFS client and server to function properly. To start NFS locking, use the following command: If NFS is set to start at boot, ensure that nfslock also starts by running chkconfig --list nfslock . If nfslock is not set to on , you must run service nfslock start manually each time the computer starts. To set nfslock to start automatically on boot, use chkconfig nfslock on . nfslock is only needed for NFSv2 and NFSv3. To stop the server, use: The restart option is a shorthand way of stopping and then starting NFS. This is the most efficient way to make configuration changes take effect after editing the configuration file for NFS. To restart the server, type: The condrestart ( conditional restart ) option only starts nfs if it is currently running. This option is useful for scripts, because it does not start the daemon if it is not running. To conditionally restart the server, type: To reload the NFS server configuration file without restarting the service, type:
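As a brief sketch of how these commands fit together on a RHEL 6 host (default runlevels are assumed; adjust the chkconfig settings to your environment), the following sequence starts the required services and makes them persist across reboots:

# verify that rpcbind is running before starting NFS
service rpcbind status

# start the NFS server and the locking service
service nfs start
service nfslock start

# make both services start automatically at boot
chkconfig nfs on
chkconfig nfslock on

# confirm the boot-time settings
chkconfig --list nfs
chkconfig --list nfslock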
[ "service rpcbind status", "service nfs start", "service nfslock start", "service nfs stop", "service nfs restart", "service nfs condrestart", "service nfs reload" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/storage_administration_guide/s1-nfs-start
8.2.4. Installing Packages
8.2.4. Installing Packages Yum allows you to install a single package, multiple packages, or a package group of your choice. Installing Individual Packages To install a single package and all of its non-installed dependencies, enter a command in the following form: yum install package_name You can also install multiple packages simultaneously by appending their names as arguments: yum install package_name package_name If you are installing packages on a multilib system, such as an AMD64 or Intel 64 machine, you can specify the architecture of the package (as long as it is available in an enabled repository) by appending .arch to the package name. For example, to install the sqlite package for i686 , type: You can use glob expressions to quickly install multiple similarly-named packages: In addition to package names and glob expressions, you can also provide file names to yum install . If you know the name of the binary you want to install, but not its package name, you can give yum install the path name: yum then searches through its package lists, finds the package which provides /usr/sbin/named , if any, and prompts you to confirm whether you want to install it. Note If you know you want to install the package that contains the named binary, but you do not know in which bin or sbin directory the file is installed, use the yum provides command with a glob expression: yum provides "*/file_name" is a common and useful trick to find the package(s) that contain file_name . Installing a Package Group A package group is similar to a package: it is not useful by itself, but installing one pulls in a group of dependent packages that serve a common purpose. A package group has a name and a groupid . The yum grouplist -v command lists the names of all package groups and, next to each of them, their groupid in parentheses. The groupid is always the term in the last pair of parentheses, such as kde-desktop in the following example: You can install a package group by passing its full group name (without the groupid part) to groupinstall : yum groupinstall group_name You can also install by groupid: yum groupinstall groupid You can even pass the groupid (or quoted name) to the install command if you prepend it with an @ -symbol (which tells yum that you want to perform a groupinstall ): yum install @ group For example, the following are alternative but equivalent ways of installing the KDE Desktop group:
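To illustrate how these forms can be combined, the following hypothetical transaction installs a regular package, an architecture-specific multilib package, and a package group in a single step (the package and group names here are placeholders, not requirements):

# install a regular package, a 32-bit multilib package, and a package group together
~]# yum install httpd sqlite.i686 @kde-desktop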
[ "~]# yum install sqlite.i686", "~]# yum install perl-Crypt-\\*", "~]# yum install /usr/sbin/named", "~]# yum provides \"*bin/named\" Loaded plugins: product-id, refresh-packagekit, subscription-manager Updating Red Hat repositories. INFO:rhsm-app.repolib:repos updated: 0 32:bind-9.7.0-4.P1.el6.x86_64 : The Berkeley Internet Name Domain (BIND) : DNS (Domain Name System) server Repo : rhel Matched from: Filename : /usr/sbin/named", "~]# yum -v grouplist kde\\* Loading \"product-id\" plugin Loading \"refresh-packagekit\" plugin Loading \"subscription-manager\" plugin Updating Red Hat repositories. INFO:rhsm-app.repolib:repos updated: 0 Config time: 0.123 Yum Version: 3.2.29 Setting up Group Process Looking for repo options for [rhel] rpmdb time: 0.001 group time: 1.291 Available Groups: KDE Desktop (kde-desktop) Done", "~]# yum groupinstall \"KDE Desktop\" ~]# yum groupinstall kde-desktop ~]# yum install @kde-desktop" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/deployment_guide/sec-Installing
Chapter 4. Supported Configurations
Chapter 4. Supported Configurations 4.1. Supported configurations For supported hardware and software configurations, see the Red Hat JBoss Data Grid Supported Configurations reference on the Customer Portal at https://access.redhat.com/site/articles/115883 .
null
https://docs.redhat.com/en/documentation/red_hat_data_grid/6.6/html/6.6.0_release_notes/chap-supported_configurations
Chapter 4. Configuring Compute service storage
Chapter 4. Configuring Compute service storage You create an instance from a base image, which the Compute service copies from the Image (glance) service, and caches locally on the Compute nodes. The instance disk, which is the back end for the instance, is also based on the base image. You can configure the Compute service to store ephemeral instance disk data locally on the host Compute node or remotely on either an NFS share or Ceph cluster. Alternatively, you can also configure the Compute service to store instance disk data in persistent storage provided by the Block Storage (Cinder) service. You can configure image caching for your environment, and configure the performance and security of the instance disks. You can also configure the Compute service to download images directly from the RBD image repository without using the Image service API, when the Image service (glance) uses Red Hat Ceph RADOS Block Device (RBD) as the back end. 4.1. Configuration options for image caching Use the parameters detailed in the following table to configure how the Compute service implements and manages an image cache on Compute nodes. Table 4.1. Compute (nova) service image cache parameters Configuration method Parameter Description Puppet nova::compute::image_cache::manager_interval Specifies the number of seconds to wait between runs of the image cache manager, which manages base image caching on Compute nodes. The Compute service uses this period to perform automatic removal of unused cached images when nova::compute::image_cache::remove_unused_base_images is set to True . Set to 0 to run at the default metrics interval of 60 seconds (not recommended). Set to -1 to disable the image cache manager. Default: 2400 Puppet nova::compute::image_cache::precache_concurrency Specifies the maximum number of Compute nodes that can pre-cache images in parallel. Note Setting this parameter to a high number can cause slower pre-cache performance and might result in a DDoS on the Image service. Setting this parameter to a low number reduces the load on the Image service, but can cause longer runtime to completion as the pre-cache is performed as a more sequential operation. Default: 1 Puppet nova::compute::image_cache::remove_unused_base_images Set to True to automatically remove unused base images from the cache at intervals configured by using manager_interval . Images are defined as unused if they have not been accessed during the time specified by using NovaImageCacheTTL . Default: True Puppet nova::compute::image_cache::remove_unused_resized_minimum_age_seconds Specifies the minimum age that an unused resized base image must be to be removed from the cache, in seconds. Unused resized base images younger than this will not be removed. Set to undef to disable. Default: 3600 Puppet nova::compute::image_cache::subdirectory_name Specifies the name of the folder where cached images are stored, relative to USDinstances_path . Default: _base Heat NovaImageCacheTTL Specifies the length of time in seconds that the Compute service should continue caching an image when it is no longer used by any instances on the Compute node. The Compute service deletes images cached on the Compute node that are older than this configured lifetime from the cache directory until they are needed again. Default: 86400 (24 hours) 4.2. Configuration options for instance ephemeral storage properties Use the parameters detailed in the following table to configure the performance and security of ephemeral storage used by instances. 
Note Red Hat OpenStack Platform (RHOSP) does not support the LVM image type for instance disks. Therefore, the [libvirt]/volume_clear configuration option, which wipes ephemeral disks when instances are deleted, is not supported because it only applies when the instance disk image type is LVM. Table 4.2. Compute (nova) service instance ephemeral storage parameters Configuration method Parameter Description Puppet nova::compute::default_ephemeral_format Specifies the default format that is used for a new ephemeral volume. Set to one of the following valid values: ext2 ext3 ext4 The ext4 format provides much faster initialization times than ext3 for new, large disks. Default: ext4 Puppet nova::compute::force_raw_images Set to True to convert non-raw cached base images to raw format. The raw image format uses more space than other image formats, such as qcow2. Non-raw image formats use more CPU for compression. When set to False , the Compute service removes any compression from the base image during compression to avoid CPU bottlenecks. Set to False if you have a system with slow I/O or low available space to reduce input bandwidth. Default: True Puppet nova::compute::use_cow_images Set to True to use CoW (Copy on Write) images in qcow2 format for instance disks. With CoW, depending on the backing store and host caching, there might be better concurrency achieved by having each instance operate on its own copy. Set to False to use the raw format. Raw format uses more space for common parts of the disk image. Default: True Puppet nova::compute::libvirt::preallocate_images Specifies the preallocation mode for instance disks. Set to one of the following valid values: none - No storage is provisioned at instance start. space - The Compute service fully allocates storage at instance start by running fallocate(1) on the instance disk images. This reduces CPU overhead and file fragmentation, improves I/O performance, and helps guarantee the required disk space. Default: none Hieradata override DEFAULT/resize_fs_using_block_device Set to True to enable direct resizing of the base image by accessing the image over a block device. This is only necessary for images with older versions of cloud-init that cannot resize themselves. This parameter is not enabled by default because it enables the direct mounting of images which might otherwise be disabled for security reasons. Default: False Hieradata override [libvirt]/images_type Specifies the image type to use for instance disks. Set to one of the following valid values: raw qcow2 flat rbd default Note RHOSP does not support the LVM image type for instance disks. When set to a valid value other than default the image type supersedes the configuration of use_cow_images . If default is specified, the configuration of use_cow_images determines the image type: If use_cow_images is set to True (default) then the image type is qcow2 . If use_cow_images is set to False then the image type is Flat . The default value is determined by the configuration of NovaEnableRbdBackend : NovaEnableRbdBackend: False Default: default NovaEnableRbdBackend: True Default: rbd 4.3. Configuring shared instance storage By default, when you launch an instance, the instance disk is stored as a file in the instance directory, /var/lib/nova/instances . You can configure an NFS storage backend for the Compute service to store these instance files on shared NFS storage. Prerequisites You must be using NFSv4 or later. 
Red Hat OpenStack Platform (RHOSP) does not support earlier versions of NFS. For more information, see the Red Hat Knowledgebase solution RHOS NFSv4-Only Support Notes . Procedure Log in to the undercloud as the stack user. Source the stackrc file: Create an environment file to configure shared instance storage, for example, nfs_instance_disk_backend.yaml . To configure an NFS backend for instance files, add the following configuration to nfs_instance_disk_backend.yaml : Replace <nfs_share> with the NFS share directory to mount for instance file storage, for example, '192.168.122.1:/export/nova' or '192.168.24.1:/var/nfs' . If using IPv6, use both double and single-quotes, e.g. "'[fdd0::1]:/export/nova'" . Optional: The default mount SELinux context for NFS storage when NFS backend storage is enabled is 'context=system_u:object_r:nfs_t:s0' . Add the following parameter to amend the mount options for the NFS instance file storage mount point: parameter_defaults: ... NovaNfsOptions: 'context=system_u:object_r:nfs_t:s0,<additional_nfs_mount_options>' Replace <additional_nfs_mount_options> with a comma-separated list of the mount options you want to use for NFS instance file storage. For more information on the available mount options, see the mount man page: Save the updates to your environment file. Add your new environment file to the stack with your other environment files and deploy the overcloud: 4.4. Configuring image downloads directly from Red Hat Ceph RADOS Block Device (RBD) When the Image service (glance) uses Red Hat Ceph RADOS Block Device (RBD) as the back end, and the Compute service uses local file-based ephemeral storage, you can configure the Compute service to download images directly from the RBD image repository without using the Image service API. This reduces the time it takes to download an image to the Compute node image cache at instance boot time, which improves instance launch time. Prerequisites The Image service back end is a Red Hat Ceph RADOS Block Device (RBD). The Compute service is using a local file-based ephemeral store for the image cache and instance disks. Procedure Log in to the undercloud as the stack user. Open your Compute environment file. To download images directly from the RBD back end, add the following configuration to your Compute environment file: Optional: If the Image service is configured to use multiple Red Hat Ceph Storage back ends, add the following configuration to your Compute environment file to identify the RBD back end to download images from: Replace <rbd_backend_id> with the ID used to specify the back end in the GlanceMultistoreConfig configuration, for example rbd2_store . Add the following configuration to your Compute environment file to specify the Image service RBD back end, and the maximum length of time that the Compute service waits to connect to the Image service RBD back end, in seconds: Add your Compute environment file to the stack with your other environment files and deploy the overcloud: To verify that the Compute service downloads images directly from RBD, create an instance then check the instance debug log for the entry "Attempting to export RBD image:". 4.5. Additional resources Configuring the Compute service (nova)
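As a minimal sketch of how these settings come together, an environment file such as the following enables NFS-backed instance storage and shortens the image cache lifetime. The file name, NFS share address, SELinux context, and cache lifetime below are illustrative assumptions, not required values:

# nova_storage_tuning.yaml -- illustrative values only
parameter_defaults:
  # store instance files on shared NFS storage (NFSv4 or later)
  NovaNfsEnabled: True
  NovaNfsShare: '192.168.122.1:/export/nova'
  # optional: amend the mount options for the NFS mount point
  NovaNfsOptions: 'context=system_u:object_r:nfs_t:s0'
  # remove cached images that have been unused for 12 hours instead of the 24-hour default
  NovaImageCacheTTL: 43200

You would then pass this file to openstack overcloud deploy with -e, alongside your other environment files, as shown in the procedure above.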
[ "[stack@director ~]USD source ~/stackrc", "parameter_defaults: NovaNfsEnabled: True NovaNfsShare: <nfs_share>", "parameter_defaults: NovaNfsOptions: 'context=system_u:object_r:nfs_t:s0,<additional_nfs_mount_options>'", "man 8 mount.", "(undercloud)USD openstack overcloud deploy --templates -e [your environment files] -e /home/stack/templates/nfs_instance_disk_backend.yaml", "parameter_defaults: ComputeParameters: NovaGlanceEnableRbdDownload: True NovaEnableRbdBackend: False", "parameter_defaults: ComputeParameters: NovaGlanceEnableRbdDownload: True NovaEnableRbdBackend: False NovaGlanceRbdDownloadMultistoreID: <rbd_backend_id>", "parameter_defaults: ComputeExtraConfig: nova::config::nova_config: glance/rbd_user: value: 'glance' glance/rbd_pool: value: 'images' glance/rbd_ceph_conf: value: '/etc/ceph/ceph.conf' glance/rbd_connect_timeout: value: '5'", "(undercloud)USD openstack overcloud deploy --templates -e [your environment files] -e /home/stack/templates/<compute_environment_file>.yaml" ]
https://docs.redhat.com/en/documentation/red_hat_openstack_platform/16.2/html/configuring_the_compute_service_for_instance_creation/assembly_configuring-compute-service-storage_compute-performance
Chapter 12. Network Observability CLI
Chapter 12. Network Observability CLI 12.1. Installing the Network Observability CLI The Network Observability CLI ( oc netobserv ) is deployed separately from the Network Observability Operator. The CLI is available as an OpenShift CLI ( oc ) plugin. It provides a lightweight way to quickly debug and troubleshoot with network observability. 12.1.1. About the Network Observability CLI You can quickly debug and troubleshoot networking issues by using the Network Observability CLI ( oc netobserv ). The Network Observability CLI is a flow and packet visualization tool that relies on eBPF agents to stream collected data to an ephemeral collector pod. It requires no persistent storage during the capture. After the run, the output is transferred to your local machine. This enables quick, live insight into packets and flow data without installing the Network Observability Operator. Important CLI capture is meant to run only for short durations, such as 8-10 minutes. If it runs for too long, it can be difficult to delete the running process. 12.1.2. Installing the Network Observability CLI Installing the Network Observability CLI ( oc netobserv ) is a separate procedure from the Network Observability Operator installation. This means that, even if you have the Operator installed from OperatorHub, you need to install the CLI separately. Note You can optionally use Krew to install the netobserv CLI plugin. For more information, see "Installing a CLI plugin with Krew". Prerequisites You must install the OpenShift CLI ( oc ). You must have a macOS or Linux operating system. Procedure Download the oc netobserv file that corresponds with your architecture. For example, for the amd64 archive: USD curl -LO https://mirror.openshift.com/pub/cgw/netobserv/latest/oc-netobserv-amd64 Make the file executable: USD chmod +x ./oc-netobserv-amd64 Move the extracted netobserv-cli binary to a directory that is on your PATH , such as /usr/local/bin/ : USD sudo mv ./oc-netobserv-amd64 /usr/local/bin/oc-netobserv Verification Verify that oc netobserv is available: USD oc netobserv version Example output Netobserv CLI version <version> Additional resources Installing and using CLI plugins Installing a CLI plugin with Krew 12.2. Using the Network Observability CLI You can visualize and filter the flows and packets data directly in the terminal to see specific usage, such as identifying who is using a specific port. The Network Observability CLI collects flows as JSON and database files or packets as a PCAP file, which you can use with third-party tools. 12.2.1. Capturing flows You can capture flows and filter on any resource or zone in the data to solve use cases, such as displaying Round-Trip Time (RTT) between two zones. Table visualization in the CLI provides viewing and flow search capabilities. Prerequisites Install the OpenShift CLI ( oc ). Install the Network Observability CLI ( oc netobserv ) plugin. Procedure Capture flows with filters enabled by running the following command: USD oc netobserv flows --enable_filter=true --action=Accept --cidr=0.0.0.0/0 --protocol=TCP --port=49051 Add filters to the live table filter prompt in the terminal to further refine the incoming flows. For example: live table filter: [SrcK8S_Zone:us-west-1b] press enter to match multiple regular expressions at once Use the PageUp and PageDown keys to toggle between None , Resource , Zone , Host , Owner and all of the above . To stop capturing, press Ctrl + C . 
The data that was captured is written to two separate files in an ./output directory located in the same path used to install the CLI. View the captured data in the ./output/flow/<capture_date_time>.json JSON file, which contains JSON arrays of the captured data. Example JSON file { "AgentIP": "10.0.1.76", "Bytes": 561, "DnsErrno": 0, "Dscp": 20, "DstAddr": "f904:ece9:ba63:6ac7:8018:1e5:7130:0", "DstMac": "0A:58:0A:80:00:37", "DstPort": 9999, "Duplicate": false, "Etype": 2048, "Flags": 16, "FlowDirection": 0, "IfDirection": 0, "Interface": "ens5", "K8S_FlowLayer": "infra", "Packets": 1, "Proto": 6, "SrcAddr": "3e06:6c10:6440:2:a80:37:b756:270f", "SrcMac": "0A:58:0A:80:00:01", "SrcPort": 46934, "TimeFlowEndMs": 1709741962111, "TimeFlowRttNs": 121000, "TimeFlowStartMs": 1709741962111, "TimeReceived": 1709741964 } You can use SQLite to inspect the ./output/flow/<capture_date_time>.db database file. For example: Open the file by running the following command: USD sqlite3 ./output/flow/<capture_date_time>.db Query the data by running a SQLite SELECT statement, for example: sqlite> SELECT DnsLatencyMs, DnsFlagsResponseCode, DnsId, DstAddr, DstPort, Interface, Proto, SrcAddr, SrcPort, Bytes, Packets FROM flow WHERE DnsLatencyMs >10 LIMIT 10; Example output 12|NoError|58747|10.128.0.63|57856||17|172.30.0.10|53|284|1 11|NoError|20486|10.128.0.52|56575||17|169.254.169.254|53|225|1 11|NoError|59544|10.128.0.103|51089||17|172.30.0.10|53|307|1 13|NoError|32519|10.128.0.52|55241||17|169.254.169.254|53|254|1 12|NoError|32519|10.0.0.3|55241||17|169.254.169.254|53|254|1 15|NoError|57673|10.128.0.19|59051||17|172.30.0.10|53|313|1 13|NoError|35652|10.0.0.3|46532||17|169.254.169.254|53|183|1 32|NoError|37326|10.0.0.3|52718||17|169.254.169.254|53|169|1 14|NoError|14530|10.0.0.3|58203||17|169.254.169.254|53|246|1 15|NoError|40548|10.0.0.3|45933||17|169.254.169.254|53|174|1 12.2.2. Capturing packets You can capture packets using the Network Observability CLI. Prerequisites Install the OpenShift CLI ( oc ). Install the Network Observability CLI ( oc netobserv ) plugin. Procedure Run the packet capture with filters enabled: USD oc netobserv packets --action=Accept --cidr=0.0.0.0/0 --protocol=TCP --port=49051 Add filters to the live table filter prompt in the terminal to refine the incoming packets. An example filter is as follows: live table filter: [SrcK8S_Zone:us-west-1b] press enter to match multiple regular expressions at once Use the PageUp and PageDown keys to toggle between None , Resource , Zone , Host , Owner and all of the above . To stop capturing, press Ctrl + C . View the captured data, which is written to a single file in an ./output/pcap directory located in the same path that was used to install the CLI: The ./output/pcap/<capture_date_time>.pcap file can be opened with Wireshark. 12.2.3. Capturing metrics You can generate on-demand dashboards in Prometheus by using a service monitor for Network Observability. Prerequisites Install the OpenShift CLI ( oc ). Install the Network Observability CLI ( oc netobserv ) plugin. Procedure Capture metrics with filters enabled by running the following command: Example output USD oc netobserv metrics --enable_filter=true --cidr=0.0.0.0/0 --protocol=TCP --port=49051 Open the link provided in the terminal to view the NetObserv / On-Demand dashboard: Example URL https://console-openshift-console.apps.rosa...openshiftapps.com/monitoring/dashboards/netobserv-cli Note Features that are not enabled present as empty graphs. 12.2.4. 
Cleaning the Network Observability CLI You can manually clean the CLI workload by running oc netobserv cleanup . This command removes all the CLI components from your cluster. When you end a capture, this command is run automatically by the client. You might be required to manually run it if you experience connectivity issues. Procedure Run the following command: USD oc netobserv cleanup Additional resources Network Observability CLI reference 12.3. Network Observability CLI (oc netobserv) reference The Network Observability CLI ( oc netobserv ) has most features and filtering options that are available for the Network Observability Operator. You can pass command line arguments to enable features or filtering options. 12.3.1. Network Observability CLI usage You can use the Network Observability CLI ( oc netobserv ) to pass command line arguments to capture flows data, packets data, and metrics for further analysis and enable features supported by the Network Observability Operator. 12.3.1.1. Syntax The basic syntax for oc netobserv commands: oc netobserv syntax USD oc netobserv [<command>] [<feature_option>] [<command_options>] 1 1 1 Feature options can only be used with the oc netobserv flows command. They cannot be used with the oc netobserv packets command. 12.3.1.2. Basic commands Table 12.1. Basic commands Command Description flows Capture flows information. For subcommands, see the "Flows capture options" table. packets Capture packets data. For subcommands, see the "Packets capture options" table. metrics Capture metrics data. For subcommands, see the "Metrics capture options" table. follow Follow collector logs when running in background. stop Stop collection by removing agent daemonset. copy Copy collector generated files locally. cleanup Remove the Network Observability CLI components. version Print the software version. help Show help. 12.3.1.3. Flows capture options Flows capture has mandatory commands as well as additional options, such as enabling extra features about packet drops, DNS latencies, Round-trip time, and filtering. 
oc netobserv flows syntax USD oc netobserv flows [<feature_option>] [<command_options>] Option Description Default --enable_all enable all eBPF features false --enable_dns enable DNS tracking false --enable_network_events enable network events monitoring false --enable_pkt_translation enable packet translation false --enable_pkt_drop enable packet drop false --enable_rtt enable RTT tracking false --enable_udn_mapping enable User Defined Network mapping false --get-subnets get subnets information false --background run in background false --copy copy the output files locally prompt --log-level components logs info --max-time maximum capture time 5m --max-bytes maximum capture bytes 50000000 = 50MB --action filter action Accept --cidr filter CIDR 0.0.0.0/0 --direction filter direction - --dport filter destination port - --dport_range filter destination port range - --dports filter on either of two destination ports - --drops filter flows with only dropped packets false --icmp_code filter ICMP code - --icmp_type filter ICMP type - --node-selector capture on specific nodes - --peer_ip filter peer IP - --peer_cidr filter peer CIDR - --port_range filter port range - --port filter port - --ports filter on either of two ports - --protocol filter protocol - --regexes filter flows using regular expression - --sport_range filter source port range - --sport filter source port - --sports filter on either of two source ports - --tcp_flags filter TCP flags - --interfaces interfaces to monitor - Example running flows capture on TCP protocol and port 49051 with PacketDrop and RTT features enabled: USD oc netobserv flows --enable_pkt_drop --enable_rtt --action=Accept --cidr=0.0.0.0/0 --protocol=TCP --port=49051 12.3.1.4. Packets capture options You can filter packets capture data in the same way as flows capture by using the filters. Certain features, such as packet drops, DNS, RTT, and network events, are only available for flows and metrics capture. oc netobserv packets syntax USD oc netobserv packets [<option>] Option Description Default --background run in background false --copy copy the output files locally prompt --log-level components logs info --max-time maximum capture time 5m --max-bytes maximum capture bytes 50000000 = 50MB --action filter action Accept --cidr filter CIDR 0.0.0.0/0 --direction filter direction - --dport filter destination port - --dport_range filter destination port range - --dports filter on either of two destination ports - --drops filter flows with only dropped packets false --icmp_code filter ICMP code - --icmp_type filter ICMP type - --node-selector capture on specific nodes - --peer_ip filter peer IP - --peer_cidr filter peer CIDR - --port_range filter port range - --port filter port - --ports filter on either of two ports - --protocol filter protocol - --regexes filter flows using regular expression - --sport_range filter source port range - --sport filter source port - --sports filter on either of two source ports - --tcp_flags filter TCP flags - Example running packets capture on TCP protocol and port 49051: USD oc netobserv packets --action=Accept --cidr=0.0.0.0/0 --protocol=TCP --port=49051 12.3.1.5. Metrics capture options You can enable features and use filters on metrics capture, the same as flows capture. The generated graphs fill accordingly in the dashboard.
oc netobserv metrics syntax USD oc netobserv metrics [<option>] Option Description Default --enable_all enable all eBPF features false --enable_dns enable DNS tracking false --enable_network_events enable network events monitoring false --enable_pkt_translation enable packet translation false --enable_pkt_drop enable packet drop false --enable_rtt enable RTT tracking false --enable_udn_mapping enable User Defined Network mapping false --get-subnets get subnets information false --action filter action Accept --cidr filter CIDR 0.0.0.0/0 --direction filter direction - --dport filter destination port - --dport_range filter destination port range - --dports filter on either of two destination ports - --drops filter flows with only dropped packets false --icmp_code filter ICMP code - --icmp_type filter ICMP type - --node-selector capture on specific nodes - --peer_ip filter peer IP - --peer_cidr filter peer CIDR - --port_range filter port range - --port filter port - --ports filter on either of two ports - --protocol filter protocol - --regexes filter flows using regular expression - --sport_range filter source port range - --sport filter source port - --sports filter on either of two source ports - --tcp_flags filter TCP flags - --interfaces interfaces to monitor - Example running metrics capture for TCP drops USD oc netobserv metrics --enable_pkt_drop --protocol=TCP
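Putting the basic commands and the --background option together, a typical capture session might look like the following sketch; the filter values are only examples, and the exact sequence you need depends on what you are troubleshooting:

# start a flows capture in the background with DNS tracking enabled
$ oc netobserv flows --enable_dns --background --protocol=TCP --port=443

# watch the collector logs while the capture runs in the background
$ oc netobserv follow

# stop the collection by removing the agent daemonset, then retrieve the output files locally
$ oc netobserv stop
$ oc netobserv copy

# remove all remaining CLI components from the cluster
$ oc netobserv cleanup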
[ "curl -LO https://mirror.openshift.com/pub/cgw/netobserv/latest/oc-netobserv-amd64", "chmod +x ./oc-netobserv-amd64", "sudo mv ./oc-netobserv-amd64 /usr/local/bin/oc-netobserv", "oc netobserv version", "Netobserv CLI version <version>", "oc netobserv flows --enable_filter=true --action=Accept --cidr=0.0.0.0/0 --protocol=TCP --port=49051", "live table filter: [SrcK8S_Zone:us-west-1b] press enter to match multiple regular expressions at once", "{ \"AgentIP\": \"10.0.1.76\", \"Bytes\": 561, \"DnsErrno\": 0, \"Dscp\": 20, \"DstAddr\": \"f904:ece9:ba63:6ac7:8018:1e5:7130:0\", \"DstMac\": \"0A:58:0A:80:00:37\", \"DstPort\": 9999, \"Duplicate\": false, \"Etype\": 2048, \"Flags\": 16, \"FlowDirection\": 0, \"IfDirection\": 0, \"Interface\": \"ens5\", \"K8S_FlowLayer\": \"infra\", \"Packets\": 1, \"Proto\": 6, \"SrcAddr\": \"3e06:6c10:6440:2:a80:37:b756:270f\", \"SrcMac\": \"0A:58:0A:80:00:01\", \"SrcPort\": 46934, \"TimeFlowEndMs\": 1709741962111, \"TimeFlowRttNs\": 121000, \"TimeFlowStartMs\": 1709741962111, \"TimeReceived\": 1709741964 }", "sqlite3 ./output/flow/<capture_date_time>.db", "sqlite> SELECT DnsLatencyMs, DnsFlagsResponseCode, DnsId, DstAddr, DstPort, Interface, Proto, SrcAddr, SrcPort, Bytes, Packets FROM flow WHERE DnsLatencyMs >10 LIMIT 10;", "12|NoError|58747|10.128.0.63|57856||17|172.30.0.10|53|284|1 11|NoError|20486|10.128.0.52|56575||17|169.254.169.254|53|225|1 11|NoError|59544|10.128.0.103|51089||17|172.30.0.10|53|307|1 13|NoError|32519|10.128.0.52|55241||17|169.254.169.254|53|254|1 12|NoError|32519|10.0.0.3|55241||17|169.254.169.254|53|254|1 15|NoError|57673|10.128.0.19|59051||17|172.30.0.10|53|313|1 13|NoError|35652|10.0.0.3|46532||17|169.254.169.254|53|183|1 32|NoError|37326|10.0.0.3|52718||17|169.254.169.254|53|169|1 14|NoError|14530|10.0.0.3|58203||17|169.254.169.254|53|246|1 15|NoError|40548|10.0.0.3|45933||17|169.254.169.254|53|174|1", "oc netobserv packets --action=Accept --cidr=0.0.0.0/0 --protocol=TCP --port=49051", "live table filter: [SrcK8S_Zone:us-west-1b] press enter to match multiple regular expressions at once", "oc netobserv metrics --enable_filter=true --cidr=0.0.0.0/0 --protocol=TCP --port=49051", "https://console-openshift-console.apps.rosa...openshiftapps.com/monitoring/dashboards/netobserv-cli", "oc netobserv cleanup", "oc netobserv [<command>] [<feature_option>] [<command_options>] 1", "oc netobserv flows [<feature_option>] [<command_options>]", "oc netobserv flows --enable_pkt_drop --enable_rtt --action=Accept --cidr=0.0.0.0/0 --protocol=TCP --port=49051", "oc netobserv packets [<option>]", "oc netobserv packets --action=Accept --cidr=0.0.0.0/0 --protocol=TCP --port=49051", "oc netobserv metrics [<option>]", "oc netobserv metrics --enable_pkt_drop --protocol=TCP" ]
https://docs.redhat.com/en/documentation/openshift_container_platform/4.16/html/network_observability/network-observability-cli-1
Chapter 8. Deployment [apps/v1]
Chapter 8. Deployment [apps/v1] Description Deployment enables declarative updates for Pods and ReplicaSets. Type object 8.1. Specification Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ObjectMeta Standard object's metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata spec object DeploymentSpec is the specification of the desired behavior of the Deployment. status object DeploymentStatus is the most recently observed status of the Deployment. 8.1.1. .spec Description DeploymentSpec is the specification of the desired behavior of the Deployment. Type object Required selector template Property Type Description minReadySeconds integer Minimum number of seconds for which a newly created pod should be ready without any of its container crashing, for it to be considered available. Defaults to 0 (pod will be considered available as soon as it is ready) paused boolean Indicates that the deployment is paused. progressDeadlineSeconds integer The maximum time in seconds for a deployment to make progress before it is considered to be failed. The deployment controller will continue to process failed deployments and a condition with a ProgressDeadlineExceeded reason will be surfaced in the deployment status. Note that progress will not be estimated during the time a deployment is paused. Defaults to 600s. replicas integer Number of desired pods. This is a pointer to distinguish between explicit zero and not specified. Defaults to 1. revisionHistoryLimit integer The number of old ReplicaSets to retain to allow rollback. This is a pointer to distinguish between explicit zero and not specified. Defaults to 10. selector LabelSelector Label selector for pods. Existing ReplicaSets whose pods are selected by this will be the ones affected by this deployment. It must match the pod template's labels. strategy object DeploymentStrategy describes how to replace existing pods with new ones. template PodTemplateSpec Template describes the pods that will be created. The only allowed template.spec.restartPolicy value is "Always". 8.1.2. .spec.strategy Description DeploymentStrategy describes how to replace existing pods with new ones. Type object Property Type Description rollingUpdate object Spec to control the desired behavior of rolling update. type string Type of deployment. Can be "Recreate" or "RollingUpdate". Default is RollingUpdate. Possible enum values: - "Recreate" Kill all existing pods before creating new ones. - "RollingUpdate" Replace the old ReplicaSets by new one using rolling update i.e gradually scale down the old ReplicaSets and scale up the new one. 8.1.3. .spec.strategy.rollingUpdate Description Spec to control the desired behavior of rolling update. Type object Property Type Description maxSurge IntOrString The maximum number of pods that can be scheduled above the desired number of pods. 
Value can be an absolute number (ex: 5) or a percentage of desired pods (ex: 10%). This can not be 0 if MaxUnavailable is 0. Absolute number is calculated from percentage by rounding up. Defaults to 25%. Example: when this is set to 30%, the new ReplicaSet can be scaled up immediately when the rolling update starts, such that the total number of old and new pods do not exceed 130% of desired pods. Once old pods have been killed, new ReplicaSet can be scaled up further, ensuring that total number of pods running at any time during the update is at most 130% of desired pods. maxUnavailable IntOrString The maximum number of pods that can be unavailable during the update. Value can be an absolute number (ex: 5) or a percentage of desired pods (ex: 10%). Absolute number is calculated from percentage by rounding down. This can not be 0 if MaxSurge is 0. Defaults to 25%. Example: when this is set to 30%, the old ReplicaSet can be scaled down to 70% of desired pods immediately when the rolling update starts. Once new pods are ready, old ReplicaSet can be scaled down further, followed by scaling up the new ReplicaSet, ensuring that the total number of pods available at all times during the update is at least 70% of desired pods. 8.1.4. .status Description DeploymentStatus is the most recently observed status of the Deployment. Type object Property Type Description availableReplicas integer Total number of available pods (ready for at least minReadySeconds) targeted by this deployment. collisionCount integer Count of hash collisions for the Deployment. The Deployment controller uses this field as a collision avoidance mechanism when it needs to create the name for the newest ReplicaSet. conditions array Represents the latest available observations of a deployment's current state. conditions[] object DeploymentCondition describes the state of a deployment at a certain point. observedGeneration integer The generation observed by the deployment controller. readyReplicas integer readyReplicas is the number of pods targeted by this Deployment with a Ready Condition. replicas integer Total number of non-terminated pods targeted by this deployment (their labels match the selector). unavailableReplicas integer Total number of unavailable pods targeted by this deployment. This is the total number of pods that are still required for the deployment to have 100% available capacity. They may either be pods that are running but not yet available or pods that still have not been created. updatedReplicas integer Total number of non-terminated pods targeted by this deployment that have the desired template spec. 8.1.5. .status.conditions Description Represents the latest available observations of a deployment's current state. Type array 8.1.6. .status.conditions[] Description DeploymentCondition describes the state of a deployment at a certain point. Type object Required type status Property Type Description lastTransitionTime Time Last time the condition transitioned from one status to another. lastUpdateTime Time The last time this condition was updated. message string A human readable message indicating details about the transition. reason string The reason for the condition's last transition. status string Status of the condition, one of True, False, Unknown. type string Type of deployment condition. 8.2. 
API endpoints The following API endpoints are available: /apis/apps/v1/deployments GET : list or watch objects of kind Deployment /apis/apps/v1/watch/deployments GET : watch individual changes to a list of Deployment. deprecated: use the 'watch' parameter with a list operation instead. /apis/apps/v1/namespaces/{namespace}/deployments DELETE : delete collection of Deployment GET : list or watch objects of kind Deployment POST : create a Deployment /apis/apps/v1/watch/namespaces/{namespace}/deployments GET : watch individual changes to a list of Deployment. deprecated: use the 'watch' parameter with a list operation instead. /apis/apps/v1/namespaces/{namespace}/deployments/{name} DELETE : delete a Deployment GET : read the specified Deployment PATCH : partially update the specified Deployment PUT : replace the specified Deployment /apis/apps/v1/watch/namespaces/{namespace}/deployments/{name} GET : watch changes to an object of kind Deployment. deprecated: use the 'watch' parameter with a list operation instead, filtered to a single item with the 'fieldSelector' parameter. /apis/apps/v1/namespaces/{namespace}/deployments/{name}/status GET : read status of the specified Deployment PATCH : partially update status of the specified Deployment PUT : replace status of the specified Deployment 8.2.1. /apis/apps/v1/deployments HTTP method GET Description list or watch objects of kind Deployment Table 8.1. HTTP responses HTTP code Reponse body 200 - OK DeploymentList schema 401 - Unauthorized Empty 8.2.2. /apis/apps/v1/watch/deployments HTTP method GET Description watch individual changes to a list of Deployment. deprecated: use the 'watch' parameter with a list operation instead. Table 8.2. HTTP responses HTTP code Reponse body 200 - OK WatchEvent schema 401 - Unauthorized Empty 8.2.3. /apis/apps/v1/namespaces/{namespace}/deployments HTTP method DELETE Description delete collection of Deployment Table 8.3. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed Table 8.4. HTTP responses HTTP code Reponse body 200 - OK Status schema 401 - Unauthorized Empty HTTP method GET Description list or watch objects of kind Deployment Table 8.5. HTTP responses HTTP code Reponse body 200 - OK DeploymentList schema 401 - Unauthorized Empty HTTP method POST Description create a Deployment Table 8.6. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. 
The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 8.7. Body parameters Parameter Type Description body Deployment schema Table 8.8. HTTP responses HTTP code Reponse body 200 - OK Deployment schema 201 - Created Deployment schema 202 - Accepted Deployment schema 401 - Unauthorized Empty 8.2.4. /apis/apps/v1/watch/namespaces/{namespace}/deployments HTTP method GET Description watch individual changes to a list of Deployment. deprecated: use the 'watch' parameter with a list operation instead. Table 8.9. HTTP responses HTTP code Reponse body 200 - OK WatchEvent schema 401 - Unauthorized Empty 8.2.5. /apis/apps/v1/namespaces/{namespace}/deployments/{name} Table 8.10. Global path parameters Parameter Type Description name string name of the Deployment HTTP method DELETE Description delete a Deployment Table 8.11. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed Table 8.12. HTTP responses HTTP code Reponse body 200 - OK Status schema 202 - Accepted Status schema 401 - Unauthorized Empty HTTP method GET Description read the specified Deployment Table 8.13. HTTP responses HTTP code Reponse body 200 - OK Deployment schema 401 - Unauthorized Empty HTTP method PATCH Description partially update the specified Deployment Table 8.14. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 8.15. HTTP responses HTTP code Reponse body 200 - OK Deployment schema 201 - Created Deployment schema 401 - Unauthorized Empty HTTP method PUT Description replace the specified Deployment Table 8.16. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. 
An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 8.17. Body parameters Parameter Type Description body Deployment schema Table 8.18. HTTP responses HTTP code Reponse body 200 - OK Deployment schema 201 - Created Deployment schema 401 - Unauthorized Empty 8.2.6. /apis/apps/v1/watch/namespaces/{namespace}/deployments/{name} Table 8.19. Global path parameters Parameter Type Description name string name of the Deployment HTTP method GET Description watch changes to an object of kind Deployment. deprecated: use the 'watch' parameter with a list operation instead, filtered to a single item with the 'fieldSelector' parameter. Table 8.20. HTTP responses HTTP code Reponse body 200 - OK WatchEvent schema 401 - Unauthorized Empty 8.2.7. /apis/apps/v1/namespaces/{namespace}/deployments/{name}/status Table 8.21. Global path parameters Parameter Type Description name string name of the Deployment HTTP method GET Description read status of the specified Deployment Table 8.22. HTTP responses HTTP code Reponse body 200 - OK Deployment schema 401 - Unauthorized Empty HTTP method PATCH Description partially update status of the specified Deployment Table 8.23. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 8.24. 
HTTP responses HTTP code Reponse body 200 - OK Deployment schema 201 - Created Deployment schema 401 - Unauthorized Empty HTTP method PUT Description replace status of the specified Deployment Table 8.25. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 8.26. Body parameters Parameter Type Description body Deployment schema Table 8.27. HTTP responses HTTP code Reponse body 200 - OK Deployment schema 201 - Created Deployment schema 401 - Unauthorized Empty
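To tie the spec fields above together, a minimal Deployment manifest might look like the following sketch. The name, labels, and container image are placeholders, and the pod template uses standard PodTemplateSpec fields:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: example-deployment
spec:
  replicas: 3                      # desired number of pods (defaults to 1 if omitted)
  minReadySeconds: 10              # a pod must be ready this long before it counts as available
  revisionHistoryLimit: 10         # old ReplicaSets kept to allow rollback
  selector:
    matchLabels:
      app: example                 # must match the pod template labels
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxSurge: 25%                # extra pods allowed above the desired count during an update
      maxUnavailable: 25%          # pods that may be unavailable during an update
  template:
    metadata:
      labels:
        app: example
    spec:
      restartPolicy: Always        # the only allowed value for Deployments
      containers:
      - name: example
        image: registry.example.com/example:latest   # placeholder image

Such a manifest could be created with, for example, oc apply -f deployment.yaml, which corresponds to the POST endpoint listed above.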
null
https://docs.redhat.com/en/documentation/openshift_container_platform/4.15/html/workloads_apis/deployment-apps-v1
Chapter 4. Installation configuration parameters for IBM Power
Chapter 4. Installation configuration parameters for IBM Power Before you deploy an OpenShift Container Platform cluster, you provide a customized install-config.yaml installation configuration file that describes the details for your environment. 4.1. Available installation configuration parameters for IBM Power The following tables specify the required, optional, and IBM Power-specific installation configuration parameters that you can set as part of the installation process. Note After installation, you cannot modify these parameters in the install-config.yaml file. 4.1.1. Required configuration parameters Required installation configuration parameters are described in the following table: Table 4.1. Required parameters Parameter Description Values The API version for the install-config.yaml content. The current version is v1 . The installation program may also support older API versions. String The base domain of your cloud provider. The base domain is used to create routes to your OpenShift Container Platform cluster components. The full DNS name for your cluster is a combination of the baseDomain and metadata.name parameter values that uses the <metadata.name>.<baseDomain> format. A fully-qualified domain or subdomain name, such as example.com . Kubernetes resource ObjectMeta , from which only the name parameter is consumed. Object The name of the cluster. DNS records for the cluster are all subdomains of {{.metadata.name}}.{{.baseDomain}} . String of lowercase letters, hyphens ( - ), and periods ( . ), such as dev . The configuration for the specific platform upon which to perform the installation: alibabacloud , aws , baremetal , azure , gcp , ibmcloud , nutanix , openstack , powervs , vsphere , or {} . For additional information about platform.<platform> parameters, consult the table for your specific platform that follows. Object Get a pull secret from Red Hat OpenShift Cluster Manager to authenticate downloading container images for OpenShift Container Platform components from services such as Quay.io. { "auths":{ "cloud.openshift.com":{ "auth":"b3Blb=", "email":"[email protected]" }, "quay.io":{ "auth":"b3Blb=", "email":"[email protected]" } } } 4.1.2. Network configuration parameters You can customize your installation configuration based on the requirements of your existing network infrastructure. For example, you can expand the IP address block for the cluster network or provide different IP address blocks than the defaults. If you use the Red Hat OpenShift Networking OVN-Kubernetes network plugin, both IPv4 and IPv6 address families are supported. If you use the Red Hat OpenShift Networking OpenShift SDN network plugin, only the IPv4 address family is supported. If you configure your cluster to use both IP address families, review the following requirements: Both IP families must use the same network interface for the default gateway. Both IP families must have the default gateway. You must specify IPv4 and IPv6 addresses in the same order for all network configuration parameters. For example, in the following configuration IPv4 addresses are listed before IPv6 addresses. networking: clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23 - cidr: fd00:10:128::/56 hostPrefix: 64 serviceNetwork: - 172.30.0.0/16 - fd00:172:16::/112 Note Globalnet is not supported with Red Hat OpenShift Data Foundation disaster recovery solutions. 
For regional disaster recovery scenarios, ensure that you use a nonoverlapping range of private IP addresses for the cluster and service networks in each cluster. Table 4.2. Network parameters Parameter Description Values The configuration for the cluster network. Object Note You cannot modify parameters specified by the networking object after installation. The Red Hat OpenShift Networking network plugin to install. Either OpenShiftSDN or OVNKubernetes . OpenShiftSDN is a CNI plugin for all-Linux networks. OVNKubernetes is a CNI plugin for Linux networks and hybrid networks that contain both Linux and Windows servers. The default value is OVNKubernetes . The IP address blocks for pods. The default value is 10.128.0.0/14 with a host prefix of /23 . If you specify multiple IP address blocks, the blocks must not overlap. An array of objects. For example: networking: clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23 Required if you use networking.clusterNetwork . An IP address block. An IPv4 network. An IP address block in Classless Inter-Domain Routing (CIDR) notation. The prefix length for an IPv4 block is between 0 and 32 . The subnet prefix length to assign to each individual node. For example, if hostPrefix is set to 23 then each node is assigned a /23 subnet out of the given cidr . A hostPrefix value of 23 provides 510 (2^(32 - 23) - 2) pod IP addresses. A subnet prefix. The default value is 23 . The IP address block for services. The default value is 172.30.0.0/16 . The OpenShift SDN and OVN-Kubernetes network plugins support only a single IP address block for the service network. An array with an IP address block in CIDR format. For example: networking: serviceNetwork: - 172.30.0.0/16 The IP address blocks for machines. If you specify multiple IP address blocks, the blocks must not overlap. If you specify multiple IP kernel arguments, the machineNetwork.cidr value must be the CIDR of the primary network. An array of objects. For example: networking: machineNetwork: - cidr: 10.0.0.0/16 Required if you use networking.machineNetwork . An IP address block. The default value is 10.0.0.0/16 for all platforms other than libvirt and IBM Power(R) Virtual Server. For libvirt, the default value is 192.168.126.0/24 . For IBM Power(R) Virtual Server, the default value is 192.168.0.0/24 . An IP network block in CIDR notation. For example, 10.0.0.0/16 . Note Set the networking.machineNetwork to match the CIDR that the preferred NIC resides in. 4.1.3. Optional configuration parameters Optional installation configuration parameters are described in the following table: Table 4.3. Optional parameters Parameter Description Values A PEM-encoded X.509 certificate bundle that is added to the nodes' trusted certificate store. This trust bundle may also be used when a proxy has been configured. String Controls the installation of optional core cluster components. You can reduce the footprint of your OpenShift Container Platform cluster by disabling optional components. For more information, see the "Cluster capabilities" page in Installing . String array Selects an initial set of optional capabilities to enable. Valid values are None , v4.11 , v4.12 and vCurrent . The default value is vCurrent . String Extends the set of optional capabilities beyond what you specify in baselineCapabilitySet . You may specify multiple capabilities in this parameter. 
String array Enables workload partitioning, which isolates OpenShift Container Platform services, cluster management workloads, and infrastructure pods to run on a reserved set of CPUs. Workload partitioning can only be enabled during installation and cannot be disabled after installation. While this field enables workload partitioning, it does not configure workloads to use specific CPUs. For more information, see the Workload partitioning page in the Scalability and Performance section. None or AllNodes . None is the default value. The configuration for the machines that comprise the compute nodes. Array of MachinePool objects. Determines the instruction set architecture of the machines in the pool. Currently, heteregeneous clusters are not supported, so all pools must specify the same architecture. Valid values are ppc64le (the default). String Whether to enable or disable simultaneous multithreading, or hyperthreading , on compute machines. By default, simultaneous multithreading is enabled to increase the performance of your machines' cores. Important If you disable simultaneous multithreading, ensure that your capacity planning accounts for the dramatically decreased machine performance. Enabled or Disabled Required if you use compute . The name of the machine pool. worker Required if you use compute . Use this parameter to specify the cloud provider to host the worker machines. This parameter value must match the controlPlane.platform parameter value. alibabacloud , aws , azure , gcp , ibmcloud , nutanix , openstack , powervs , vsphere , or {} The number of compute machines, which are also known as worker machines, to provision. A positive integer greater than or equal to 2 . The default value is 3 . Enables the cluster for a feature set. A feature set is a collection of OpenShift Container Platform features that are not enabled by default. For more information about enabling a feature set during installation, see "Enabling features using feature gates". String. The name of the feature set to enable, such as TechPreviewNoUpgrade . The configuration for the machines that comprise the control plane. Array of MachinePool objects. Determines the instruction set architecture of the machines in the pool. Currently, heterogeneous clusters are not supported, so all pools must specify the same architecture. Valid values are ppc64le (the default). String Whether to enable or disable simultaneous multithreading, or hyperthreading , on control plane machines. By default, simultaneous multithreading is enabled to increase the performance of your machines' cores. Important If you disable simultaneous multithreading, ensure that your capacity planning accounts for the dramatically decreased machine performance. Enabled or Disabled Required if you use controlPlane . The name of the machine pool. master Required if you use controlPlane . Use this parameter to specify the cloud provider that hosts the control plane machines. This parameter value must match the compute.platform parameter value. alibabacloud , aws , azure , gcp , ibmcloud , nutanix , openstack , powervs , vsphere , or {} The number of control plane machines to provision. Supported values are 3 , or 1 when deploying single-node OpenShift. The Cloud Credential Operator (CCO) mode. If no mode is specified, the CCO dynamically tries to determine the capabilities of the provided credentials, with a preference for mint mode on the platforms where multiple modes are supported. Mint , Passthrough , Manual or an empty string ( "" ). 
[1] Enable or disable FIPS mode. The default is false (disabled). If FIPS mode is enabled, the Red Hat Enterprise Linux CoreOS (RHCOS) machines that OpenShift Container Platform runs on bypass the default Kubernetes cryptography suite and use the cryptography modules that are provided with RHCOS instead. Important To enable FIPS mode for your cluster, you must run the installation program from a Red Hat Enterprise Linux (RHEL) computer configured to operate in FIPS mode. For more information about configuring FIPS mode on RHEL, see Installing the system in FIPS mode . When running Red Hat Enterprise Linux (RHEL) or Red Hat Enterprise Linux CoreOS (RHCOS) booted in FIPS mode, OpenShift Container Platform core components use the RHEL cryptographic libraries that have been submitted to NIST for FIPS 140-2/140-3 Validation on only the x86_64, ppc64le, and s390x architectures. Note If you are using Azure File storage, you cannot enable FIPS mode. false or true Sources and repositories for the release-image content. Array of objects. Includes a source and, optionally, mirrors , as described in the following rows of this table. Required if you use imageContentSources . Specify the repository that users refer to, for example, in image pull specifications. String Specify one or more repositories that may also contain the same images. Array of strings How to publish or expose the user-facing endpoints of your cluster, such as the Kubernetes API, OpenShift routes. Internal or External . The default value is External . Setting this field to Internal is not supported on non-cloud platforms. Important If the value of the field is set to Internal , the cluster will become non-functional. For more information, refer to BZ#1953035 . The SSH key to authenticate access to your cluster machines. Note For production OpenShift Container Platform clusters on which you want to perform installation debugging or disaster recovery, specify an SSH key that your ssh-agent process uses. For example, sshKey: ssh-ed25519 AAAA.. . Not all CCO modes are supported for all cloud providers. For more information about CCO modes, see the "Managing cloud provider credentials" entry in the Authentication and authorization content.
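Taken together, the required and common optional parameters produce an install-config.yaml of roughly the following shape. This is a minimal sketch, not a validated sample: every value is a placeholder to replace for your environment, and the platform: none: {} stanza is an assumption reflecting a typical user-provisioned IBM Power(R) installation rather than a value taken from the tables above.

# Minimal illustrative sketch of install-config.yaml for IBM Power(R); replace all placeholder values
apiVersion: v1
baseDomain: example.com
metadata:
  name: example-cluster
compute:
- architecture: ppc64le
  hyperthreading: Enabled
  name: worker
  replicas: 3
controlPlane:
  architecture: ppc64le
  hyperthreading: Enabled
  name: master
  replicas: 3
networking:
  networkType: OVNKubernetes
  clusterNetwork:
  - cidr: 10.128.0.0/14
    hostPrefix: 23
  machineNetwork:
  - cidr: 10.0.0.0/16
  serviceNetwork:
  - 172.30.0.0/16
platform:
  none: {}                      # assumption: user-provisioned infrastructure
fips: false
pullSecret: '{"auths": ...}'    # contents of your pull secret
sshKey: 'ssh-ed25519 AAAA...'   # public key for access to cluster machines

Because these parameters cannot be modified in install-config.yaml after installation, verify the networking CIDRs and architecture values before you run the installation program.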
[ "apiVersion:", "baseDomain:", "metadata:", "metadata: name:", "platform:", "pullSecret:", "{ \"auths\":{ \"cloud.openshift.com\":{ \"auth\":\"b3Blb=\", \"email\":\"[email protected]\" }, \"quay.io\":{ \"auth\":\"b3Blb=\", \"email\":\"[email protected]\" } } }", "networking: clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23 - cidr: fd00:10:128::/56 hostPrefix: 64 serviceNetwork: - 172.30.0.0/16 - fd00:172:16::/112", "networking:", "networking: networkType:", "networking: clusterNetwork:", "networking: clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23", "networking: clusterNetwork: cidr:", "networking: clusterNetwork: hostPrefix:", "networking: serviceNetwork:", "networking: serviceNetwork: - 172.30.0.0/16", "networking: machineNetwork:", "networking: machineNetwork: - cidr: 10.0.0.0/16", "networking: machineNetwork: cidr:", "additionalTrustBundle:", "capabilities:", "capabilities: baselineCapabilitySet:", "capabilities: additionalEnabledCapabilities:", "cpuPartitioningMode:", "compute:", "compute: architecture:", "compute: hyperthreading:", "compute: name:", "compute: platform:", "compute: replicas:", "featureSet:", "controlPlane:", "controlPlane: architecture:", "controlPlane: hyperthreading:", "controlPlane: name:", "controlPlane: platform:", "controlPlane: replicas:", "credentialsMode:", "fips:", "imageContentSources:", "imageContentSources: source:", "imageContentSources: mirrors:", "publish:", "sshKey:" ]
https://docs.redhat.com/en/documentation/openshift_container_platform_installation/4.14/html/installing_on_ibm_power/installation-config-parameters-ibm-power
F.3. Websocket Proxy
F.3. Websocket Proxy F.3.1. Websocket Proxy Overview The websocket proxy allows users to connect to virtual machines via a noVNC console. The websocket proxy can be installed and configured on the Red Hat Virtualization Manager machine during the initial configuration (see Configuring the Red Hat Virtualization Manager ).
https://docs.redhat.com/en/documentation/red_hat_virtualization/4.4/html/administration_guide/sect-websocket_proxy
39.2. Examples for Using ipa migrate-ds
39.2. Examples for Using ipa migrate-ds The data migration is performed using the ipa migrate-ds command. At its simplest, the command takes the LDAP URL of the directory to migrate and exports the data based on common default settings. Migrated entries The migrate-ds command only migrates accounts containing a gidNumber attribute, which is required by the posixAccount object class, and an sn attribute, which is required by the person object class. Customizing the process The ipa migrate-ds command enables you to customize how data is identified and exported. This is useful if the original directory tree has a unique structure or if some entries or attributes within entries should be excluded. For further details, pass the --help option to the command. Bind DN By default, the DN " cn=Directory Manager " is used to bind to the remote LDAP directory. Pass the --bind-dn option to the command to specify a custom bind DN. For further information, see Section 39.1.3.5, "Migration Tools" . Naming context changes If the Directory Server naming context differs from the one used in Identity Management, the base DNs for objects are transformed. For example: uid= user ,ou=people,dc=ldap,dc=example,dc=com is migrated to uid= user ,ou=people,dc=idm,dc=example,dc=com . Pass the --base-dn option to the ipa migrate-ds command to set the base DN used on the remote LDAP server for the migration. 39.2.1. Migrating Specific Subtrees The default directory structure places person entries in the ou=People subtree and group entries in the ou=Groups subtree. These subtrees are container entries for those different types of directory data. If no options are passed with the migrate-ds command, then the utility assumes that the given LDAP directory uses the ou=People and ou=Groups structure. Many deployments may have an entirely different directory structure (or may only want to export certain parts of the directory tree). There are two options that allow administrators to specify the RDN of a different user or group subtree on the source LDAP server: --user-container --group-container Note In both cases, the subtree must be the RDN only and must be relative to the base DN. For example, the ou=Employees,dc=example,dc=com directory tree can be migrated using --user-container=ou=Employees . For example: Pass the --scope option to the ipa migrate-ds command to set a scope: onelevel : Default. Only entries in the specified container are migrated. subtree : Entries in the specified container and all subcontainers are migrated. base : Only the specified object itself is migrated. 39.2.2. Specifically Including or Excluding Entries By default, the ipa migrate-ds script imports every user entry with the person object class and every group entry with the groupOfUniqueNames or groupOfNames object class. In some migration paths, only specific types of users and groups may need to be exported, or, conversely, specific users and groups may need to be excluded. One option is to specify positively which types of users and groups to include. This is done by setting which object classes to search for when looking for user or group entries. This is a particularly useful option when custom object classes are used in an environment for different user types. For example, this migrates only users with the custom fullTimeEmployee object class: Because of the different types of groups, this is also very useful for migrating only certain types of groups (such as user groups) while excluding other types of groups, like certificate groups. 
For example: Positively specifying users and groups to migrate based on object class implicitly excludes all other users and groups from migration. Alternatively, it can be useful to migrate all user and group entries except for a small number of entries. Specific user or group accounts can be excluded while all others of that type are migrated. For example, this excludes a hobbies group and two users: Exclude statements are applied to users matching the pattern in the uid attribute and to groups matching it in the cn attribute. Specifying an object class to migrate can be used together with excluding specific entries. For example, this specifically includes users with the fullTimeEmployee object class, yet excludes three managers: 39.2.3. Excluding Entry Attributes By default, every attribute and object class for a user or group entry is migrated. There are some cases where that may not be realistic, either because of bandwidth and network constraints or because the attribute data are no longer relevant. For example, if users are going to be assigned new user certificates as they join the IdM domain, then there is no reason to migrate the userCertificate attribute. Specific object classes and attributes can be ignored by the migrate-ds command using any of several options: --user-ignore-objectclass --user-ignore-attribute --group-ignore-objectclass --group-ignore-attribute For example, to exclude the userCertificate attribute and strongAuthenticationUser object class for users and the groupOfCertificates object class for groups: Note Make sure not to ignore any required attributes. Also, when excluding object classes, make sure to exclude any attributes that are only supported by that object class. 39.2.4. Setting the Schema to Use Identity Management uses the RFC2307bis schema to define user, host, host group, and other network identities. However, if the LDAP server used as the source for a migration uses the RFC2307 schema instead, pass the --schema option to the ipa migrate-ds command:
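For example, using the same example server as the other commands in this section (the command also appears in the command listing that follows):

ipa migrate-ds --schema=RFC2307 ldap://ldap.example.com:389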
[ "ipa migrate-ds ldap://ldap.example.com:389", "ipa migrate-ds --user-container=ou=employees --group-container=\"ou=employee groups\" ldap://ldap.example.com:389", "ipa migrate-ds --user-objectclass=fullTimeEmployee ldap://ldap.example.com:389", "ipa migrate-ds --group-objectclass=groupOfNames --group-objectclass=groupOfUniqueNames ldap://ldap.example.com:389", "ipa migrate-ds --exclude-groups=\"Golfers Group\" --exclude-users=jsmith --exclude-users=bjensen ldap://ldap.example.com:389", "ipa migrate-ds --user-objectclass=fullTimeEmployee --exclude-users=jsmith --exclude-users=bjensen --exclude-users=mreynolds ldap://ldap.example.com:389", "ipa migrate-ds --user-ignore-attribute=userCertificate --user-ignore-objectclass=strongAuthenticationUser --group-ignore-objectclass=groupOfCertificates ldap://ldap.example.com:389", "ipa migrate-ds --schema=RFC2307 ldap://ldap.example.com:389" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/linux_domain_identity_authentication_and_policy_guide/using-migrate-ds
Chapter 14. Using Cruise Control for cluster rebalancing
Chapter 14. Using Cruise Control for cluster rebalancing Cruise Control is an open source system for automating Kafka operations, such as monitoring cluster workload, rebalancing a cluster based on predefined constraints, and detecting and fixing anomalies. It consists of four main components- the Load Monitor, the Analyzer, the Anomaly Detector, and the Executor- and a REST API for client interactions. You can use Cruise Control to rebalance a Kafka cluster. Cruise Control for AMQ Streams on Red Hat Enterprise Linux is provided as a separate zipped distribution. AMQ Streams utilizes the REST API to support the following Cruise Control features: Generating optimization proposals from optimization goals. Rebalancing a Kafka cluster based on an optimization proposal. Optimization goals An optimization goal describes a specific objective to achieve from a rebalance. For example, a goal might be to distribute topic replicas across brokers more evenly. You can change what goals to include through configuration. A goal is defined as a hard goal or soft goal. You can add hard goals through Cruise Control deployment configuration. You also have main, default, and user-provided goals that fit into each of these categories. Hard goals are preset and must be satisfied for an optimization proposal to be successful. Soft goals do not need to be satisfied for an optimization proposal to be successful. They can be set aside if it means that all hard goals are met. Main goals are inherited from Cruise Control. Some are preset as hard goals. Main goals are used in optimization proposals by default. Default goals are the same as the main goals by default. You can specify your own set of default goals. User-provided goals are a subset of default goals that are configured for generating a specific optimization proposal. Optimization proposals Optimization proposals comprise the goals you want to achieve from a rebalance. You generate an optimization proposal to create a summary of proposed changes and the results that are possible with the rebalance. The goals are assessed in a specific order of priority. You can then choose to approve or reject the proposal. You can reject the proposal to run it again with an adjusted set of goals. You can generate and approve an optimization proposal by making a request to one of the following API endpoints. /rebalance endpoint to run a full rebalance. /add_broker endpoint to rebalance after adding brokers when scaling up a Kafka cluster. /remove_broker endpoint to rebalance before removing brokers when scaling down a Kafka cluster. You configure optimization goals through a configuration properties file. AMQ Streams provides example properties files for Cruise Control. Other Cruise Control features are not currently supported, including self healing, notifications, write-your-own goals, and changing the topic replication factor. 14.1. Cruise Control components and features Cruise Control consists of four main components- the Load Monitor, the Analyzer, the Anomaly Detector, and the Executor- and a REST API for client interactions. AMQ Streams utilizes the REST API to support the following Cruise Control features: Generating optimization proposals from optimization goals. Rebalancing a Kafka cluster based on an optimization proposal. Optimization goals An optimization goal describes a specific objective to achieve from a rebalance. For example, a goal might be to distribute topic replicas across brokers more evenly. 
You can change what goals to include through configuration. A goal is defined as a hard goal or soft goal. You can add hard goals through Cruise Control deployment configuration. You also have main, default, and user-provided goals that fit into each of these categories. Hard goals are preset and must be satisfied for an optimization proposal to be successful. Soft goals do not need to be satisfied for an optimization proposal to be successful. They can be set aside if it means that all hard goals are met. Main goals are inherited from Cruise Control. Some are preset as hard goals. Main goals are used in optimization proposals by default. Default goals are the same as the main goals by default. You can specify your own set of default goals. User-provided goals are a subset of default goals that are configured for generating a specific optimization proposal. Optimization proposals Optimization proposals comprise the goals you want to achieve from a rebalance. You generate an optimization proposal to create a summary of proposed changes and the results that are possible with the rebalance. The goals are assessed in a specific order of priority. You can then choose to approve or reject the proposal. You can reject the proposal to run it again with an adjusted set of goals. You can generate an optimization proposal in one of three modes. full is the default mode and runs a full rebalance. add-brokers is the mode you use after adding brokers when scaling up a Kafka cluster. remove-brokers is the mode you use before removing brokers when scaling down a Kafka cluster. Other Cruise Control features are not currently supported, including self healing, notifications, write-your-own goals, and changing the topic replication factor. Additional resources Cruise Control documentation 14.2. Downloading Cruise Control A ZIP file distribution of Cruise Control is available for download from the Red Hat website. You can download the latest version of Red Hat AMQ Streams from the AMQ Streams software downloads page . Procedure Download the latest version of the Red Hat AMQ Streams Cruise Control archive from the Red Hat Customer Portal . Create the /opt/cruise-control directory: sudo mkdir /opt/cruise-control Extract the contents of the Cruise Control ZIP file to the new directory: unzip amq-streams-<version>-cruise-control-bin.zip -d /opt/cruise-control Change the ownership of the /opt/cruise-control directory to the kafka user: sudo chown -R kafka:kafka /opt/cruise-control 14.3. Deploying the Cruise Control Metrics Reporter Before starting Cruise Control, you must configure the Kafka brokers to use the provided Cruise Control Metrics Reporter. The file for the Metrics Reporter is supplied with the AMQ Streams installation artifacts. When loaded at runtime, the Metrics Reporter sends metrics to the __CruiseControlMetrics topic, one of three auto-created topics . Cruise Control uses these metrics to create and update the workload model and to calculate optimization proposals. Prerequisites You are logged in to Red Hat Enterprise Linux as the kafka user. Kafka and ZooKeeper are running. Procedure For each broker in the Kafka cluster and one at a time: Stop the Kafka broker: /opt/kafka/bin/kafka-server-stop.sh In the Kafka configuration file ( /opt/kafka/config/server.properties ) configure the Cruise Control Metrics Reporter: Add the CruiseControlMetricsReporter class to the metric.reporters configuration option. Do not remove any existing Metrics Reporters. 
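As a sketch only, the resulting additions to /opt/kafka/config/server.properties look like the following. The fully qualified class name is taken from the upstream Cruise Control project and is an assumption here, and the three metrics topic options are the ones discussed under the log cleanup policy for the Cruise Control Metrics topic later in this chapter; keep any Metrics Reporters that are already configured in metric.reporters.

# Append the Cruise Control Metrics Reporter to any existing metric.reporters entries
metric.reporters=com.linkedin.kafka.cruisecontrol.metricsreporter.CruiseControlMetricsReporter
# Metrics topic options (see the log cleanup policy discussion later in this chapter)
cruise.control.metrics.topic.auto.create=true
cruise.control.metrics.topic.num.partitions=1
cruise.control.metrics.topic.replication.factor=1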
Add the following configuration options and values: These options enable the Cruise Control Metrics Reporter to create the __CruiseControlMetrics topic with a log cleanup policy of DELETE . For more information, see Auto-created topics and Log cleanup policy for Cruise Control Metrics topic . Configure SSL, if required. In the Kafka configuration file ( /opt/kafka/config/server.properties ) configure SSL between the Cruise Control Metrics Reporter and the Kafka broker by setting the relevant client configuration properties. The Metrics Reporter accepts all standard producer-specific configuration properties with the cruise.control.metrics.reporter prefix. For example: cruise.control.metrics.reporter.ssl.truststore.password . In the Cruise Control properties file ( /opt/cruise-control/config/cruisecontrol.properties ) configure SSL between the Kafka broker and the Cruise Control server by setting the relevant client configuration properties. Cruise Control inherits SSL client property options from Kafka and uses those properties for all Cruise Control server clients. Restart the Kafka broker: /opt/kafka/bin/kafka-server-start.sh -daemon /opt/kafka/config/server.properties For information on restarting brokers in a multi-node cluster, see Section 4.3, "Performing a graceful rolling restart of Kafka brokers" . Repeat steps 1-5 for the remaining brokers. 14.4. Configuring and starting Cruise Control Configure the properties used by Cruise Control and then start the Cruise Control server using the kafka-cruise-control-start.sh script. The server is hosted on a single machine for the whole Kafka cluster. Three topics are auto-created when Cruise Control starts. For more information, see Auto-created topics . Prerequisites You are logged in to Red Hat Enterprise Linux as the kafka user. Section 14.2, "Downloading Cruise Control" Section 14.3, "Deploying the Cruise Control Metrics Reporter" Procedure Edit the Cruise Control properties file ( /opt/cruise-control/config/cruisecontrol.properties ). Configure the properties shown in the following example configuration: # The Kafka cluster to control. bootstrap.servers=localhost:9092 1 # The replication factor of Kafka metric sample store topic sample.store.topic.replication.factor=2 2 # The configuration for the BrokerCapacityConfigFileResolver (supports JBOD, non-JBOD, and heterogeneous CPU core capacities) #capacity.config.file=config/capacity.json #capacity.config.file=config/capacityCores.json capacity.config.file=config/capacityJBOD.json 3 # The list of goals to optimize the Kafka cluster for with pre-computed proposals default.goals={List of default optimization goals} 4 # The list of supported goals goals={list of main optimization goals} 5 # The list of supported hard goals hard.goals={List of hard goals} 6 # How often should the cached proposal be expired and recalculated if necessary proposal.expiration.ms=60000 7 # The zookeeper connect of the Kafka cluster zookeeper.connect=localhost:2181 8 1 Host and port numbers of the Kafka broker (always port 9092). 2 Replication factor of the Kafka metric sample store topic. If you are evaluating Cruise Control in a single-node Kafka and ZooKeeper cluster, set this property to 1. For production use, set this property to 2 or more. 3 The configuration file that sets the maximum capacity limits for broker resources. Use the file that applies to your Kafka deployment configuration. For more information, see Capacity configuration . 
4 Comma-separated list of default optimization goals, using fully-qualified domain names (FQDNs). A number of main optimization goals (see 5) are already set as default optimization goals; you can add or remove goals if desired. For more information, see Section 14.5, "Optimization goals overview" . 5 Comma-separated list of main optimization goals, using FQDNs. To completely exclude goals from being used to generate optimization proposals, remove them from the list. For more information, see Section 14.5, "Optimization goals overview" . 6 Comma-separated list of hard goals, using FQDNs. Seven of the main optimization goals are already set as hard goals; you can add or remove goals if desired. For more information, see Section 14.5, "Optimization goals overview" . 7 The interval, in milliseconds, for refreshing the cached optimization proposal that is generated from the default optimization goals. For more information, see Section 14.6, "Optimization proposals overview" . 8 Host and port numbers of the ZooKeeper connection (always port 2181). Start the Cruise Control server. The server starts on port 9092 by default; optionally, specify a different port. cd /opt/cruise-control/ ./kafka-cruise-control-start.sh config/cruisecontrol.properties <port_number> To verify that Cruise Control is running, send a GET request to the /state endpoint of the Cruise Control server: curl 'http://HOST:PORT/kafkacruisecontrol/state' Auto-created topics The following table shows the three topics that are automatically created when Cruise Control starts. These topics are required for Cruise Control to work properly and must not be deleted or changed. Table 14.1. Auto-created topics Auto-created topic Created by Function __CruiseControlMetrics Cruise Control Metrics Reporter Stores the raw metrics from the Metrics Reporter in each Kafka broker. __KafkaCruiseControlPartitionMetricSamples Cruise Control Stores the derived metrics for each partition. These are created by the Metric Sample Aggregator . __KafkaCruiseControlModelTrainingSamples Cruise Control Stores the metrics samples used to create the Cluster Workload Model . To ensure that log compaction is disabled in the auto-created topics, make sure that you configure the Cruise Control Metrics Reporter as described in Section 14.3, "Deploying the Cruise Control Metrics Reporter" . Log compaction can remove records that are needed by Cruise Control and prevent it from working properly. Additional resources Log cleanup policy for Cruise Control Metrics topic 14.5. Optimization goals overview Optimization goals are constraints on workload redistribution and resource utilization across a Kafka cluster. To rebalance a Kafka cluster, Cruise Control uses optimization goals to generate optimization proposals . 14.5.1. Goals order of priority AMQ Streams on Red Hat Enterprise Linux supports all the optimization goals developed in the Cruise Control project. 
The supported goals, in the default descending order of priority, are as follows: Rack-awareness Minimum number of leader replicas per broker for a set of topics Replica capacity Capacity: Disk capacity, Network inbound capacity, Network outbound capacity CPU capacity Replica distribution Potential network output Resource distribution: Disk utilization distribution, Network inbound utilization distribution, Network outbound utilization distribution Leader bytes-in rate distribution Topic replica distribution CPU usage distribution Leader replica distribution Preferred leader election Kafka Assigner disk usage distribution Intra-broker disk capacity Intra-broker disk usage For more information on each optimization goal, see Goals in the Cruise Control Wiki . 14.5.2. Goals configuration in the Cruise Control properties file You configure optimization goals in the cruisecontrol.properties file in the cruise-control/config/ directory. Cruise Control has configurations for hard optimization goals that must be satisfied, as well as main, default, and user-provided optimization goals. You can specify the following types of optimization goal in the following configuration: Main goals - cruisecontrol.properties file Hard goals - cruisecontrol.properties file Default goals - cruisecontrol.properties file User-provided goals - runtime parameters Optionally, user-provided optimization goals are set at runtime as parameters in requests to the /rebalance endpoint. Optimization goals are subject to any capacity limits on broker resources. 14.5.3. Hard and soft optimization goals Hard goals are goals that must be satisfied in optimization proposals. Goals that are not configured as hard goals are known as soft goals . You can think of soft goals as best effort goals: they do not need to be satisfied in optimization proposals, but are included in optimization calculations. Cruise Control will calculate optimization proposals that satisfy all the hard goals and as many soft goals as possible (in their priority order). An optimization proposal that does not satisfy all the hard goals is rejected by the Analyzer and is not sent to the user. Note For example, you might have a soft goal to distribute a topic's replicas evenly across the cluster (the topic replica distribution goal). Cruise Control will ignore this goal if doing so enables all the configured hard goals to be met. In Cruise Control, the following main optimization goals are preset as hard goals: To change the hard goals, edit the hard.goals property of the cruisecontrol.properties file and specify the goals using their fully-qualified domain names. Increasing the number of hard goals reduces the likelihood that Cruise Control will calculate and generate valid optimization proposals. 14.5.4. Main optimization goals The main optimization goals are available to all users. Goals that are not listed in the main optimization goals are not available for use in Cruise Control operations. The following main optimization goals are preset in the goals property of the cruisecontrol.properties file in descending priority order: To reduce complexity, we recommend that you do not change the preset main optimization goals, unless you need to completely exclude one or more goals from being used to generate optimization proposals. The priority order of the main optimization goals can be modified, if desired, in the configuration for default optimization goals. 
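As an abbreviated illustration only, the goal configuration in cruisecontrol.properties takes the following form. The class names are taken from the upstream Cruise Control project, only a handful of goals are shown, and the shipped example properties file contains the complete preset lists in their priority order; treat the lists here as placeholders rather than the preset values.

# Abbreviated sketch -- not the complete preset goal lists
goals=com.linkedin.kafka.cruisecontrol.analyzer.goals.RackAwareGoal,com.linkedin.kafka.cruisecontrol.analyzer.goals.ReplicaCapacityGoal,com.linkedin.kafka.cruisecontrol.analyzer.goals.DiskCapacityGoal,com.linkedin.kafka.cruisecontrol.analyzer.goals.ReplicaDistributionGoal
# Hard goals must be a subset of the main goals above
hard.goals=com.linkedin.kafka.cruisecontrol.analyzer.goals.RackAwareGoal,com.linkedin.kafka.cruisecontrol.analyzer.goals.ReplicaCapacityGoal,com.linkedin.kafka.cruisecontrol.analyzer.goals.DiskCapacityGoal
# Default goals must also be a subset of the main goals
default.goals=com.linkedin.kafka.cruisecontrol.analyzer.goals.RackAwareGoal,com.linkedin.kafka.cruisecontrol.analyzer.goals.ReplicaCapacityGoal,com.linkedin.kafka.cruisecontrol.analyzer.goals.DiskCapacityGoal,com.linkedin.kafka.cruisecontrol.analyzer.goals.ReplicaDistributionGoal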
To modify the preset main optimization goals, specify a list of goals in the goals property in descending priority order. Use fully-qualified domain names as shown in the cruisecontrol.properties file. You must specify at least one main goal, or Cruise Control will crash. Note If you change the preset main optimization goals, you must ensure that the configured hard.goals are a subset of the main optimization goals that you configured. Otherwise, errors will occur when generating optimization proposals. 14.5.5. Default optimization goals Cruise Control uses the default optimization goals list to generate the cached optimization proposal . For more information, see Section 14.6, "Optimization proposals overview" . You can override the default optimization goals at runtime by setting user-provided optimization goals . The following default optimization goals are preset in the default.goals property of the cruisecontrol.properties file in descending priority order: You must specify at least one default goal, or Cruise Control will crash. To modify the default optimization goals, specify a list of goals in the default.goals property in descending priority order. Default goals must be a subset of the main optimization goals; use fully-qualified domain names. 14.5.6. User-provided optimization goals User-provided optimization goals narrow down the configured default goals for a particular optimization proposal. You can set them, as required, as parameters in HTTP requests to the /rebalance endpoint. For more information, see Section 14.9, "Generating optimization proposals" . User-provided optimization goals can generate optimization proposals for different scenarios. For example, you might want to optimize leader replica distribution across the Kafka cluster without considering disk capacity or disk utilization. So, you send a request to the /rebalance endpoint containing a single goal for leader replica distribution. User-provided optimization goals must: Include all configured hard goals , or an error occurs Be a subset of the main optimization goals To ignore the configured hard goals in an optimization proposal, add the skip_hard_goals_check=true parameter to the request. Additional resources Cruise Control configuration Configurations in the Cruise Control Wiki 14.6. Optimization proposals overview An optimization proposal is a summary of proposed changes that would produce a more balanced Kafka cluster, with partition workloads distributed more evenly among the brokers. Each optimization proposal is based on the set of optimization goals that was used to generate it, subject to any configured capacity limits on broker resources. All optimization proposals are estimates of the impact of a proposed rebalance. You can approve or reject a proposal. You cannot approve a cluster rebalance without first generating the optimization proposal. You can run the optimization proposal using one of the following endpoints: /rebalance /add_broker /remove_broker 14.6.1. Rebalancing endpoints You specify a rebalancing endpoint when you send a POST request to generate an optimization proposal. /rebalance The /rebalance endpoint runs a full rebalance by moving replicas across all the brokers in the cluster. /add_broker The add_broker endpoint is used after scaling up a Kafka cluster by adding one or more brokers. Normally, after scaling up a Kafka cluster, new brokers are used to host only the partitions of newly created topics. 
If no new topics are created, the newly added brokers are not used and the existing brokers remain under the same load. By using the add_broker endpoint immediately after adding brokers to the cluster, the rebalancing operation moves replicas from existing brokers to the newly added brokers. You specify the new brokers as a brokerid list in the POST request. /remove_broker The /remove_broker endpoint is used before scaling down a Kafka cluster by removing one or more brokers. If you scale down a Kafka cluster, brokers are shut down even if they host replicas. This can lead to under-replicated partitions and possibly result in some partitions being under their minimum ISR (in-sync replicas). To avoid this potential problem, the /remove_broker endpoint moves replicas off the brokers that are going to be removed. When these brokers are not hosting replicas anymore, you can safely run the scaling down operation. You specify the brokers you're removing as a brokerid list in the POST request. In general, use the /rebalance endpoint to rebalance a Kafka cluster by spreading the load across brokers. Use the /add-broker endpoint and /remove_broker endpoint only if you want to scale your cluster up or down and rebalance the replicas accordingly. The procedure to run a rebalance is actually the same across the three different endpoints. The only difference is with listing brokers that have been added or will be removed to the request. 14.6.2. Approving or rejecting an optimization proposal An optimization proposal summary shows the proposed scope of changes. The summary is returned in a response to a HTTP request through the Cruise Control API. When you make a POST request to the /rebalance endpoint, an optimization proposal summary is returned in the response. Returning an optimization proposal summary curl -v -X POST 'cruise-control-server:9090/kafkacruisecontrol/rebalance' Use the summary to decide whether to approve or reject an optimization proposal. Approving an optimization proposal You approve the optimization proposal by making a POST request to the /rebalance endpoint and setting the dryrun parameter to false (default true ). Cruise Control applies the proposal to the Kafka cluster and starts a cluster rebalance operation. Rejecting an optimization proposal If you choose not to approve an optimization proposal, you can change the optimization goals or update any of the rebalance performance tuning options , and then generate another proposal. You can resend a request without the dryrun parameter to generate a new optimization proposal. Use the optimization proposal to assess the movements required for a rebalance. For example, a summary describes inter-broker and intra-broker movements. Inter-broker rebalancing moves data between separate brokers. Intra-broker rebalancing moves data between disks on the same broker when you are using a JBOD storage configuration. Such information can be useful even if you don't go ahead and approve the proposal. You might reject an optimization proposal, or delay its approval, because of the additional load on a Kafka cluster when rebalancing. In the following example, the proposal suggests the rebalancing of data between separate brokers. The rebalance involves the movement of 55 partition replicas, totaling 12MB of data, across the brokers. Though the inter-broker movement of partition replicas has a high impact on performance, the total amount of data is not large. 
If the total data was much larger, you could reject the proposal, or time when to approve the rebalance to limit the impact on the performance of the Kafka cluster. Rebalance performance tuning options can help reduce the impact of data movement. If you can extend the rebalance period, you can divide the rebalance into smaller batches. Fewer data movements at a single time reduces the load on the cluster. Example optimization proposal summary Optimization has 55 inter-broker replica (12 MB) moves, 0 intra-broker replica (0 MB) moves and 24 leadership moves with a cluster model of 5 recent windows and 100.000% of the partitions covered. Excluded Topics: []. Excluded Brokers For Leadership: []. Excluded Brokers For Replica Move: []. Counts: 3 brokers 343 replicas 7 topics. On-demand Balancedness Score Before (78.012) After (82.912). Provision Status: RIGHT_SIZED. The proposal will also move 24 partition leaders to different brokers. This requires a change to the ZooKeeper configuration, which has a low impact on performance. The balancedness scores are measurements of the overall balance of the Kafka Cluster before and after the optimization proposal is approved. A balancedness score is based on optimization goals. If all goals are satisfied, the score is 100. The score is reduced for each goal that will not be met. Compare the balancedness scores to see whether the Kafka cluster is less balanced than it could be following a rebalance. The provision status indicates whether the current cluster configuration supports the optimization goals. Check the provision status to see if you should add or remove brokers. Table 14.2. Optimization proposal provision status Status Description RIGHT_SIZED The cluster has an appropriate number of brokers to satisfy the optimization goals. UNDER_PROVISIONED The cluster is under-provisioned and requires more brokers to satisfy the optimization goals. OVER_PROVISIONED The cluster is over-provisioned and requires fewer brokers to satisfy the optimization goals. UNDECIDED The status is not relevant or it has not yet been decided. 14.6.3. Optimization proposal summary properties The following table describes the properties contained in an optimization proposal. Table 14.3. Properties contained in an optimization proposal summary Property Description n inter-broker replica (y MB) moves n : The number of partition replicas that will be moved between separate brokers. Performance impact during rebalance operation : Relatively high. y MB : The sum of the size of each partition replica that will be moved to a separate broker. Performance impact during rebalance operation : Variable. The larger the number of MBs, the longer the cluster rebalance will take to complete. n intra-broker replica (y MB) moves n : The total number of partition replicas that will be transferred between the disks of the cluster's brokers. Performance impact during rebalance operation : Relatively high, but less than inter-broker replica moves . y MB : The sum of the size of each partition replica that will be moved between disks on the same broker. Performance impact during rebalance operation : Variable. The larger the number, the longer the cluster rebalance will take to complete. Moving a large amount of data between disks on the same broker has less impact than between separate brokers (see inter-broker replica moves ). n excluded topics The number of topics excluded from the calculation of partition replica/leader movements in the optimization proposal. 
You can exclude topics in one of the following ways: In the cruisecontrol.properties file, specify a regular expression in the topics.excluded.from.partition.movement property. In a POST request to the /rebalance endpoint, specify a regular expression in the excluded_topics parameter. Topics that match the regular expression are listed in the response and will be excluded from the cluster rebalance. n leadership moves n : The number of partitions whose leaders will be switched to different replicas. This involves a change to ZooKeeper configuration. Performance impact during rebalance operation : Relatively low. n recent windows n : The number of metrics windows upon which the optimization proposal is based. n% of the partitions covered n% : The percentage of partitions in the Kafka cluster covered by the optimization proposal. On-demand Balancedness Score Before (nn.yyy) After (nn.yyy) Measurements of the overall balance of a Kafka Cluster. Cruise Control assigns a Balancedness Score to every optimization goal based on several factors, including priority (the goal's position in the list of default.goals or user-provided goals). The On-demand Balancedness Score is calculated by subtracting the sum of the Balancedness Score of each violated soft goal from 100. The Before score is based on the current configuration of the Kafka cluster. The After score is based on the generated optimization proposal. 14.6.4. Cached optimization proposal Cruise Control maintains a cached optimization proposal based on the configured default optimization goals . Generated from the workload model, the cached optimization proposal is updated every 15 minutes to reflect the current state of the Kafka cluster. The most recent cached optimization proposal is returned when the following goal configurations are used: The default optimization goals User-provided optimization goals that can be met by the current cached proposal To change the cached optimization proposal refresh interval, edit the proposal.expiration.ms setting in the cruisecontrol.properties file. Consider a shorter interval for fast changing clusters, although this increases the load on the Cruise Control server. Additional resources Optimization goals overview Generating optimization proposals Initiating a cluster rebalance 14.7. Rebalance performance tuning overview You can adjust several performance tuning options for cluster rebalances. These options control how partition replica and leadership movements in a rebalance are executed, as well as the bandwidth that is allocated to a rebalance operation. Partition reassignment commands Optimization proposals are composed of separate partition reassignment commands. When you initiate a proposal, the Cruise Control server applies these commands to the Kafka cluster. A partition reassignment command consists of either of the following types of operations: Partition movement : Involves transferring the partition replica and its data to a new location. Partition movements can take one of two forms: Inter-broker movement: The partition replica is moved to a log directory on a different broker. Intra-broker movement: The partition replica is moved to a different log directory on the same broker. Leadership movement : Involves switching the leader of the partition's replicas. Cruise Control issues partition reassignment commands to the Kafka cluster in batches. The performance of the cluster during the rebalance is affected by the number of each type of movement contained in each batch. 
To configure partition reassignment commands, see Rebalance tuning options . Replica movement strategies Cluster rebalance performance is also influenced by the replica movement strategy that is applied to the batches of partition reassignment commands. By default, Cruise Control uses the BaseReplicaMovementStrategy , which applies the commands in the order in which they were generated. However, if there are some very large partition reassignments early in the proposal, this strategy can slow down the application of the other reassignments. Cruise Control provides three alternative replica movement strategies that can be applied to optimization proposals: PrioritizeSmallReplicaMovementStrategy : Order reassignments in ascending size. PrioritizeLargeReplicaMovementStrategy : Order reassignments in descending size. PostponeUrpReplicaMovementStrategy : Prioritize reassignments for replicas of partitions which have no out-of-sync replicas. These strategies can be configured as a sequence. The first strategy attempts to compare two partition reassignments using its internal logic. If the reassignments are equivalent, then it passes them to the strategy in the sequence to decide the order, and so on. To configure replica movement strategies, see Rebalance tuning options . Rebalance tuning options Cruise Control provides several configuration options for tuning rebalance parameters. These options are set in the following ways: As properties, in the default Cruise Control configuration, in the cruisecontrol.properties file As parameters in POST requests to the /rebalance endpoint The relevant configurations for both methods are summarized in the following table. Table 14.4. Rebalance performance tuning configuration Cruise Control properties KafkaRebalance parameters Default Description num.concurrent.partition.movements.per.broker concurrent_partition_movements_per_broker 5 The maximum number of inter-broker partition movements in each partition reassignment batch num.concurrent.intra.broker.partition.movements concurrent_intra_broker_partition_movements 2 The maximum number of intra-broker partition movements in each partition reassignment batch num.concurrent.leader.movements concurrent_leader_movements 1000 The maximum number of partition leadership changes in each partition reassignment batch default.replication.throttle replication_throttle Null (no limit) The bandwidth (in bytes per second) to assign to partition reassignment default.replica.movement.strategies replica_movement_strategies BaseReplicaMovementStrategy The list of strategies (in priority order) used to determine the order in which partition reassignment commands are executed for generated proposals. There are three strategies: PrioritizeSmallReplicaMovementStrategy , PrioritizeLargeReplicaMovementStrategy , and PostponeUrpReplicaMovementStrategy . For the server setting, use a comma-separated list with the fully qualified names of the strategy class (add com.linkedin.kafka.cruisecontrol.executor.strategy. to the start of each class name). For the rebalance parameters, use a comma-separated list of the class names of the replica movement strategies. Changing the default settings affects the length of time that the rebalance takes to complete, as well as the load placed on the Kafka cluster during the rebalance. Using lower values reduces the load but increases the amount of time taken, and vice versa. Additional resources Configurations in the Cruise Control Wiki REST APIs in the Cruise Control Wiki 14.8. 
Cruise Control configuration The config/cruisecontrol.properties file contains the configuration for Cruise Control. The file consists of properties in one of the following types: String Number Boolean You can specify and configure all the properties listed in the Configurations section of the Cruise Control Wiki. Capacity configuration Cruise Control uses capacity limits to determine if certain resource-based optimization goals are being broken. An attempted optimization fails if one or more of these resource-based goals is set as a hard goal and then broken. This prevents the optimization from being used to generate an optimization proposal. You specify capacity limits for Kafka broker resources in one of the following three .json files in cruise-control/config : capacityJBOD.json : For use in JBOD Kafka deployments (the default file). capacity.json : For use in non-JBOD Kafka deployments where each broker has the same number of CPU cores. capacityCores.json : For use in non-JBOD Kafka deployments where each broker has varying numbers of CPU cores. Set the file in the capacity.config.file property in cruisecontrol.properties . The selected file will be used for broker capacity resolution. For example: Capacity limits can be set for the following broker resources in the described units: DISK : Disk storage in MB CPU : CPU utilization as a percentage (0-100) or as a number of cores NW_IN : Inbound network throughput in KB per second NW_OUT : Outbound network throughput in KB per second To apply the same capacity limits to every broker monitored by Cruise Control, set capacity limits for broker ID -1 . To set different capacity limits for individual brokers, specify each broker ID and its capacity configuration. Example capacity limits configuration { "brokerCapacities":[ { "brokerId": "-1", "capacity": { "DISK": "100000", "CPU": "100", "NW_IN": "10000", "NW_OUT": "10000" }, "doc": "This is the default capacity. Capacity unit used for disk is in MB, cpu is in percentage, network throughput is in KB." }, { "brokerId": "0", "capacity": { "DISK": "500000", "CPU": "100", "NW_IN": "50000", "NW_OUT": "50000" }, "doc": "This overrides the capacity for broker 0." } ] } For more information, see Populating the Capacity Configuration File in the Cruise Control Wiki. Log cleanup policy for Cruise Control Metrics topic It is important that the auto-created __CruiseControlMetrics topic (see auto-created topics ) has a log cleanup policy of DELETE rather than COMPACT . Otherwise, records that are needed by Cruise Control might be removed. As described in Section 14.3, "Deploying the Cruise Control Metrics Reporter" , setting the following options in the Kafka configuration file ensures that the COMPACT log cleanup policy is correctly set: cruise.control.metrics.topic.auto.create=true cruise.control.metrics.topic.num.partitions=1 cruise.control.metrics.topic.replication.factor=1 If topic auto-creation is disabled in the Cruise Control Metrics Reporter ( cruise.control.metrics.topic.auto.create=false ), but enabled in the Kafka cluster, then the __CruiseControlMetrics topic is still automatically created by the broker. In this case, you must change the log cleanup policy of the __CruiseControlMetrics topic to DELETE using the kafka-configs.sh tool. 
Get the current configuration of the __CruiseControlMetrics topic: opt/kafka/bin/kafka-configs.sh --bootstrap-server <broker_address> --entity-type topics --entity-name __CruiseControlMetrics --describe Change the log cleanup policy in the topic configuration: /opt/kafka/bin/kafka-configs.sh --bootstrap-server <broker_address> --entity-type topics --entity-name __CruiseControlMetrics --alter --add-config cleanup.policy=delete If topic auto-creation is disabled in both the Cruise Control Metrics Reporter and the Kafka cluster, you must create the __CruiseControlMetrics topic manually and then configure it to use the DELETE log cleanup policy using the kafka-configs.sh tool. For more information, see Section 7.9, "Modifying a topic configuration" . Logging configuration Cruise Control uses log4j1 for all server logging. To change the default configuration, edit the log4j.properties file in /opt/cruise-control/config/log4j.properties . You must restart the Cruise Control server before the changes take effect. 14.9. Generating optimization proposals When you make a POST request to the /rebalance endpoint, Cruise Control generates an optimization proposal to rebalance the Kafka cluster based on the optimization goals provided. You can use the results of the optimization proposal to rebalance your Kafka cluster. You can run the optimization proposal using one of the following endpoints: /rebalance /add_broker /remove_broker The endpoint you use depends on whether you are rebalancing across all the brokers already running in the Kafka cluster; or you want to rebalance after scaling up or before scaling down your Kafka cluster. For more information, see Rebalancing endpoints with broker scaling . The optimization proposal is generated as a dry run , unless the dryrun parameter is supplied and set to false . In "dry run mode", Cruise Control generates the optimization proposal and the estimated result, but doesn't initiate the proposal by rebalancing the cluster. You can analyze the information returned in the optimization proposal and decide whether to approve it. Use the following parameters to make requests to the endpoints: dryrun type: boolean, default: true Informs Cruise Control whether you want to generate an optimization proposal only ( true ), or generate an optimization proposal and perform a cluster rebalance ( false ). When dryrun=true (the default), you can also pass the verbose parameter to return more detailed information about the state of the Kafka cluster. This includes metrics for the load on each Kafka broker before and after the optimization proposal is applied, and the differences between the before and after values. excluded_topics type: regex A regular expression that matches the topics to exclude from the calculation of the optimization proposal. goals type: list of strings, default: the configured default.goals list List of user-provided optimization goals to use to prepare the optimization proposal. If goals are not supplied, the configured default.goals list in the cruisecontrol.properties file is used. skip_hard_goals_check type: boolean, default: false By default, Cruise Control checks that the user-provided optimization goals (in the goals parameter) contain all the configured hard goals (in hard.goals ). A request fails if you supply goals that are not a subset of the configured hard.goals . Set skip_hard_goals_check to true if you want to generate an optimization proposal with user-provided optimization goals that do not include all the configured hard.goals . 
json type: boolean, default: false Controls the type of response returned by the Cruise Control server. If not supplied, or set to false , then Cruise Control returns text formatted for display on the command line. If you want to extract elements of the returned information programmatically, set json=true . This will return JSON formatted text that can be piped to tools such as jq , or parsed in scripts and programs. verbose type: boolean, default: false Controls the level of detail in responses that are returned by the Cruise Control server. Can be used with dryrun=true . Note Other parameters are available. For more information, see REST APIs in the Cruise Control Wiki. Prerequisites Kafka and ZooKeeper are running Cruise Control is running (Optional for scaling up) You have installed new brokers on hosts to include in the rebalance Procedure Generate an optimization proposal using a POST request to the /rebalance , /add_broker , or /remove_broker endpoint. Example request to /rebalance using default goals curl -v -X POST 'cruise-control-server:9090/kafkacruisecontrol/rebalance' The cached optimization proposal is immediately returned. Note If NotEnoughValidWindows is returned, Cruise Control has not yet recorded enough metrics data to generate an optimization proposal. Wait a few minutes and then resend the request. Example request to /rebalance using specified goals curl -v -X POST 'cruise-control-server:9090/kafkacruisecontrol/rebalance?goals=RackAwareGoal,ReplicaCapacityGoal' If the request satisfies the supplied goals, the cached optimization proposal is immediately returned. Otherwise, a new optimization proposal is generated using the supplied goals; this takes longer to calculate. You can enforce this behavior by adding the ignore_proposal_cache=true parameter to the request. Example request to /rebalance using specified goals without hard goals curl -v -X POST 'cruise-control-server:9090/kafkacruisecontrol/rebalance?goals=RackAwareGoal,ReplicaCapacityGoal,ReplicaDistributionGoal&skip_hard_goal_check=true' Example request to /add_broker that includes specified brokers curl -v -X POST 'cruise-control-server:9090/kafkacruisecontrol/add_broker?brokerid=3,4' The request includes the IDs of the new brokers only. For example, this request adds brokers with the IDs 3 and 4 . Replicas are moved to the new brokers from existing brokers when rebalancing. Example request to /remove_broker that excludes specified brokers curl -v -X POST 'cruise-control-server:9090/kafkacruisecontrol/remove_broker?brokerid=3,4' The request includes the IDs of the brokers being excluded only. For example, this request excludes brokers with the IDs 3 and 4 . Replicas are moved from the brokers being removed to other existing brokers when rebalancing. Note If a broker that is being removed has excluded topics, replicas are still moved. Review the optimization proposal contained in the response. The properties describe the pending cluster rebalance operation. The proposal contains a high level summary of the proposed optimization, followed by summaries for each default optimization goal, and the expected cluster state after the proposal has executed. Pay particular attention to the following information: The Cluster load after rebalance summary. If it meets your requirements, you should assess the impact of the proposed changes using the high level summary. n inter-broker replica (y MB) moves indicates how much data will be moved across the network between brokers. 
The higher the value, the greater the potential performance impact on the Kafka cluster during the rebalance. n intra-broker replica (y MB) moves indicates how much data will be moved within the brokers themselves (between disks). The higher the value, the greater the potential performance impact on individual brokers (although less than that of n inter-broker replica (y MB) moves ). The number of leadership moves. This has a negligible impact on the performance of the cluster during the rebalance. Asynchronous responses The Cruise Control REST API endpoints timeout after 10 seconds by default, although proposal generation continues on the server. A timeout might occur if the most recent cached optimization proposal is not ready, or if user-provided optimization goals were specified with ignore_proposal_cache=true . To allow you to retrieve the optimization proposal at a later time, take note of the request's unique identifier, which is given in the header of responses from the /rebalance endpoint. To obtain the response using curl , specify the verbose ( -v ) option: Here is an example header: * Connected to cruise-control-server (::1) port 9090 (#0) > POST /kafkacruisecontrol/rebalance HTTP/1.1 > Host: cc-host:9090 > User-Agent: curl/7.70.0 > Accept: / > * Mark bundle as not supporting multiuse < HTTP/1.1 200 OK < Date: Mon, 01 Jun 2020 15:19:26 GMT < Set-Cookie: JSESSIONID=node01wk6vjzjj12go13m81o7no5p7h9.node0; Path=/ < Expires: Thu, 01 Jan 1970 00:00:00 GMT < User-Task-ID: 274b8095-d739-4840-85b9-f4cfaaf5c201 < Content-Type: text/plain;charset=utf-8 < Cruise-Control-Version: 2.0.103.redhat-00002 < Cruise-Control-Commit_Id: 58975c9d5d0a78dd33cd67d4bcb497c9fd42ae7c < Content-Length: 12368 < Server: Jetty(9.4.26.v20200117-redhat-00001) If an optimization proposal is not ready within the timeout, you can re-submit the POST request, this time including the User-Task-ID of the original request in the header: curl -v -X POST -H 'User-Task-ID: 274b8095-d739-4840-85b9-f4cfaaf5c201' 'cruise-control-server:9090/kafkacruisecontrol/rebalance' What to do Section 14.10, "Approving an optimization proposal" 14.10. Approving an optimization proposal If you are satisfied with your most recently generated optimization proposal, you can instruct Cruise Control to initiate a cluster rebalance and begin reassigning partitions. Leave as little time as possible between generating an optimization proposal and initiating the cluster rebalance. If some time has passed since you generated the original optimization proposal, the cluster state might have changed. Therefore, the cluster rebalance that is initiated might be different to the one you reviewed. If in doubt, first generate a new optimization proposal. Only one cluster rebalance, with a status of "Active", can be in progress at a time. Prerequisites You have generated an optimization proposal from Cruise Control. Procedure Send a POST request to the /rebalance , /add_broker , or /remove_broker endpoint with the dryrun=false parameter: If you used the /add_broker or /remove_broker endpoint to generate a proposal that included or excluded brokers, use the same endpoint to perform the rebalance with or without the specified brokers. 
Example request to /rebalance curl -X POST 'cruise-control-server:9090/kafkacruisecontrol/rebalance?dryrun=false' Example request to /add_broker curl -v -X POST 'cruise-control-server:9090/kafkacruisecontrol/add_broker?dryrun=false&brokerid=3,4' Example request to /remove_broker curl -v -X POST 'cruise-control-server:9090/kafkacruisecontrol/remove_broker?dryrun=false&brokerid=3,4' Cruise Control initiates the cluster rebalance and returns the optimization proposal. Check the changes that are summarized in the optimization proposal. If the changes are not what you expect, you can stop the rebalance . Check the progress of the cluster rebalance using the /user_tasks endpoint. The cluster rebalance in progress has a status of "Active". To view all cluster rebalance tasks executed on the Cruise Control server: curl 'cruise-control-server:9090/kafkacruisecontrol/user_tasks' USER TASK ID CLIENT ADDRESS START TIME STATUS REQUEST URL c459316f-9eb5-482f-9d2d-97b5a4cd294d 0:0:0:0:0:0:0:1 2020-06-01_16:10:29 UTC Active POST /kafkacruisecontrol/rebalance?dryrun=false 445e2fc3-6531-4243-b0a6-36ef7c5059b4 0:0:0:0:0:0:0:1 2020-06-01_14:21:26 UTC Completed GET /kafkacruisecontrol/state?json=true 05c37737-16d1-4e33-8e2b-800dee9f1b01 0:0:0:0:0:0:0:1 2020-06-01_14:36:11 UTC Completed GET /kafkacruisecontrol/state?json=true aebae987-985d-4871-8cfb-6134ecd504ab 0:0:0:0:0:0:0:1 2020-06-01_16:10:04 UTC To view the status of a particular cluster rebalance task, supply the user-task-ids parameter and the task ID: (Optional) Removing brokers when scaling down After a successful rebalance you can stop any brokers you excluded in order to scale down the Kafka cluster. Check that each broker being removed does not have any live partitions in its log ( log.dirs ). ls -l <LogDir> | grep -E '^d' | grep -vE '[a-zA-Z0-9.-]+\.[a-z0-9]+-delete$' If a log directory does not match the regular expression \.[a-z0-9]+-delete$ , active partitions are still present. If you have active partitions, check that the rebalance has finished or check the configuration for the optimization proposal. You can run the proposal again. Make sure that there are no active partitions before moving on to the next step. Stop the broker. su - kafka /opt/kafka/bin/kafka-server-stop.sh Confirm that the broker has stopped. jcmd | grep kafka 14.11. Stopping an active cluster rebalance You can stop the cluster rebalance that is currently in progress. This instructs Cruise Control to finish the current batch of partition reassignments and then stop the rebalance. When the rebalance has stopped, completed partition reassignments have already been applied; therefore, the state of the Kafka cluster is different when compared to before the start of the rebalance operation. If further rebalancing is required, you should generate a new optimization proposal. Note The performance of the Kafka cluster in the intermediate (stopped) state might be worse than in the initial state. Prerequisites A cluster rebalance is in progress (indicated by a status of "Active"). Procedure Send a POST request to the /stop_proposal_execution endpoint: Additional resources Generating optimization proposals
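Tying together the requests shown in sections 14.9 to 14.11, the dry-run and task-polling calls can be scripted. The following is a minimal sketch, not taken from the product documentation: it assumes the same cruise-control-server:9090 address used in the examples above and that curl and jq are available on the host; adjust the address and tooling for your environment.

# Submit a dry-run proposal in JSON mode and capture the User-Task-ID response header.
TASK_ID=$(curl -s -o /dev/null -D - -X POST 'cruise-control-server:9090/kafkacruisecontrol/rebalance?json=true' \
  | grep -i '^User-Task-ID:' | awk '{print $2}' | tr -d '\r')
echo "Proposal task: ${TASK_ID}"
# Check the status of that task; repeat the call until it is no longer Active.
curl -s "cruise-control-server:9090/kafkacruisecontrol/user_tasks?user_task_ids=${TASK_ID}&json=true" | jq .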
[ "sudo mkdir /opt/cruise-control", "unzip amq-streams-<version>-cruise-control-bin.zip -d /opt/cruise-control", "sudo chown -R kafka:kafka /opt/cruise-control", "/opt/kafka/bin/kafka-server-stop.sh", "metric.reporters=com.linkedin.kafka.cruisecontrol.metricsreporter.CruiseControlMetricsReporter", "cruise.control.metrics.topic.auto.create=true cruise.control.metrics.topic.num.partitions=1 cruise.control.metrics.topic.replication.factor=1", "/opt/kafka/bin/kafka-server-start.sh -daemon /opt/kafka/config/server.properties", "The Kafka cluster to control. bootstrap.servers=localhost:9092 1 The replication factor of Kafka metric sample store topic sample.store.topic.replication.factor=2 2 The configuration for the BrokerCapacityConfigFileResolver (supports JBOD, non-JBOD, and heterogeneous CPU core capacities) #capacity.config.file=config/capacity.json #capacity.config.file=config/capacityCores.json capacity.config.file=config/capacityJBOD.json 3 The list of goals to optimize the Kafka cluster for with pre-computed proposals default.goals={List of default optimization goals} 4 The list of supported goals goals={list of main optimization goals} 5 The list of supported hard goals hard.goals={List of hard goals} 6 How often should the cached proposal be expired and recalculated if necessary proposal.expiration.ms=60000 7 The zookeeper connect of the Kafka cluster zookeeper.connect=localhost:2181 8", "cd /opt/cruise-control/ ./kafka-cruise-control-start.sh config/cruisecontrol.properties <port_number>", "curl 'http://HOST:PORT/kafkacruisecontrol/state'", "RackAwareGoal; MinTopicLeadersPerBrokerGoal; ReplicaCapacityGoal; DiskCapacityGoal; NetworkInboundCapacityGoal; NetworkOutboundCapacityGoal; CpuCapacityGoal", "RackAwareGoal; MinTopicLeadersPerBrokerGoal; ReplicaCapacityGoal; DiskCapacityGoal; NetworkInboundCapacityGoal; NetworkOutboundCapacityGoal; ReplicaDistributionGoal; PotentialNwOutGoal; DiskUsageDistributionGoal; NetworkInboundUsageDistributionGoal; NetworkOutboundUsageDistributionGoal; CpuUsageDistributionGoal; TopicReplicaDistributionGoal; LeaderReplicaDistributionGoal; LeaderBytesInDistributionGoal; PreferredLeaderElectionGoal", "RackAwareGoal; MinTopicLeadersPerBrokerGoal; ReplicaCapacityGoal; DiskCapacityGoal; NetworkInboundCapacityGoal; NetworkOutboundCapacityGoal; CpuCapacityGoal; ReplicaDistributionGoal; PotentialNwOutGoal; DiskUsageDistributionGoal; NetworkInboundUsageDistributionGoal; NetworkOutboundUsageDistributionGoal; CpuUsageDistributionGoal; TopicReplicaDistributionGoal; LeaderReplicaDistributionGoal; LeaderBytesInDistributionGoal", "curl -v -X POST 'cruise-control-server:9090/kafkacruisecontrol/rebalance'", "Optimization has 55 inter-broker replica (12 MB) moves, 0 intra-broker replica (0 MB) moves and 24 leadership moves with a cluster model of 5 recent windows and 100.000% of the partitions covered. Excluded Topics: []. Excluded Brokers For Leadership: []. Excluded Brokers For Replica Move: []. Counts: 3 brokers 343 replicas 7 topics. On-demand Balancedness Score Before (78.012) After (82.912). Provision Status: RIGHT_SIZED.", "capacity.config.file=config/capacityJBOD.json", "{ \"brokerCapacities\":[ { \"brokerId\": \"-1\", \"capacity\": { \"DISK\": \"100000\", \"CPU\": \"100\", \"NW_IN\": \"10000\", \"NW_OUT\": \"10000\" }, \"doc\": \"This is the default capacity. 
Capacity unit used for disk is in MB, cpu is in percentage, network throughput is in KB.\" }, { \"brokerId\": \"0\", \"capacity\": { \"DISK\": \"500000\", \"CPU\": \"100\", \"NW_IN\": \"50000\", \"NW_OUT\": \"50000\" }, \"doc\": \"This overrides the capacity for broker 0.\" } ] }", "opt/kafka/bin/kafka-configs.sh --bootstrap-server <broker_address> --entity-type topics --entity-name __CruiseControlMetrics --describe", "/opt/kafka/bin/kafka-configs.sh --bootstrap-server <broker_address> --entity-type topics --entity-name __CruiseControlMetrics --alter --add-config cleanup.policy=delete", "curl -v -X POST 'cruise-control-server:9090/kafkacruisecontrol/rebalance'", "curl -v -X POST 'cruise-control-server:9090/kafkacruisecontrol/rebalance?goals=RackAwareGoal,ReplicaCapacityGoal'", "curl -v -X POST 'cruise-control-server:9090/kafkacruisecontrol/rebalance?goals=RackAwareGoal,ReplicaCapacityGoal,ReplicaDistributionGoal&skip_hard_goal_check=true'", "curl -v -X POST 'cruise-control-server:9090/kafkacruisecontrol/add_broker?brokerid=3,4'", "curl -v -X POST 'cruise-control-server:9090/kafkacruisecontrol/remove_broker?brokerid=3,4'", "curl -v -X POST 'cruise-control-server:9090/kafkacruisecontrol/rebalance'", "* Connected to cruise-control-server (::1) port 9090 (#0) > POST /kafkacruisecontrol/rebalance HTTP/1.1 > Host: cc-host:9090 > User-Agent: curl/7.70.0 > Accept: / > * Mark bundle as not supporting multiuse < HTTP/1.1 200 OK < Date: Mon, 01 Jun 2020 15:19:26 GMT < Set-Cookie: JSESSIONID=node01wk6vjzjj12go13m81o7no5p7h9.node0; Path=/ < Expires: Thu, 01 Jan 1970 00:00:00 GMT < User-Task-ID: 274b8095-d739-4840-85b9-f4cfaaf5c201 < Content-Type: text/plain;charset=utf-8 < Cruise-Control-Version: 2.0.103.redhat-00002 < Cruise-Control-Commit_Id: 58975c9d5d0a78dd33cd67d4bcb497c9fd42ae7c < Content-Length: 12368 < Server: Jetty(9.4.26.v20200117-redhat-00001)", "curl -v -X POST -H 'User-Task-ID: 274b8095-d739-4840-85b9-f4cfaaf5c201' 'cruise-control-server:9090/kafkacruisecontrol/rebalance'", "curl -X POST 'cruise-control-server:9090/kafkacruisecontrol/rebalance?dryrun=false'", "curl -v -X POST 'cruise-control-server:9090/kafkacruisecontrol/add_broker?dryrun=false&brokerid=3,4'", "curl -v -X POST 'cruise-control-server:9090/kafkacruisecontrol/remove_broker?dryrun=false&brokerid=3,4'", "curl 'cruise-control-server:9090/kafkacruisecontrol/user_tasks' USER TASK ID CLIENT ADDRESS START TIME STATUS REQUEST URL c459316f-9eb5-482f-9d2d-97b5a4cd294d 0:0:0:0:0:0:0:1 2020-06-01_16:10:29 UTC Active POST /kafkacruisecontrol/rebalance?dryrun=false 445e2fc3-6531-4243-b0a6-36ef7c5059b4 0:0:0:0:0:0:0:1 2020-06-01_14:21:26 UTC Completed GET /kafkacruisecontrol/state?json=true 05c37737-16d1-4e33-8e2b-800dee9f1b01 0:0:0:0:0:0:0:1 2020-06-01_14:36:11 UTC Completed GET /kafkacruisecontrol/state?json=true aebae987-985d-4871-8cfb-6134ecd504ab 0:0:0:0:0:0:0:1 2020-06-01_16:10:04 UTC", "curl 'cruise-control-server:9090/kafkacruisecontrol/user_tasks?user_task_ids=c459316f-9eb5-482f-9d2d-97b5a4cd294d'", "ls -l <LogDir> | grep -E '^d' | grep -vE '[a-zA-Z0-9.-]+\\.[a-z0-9]+-deleteUSD'", "su - kafka /opt/kafka/bin/kafka-server-stop.sh", "jcmd | grep kafka", "curl -X POST 'cruise-control-server:9090/kafkacruisecontrol/stop_proposal_execution'" ]
https://docs.redhat.com/en/documentation/red_hat_streams_for_apache_kafka/2.5/html/using_amq_streams_on_rhel/cruise-control-concepts-str
Interactively installing RHEL from installation media
Interactively installing RHEL from installation media Red Hat Enterprise Linux 8 Installing RHEL on a local system using the graphical installer Red Hat Customer Content Services
[ "dmesg|tail", "su -", "dmesg|tail [288954.686557] usb 2-1.8: New USB device strings: Mfr=0, Product=1, SerialNumber=2 [288954.686559] usb 2-1.8: Product: USB Storage [288954.686562] usb 2-1.8: SerialNumber: 000000009225 [288954.712590] usb-storage 2-1.8:1.0: USB Mass Storage device detected [288954.712687] scsi host6: usb-storage 2-1.8:1.0 [288954.712809] usbcore: registered new interface driver usb-storage [288954.716682] usbcore: registered new interface driver uas [288955.717140] scsi 6:0:0:0: Direct-Access Generic STORAGE DEVICE 9228 PQ: 0 ANSI: 0 [288955.717745] sd 6:0:0:0: Attached scsi generic sg4 type 0 [288961.876382] sd 6:0:0:0: sdd Attached SCSI removable disk", "dd if=/image_directory/image.iso of=/dev/device", "dd if=/home/testuser/Downloads/rhel-8-x86_64-boot.iso of=/dev/sdd", "diskutil list /dev/disk0 #: TYPE NAME SIZE IDENTIFIER 0: GUID_partition_scheme *500.3 GB disk0 1: EFI EFI 209.7 MB disk0s1 2: Apple_CoreStorage 400.0 GB disk0s2 3: Apple_Boot Recovery HD 650.0 MB disk0s3 4: Apple_CoreStorage 98.8 GB disk0s4 5: Apple_Boot Recovery HD 650.0 MB disk0s5 /dev/disk1 #: TYPE NAME SIZE IDENTIFIER 0: Apple_HFS YosemiteHD *399.6 GB disk1 Logical Volume on disk0s1 8A142795-8036-48DF-9FC5-84506DFBB7B2 Unlocked Encrypted /dev/disk2 #: TYPE NAME SIZE IDENTIFIER 0: FDisk_partition_scheme *8.1 GB disk2 1: Windows_NTFS SanDisk USB 8.1 GB disk2s1", "diskutil unmountDisk /dev/disknumber Unmount of all volumes on disknumber was successful", "sudo dd if= /Users/user_name/Downloads/rhel-8-x86_64-boot.iso of= /dev/rdisk2 bs= 512K status= progress", "mokutil --import /usr/share/doc/kernel-keys/USD(uname -r)/kernel-signing-ca.cer", "mokutil --reset", "virt-install --name=<guest_name> --disk size=<disksize_in_GB> --memory=<memory_size_in_MB> --cdrom <filepath_to_iso> --graphics vnc", ">vmlinuz initrd=initrd.img inst.stage2=hd:LABEL=RHEL-9-5-0-BaseOS-x86_64 rd.live.check quiet fips=1", "linuxefi /images/pxeboot/vmlinuz inst.stage2=hd:LABEL=RHEL-9-4-0-BaseOS-x86_64 rd.live. check quiet fips=1", "modprobe.blacklist=ahci", "oscap xccdf eval --profile ospp --report eval_postinstall_report.html /usr/share/xml/scap/ssg/content/ssg-rhel8-ds.xml", "subscription-manager register --activationkey= <activation_key_name> --org= <organization_ID>", "The system has been registered with id: 62edc0f8-855b-4184-b1b8-72a9dc793b96", "subscription-manager syspurpose role --set \"VALUE\"", "subscription-manager syspurpose role --set \"Red Hat Enterprise Linux Server\"", "subscription-manager syspurpose role --list", "subscription-manager syspurpose role --unset", "subscription-manager syspurpose service-level --set \"VALUE\"", "subscription-manager syspurpose service-level --set \"Standard\"", "subscription-manager syspurpose service-level --list", "subscription-manager syspurpose service-level --unset", "subscription-manager syspurpose usage --set \"VALUE\"", "subscription-manager syspurpose usage --set \"Production\"", "subscription-manager syspurpose usage --list", "subscription-manager syspurpose usage --unset", "subscription-manager syspurpose --show", "man subscription-manager", "subscription-manager status +-------------------------------------------+ System Status Details +-------------------------------------------+ Overall Status: Current System Purpose Status: Matched", "subscription-manager status +-------------------------------------------+ System Status Details +-------------------------------------------+ Overall Status: Disabled Content Access Mode is set to Simple Content Access. 
This host has access to content, regardless of subscription status. System Purpose Status: Disabled", "subscription-manager unregister", "vmlinuz ... inst.debug", "cd /tmp/pre-anaconda-logs/", "dmesg", "[ 170.171135] sd 5:0:0:0: [sdb] Attached SCSI removable disk", "mkdir usb", "mount /dev/sdb1 /mnt/usb", "cd /mnt/usb", "ls", "cp /tmp/*log /mnt/usb", "umount /mnt/usb", "cd /tmp", "scp *log user@address:path", "scp *log [email protected]:/home/john/logs/", "The authenticity of host '192.168.0.122 (192.168.0.122)' can't be established. ECDSA key fingerprint is a4:60:76:eb:b2:d0:aa:23:af:3d:59:5c:de:bb:c4:42. Are you sure you want to continue connecting (yes/no)?", "curl --output directory-path/filename.iso 'new_copied_link_location' --continue-at -", "sha256sum rhel-x.x-x86_64-dvd.iso `85a...46c rhel-x.x-x86_64-dvd.iso`", "curl --output _rhel-x.x-x86_64-dvd.iso 'https://access.cdn.redhat.com//content/origin/files/sha256/85/85a...46c/rhel-x.x-x86_64-dvd.iso?_auth =141...963' --continue-at -", "grubby --default-kernel /boot/vmlinuz-4.18.0-94.el8.x86_64", "grubby --remove-args=\"rhgb\" --update-kernel /boot/vmlinuz-4.18.0-94.el8.x86_64", "df -h", "Filesystem Size Used Avail Use% Mounted on devtmpfs 396M 0 396M 0% /dev tmpfs 411M 0 411M 0% /dev/shm tmpfs 411M 6.7M 405M 2% /run tmpfs 411M 0 411M 0% /sys/fs/cgroup /dev/mapper/rhel-root 17G 4.1G 13G 25% / /dev/sda1 1014M 173M 842M 17% /boot tmpfs 83M 20K 83M 1% /run/user/42 tmpfs 83M 84K 83M 1% /run/user/1000 /dev/dm-4 90G 90G 0 100% /home", "free -m", "mem= xx M", "free -m", "grubby --update-kernel=ALL --args=\"mem= xx M\"", "Enable=true", "systemctl restart gdm.service", "X :1 -query address", "Xnest :1 -query address", "inst.rescue inst.dd=driver_name", "inst.rescue modprobe.blacklist=driver_name", "The rescue environment will now attempt to find your Linux installation and mount it under the directory: /mnt/sysroot/. You can then make any changes required to your system. Choose 1 to proceed with this step. You can choose to mount your file systems read-only instead of read-write by choosing 2 . If for some reason this process does not work choose 3 to skip directly to a shell. 1) Continue 2) Read-only mount 3) Skip to shell 4) Quit (Reboot)", "sh-4.2#", "sh-4.2# chroot /mnt/sysroot", "sh-4.2# mount -t xfs /dev/mapper/VolGroup00-LogVol02 /directory", "sh-4.2# fdisk -l", "sh-4.2# chroot /mnt/sysroot/", "sh-4.2# sosreport", "bash-4.2# ip addr add 10.13.153.64/23 dev eth0", "sh-4.2# exit", "sh-4.2# cp /mnt/sysroot/var/tmp/sosreport new_location", "sh-4.2# scp /mnt/sysroot/var/tmp/sosreport username@hostname:sosreport", "sh-4.2# chroot /mnt/sysroot/", "sh-4.2# /sbin/grub2-install install_device", "sh-4.2# chroot /mnt/sysroot/", "sh-4.2# yum install /root/drivers/xorg-x11-drv-wacom-0.23.0-6.el7.x86_64.rpm", "sh-4.2# exit", "sh-4.2# chroot /mnt/sysroot/", "sh-4.2# yum remove xorg-x11-drv-wacom", "sh-4.2# exit", "ip=192.168.1.15 netmask=255.255.255.0 gateway=192.168.1.254 nameserver=192.168.1.250 hostname=myhost1", "ip=192.168.1.15::192.168.1.254:255.255.255.0:myhost1::none: nameserver=192.168.1.250", "inst.xtimeout= N", "[ ...] 
rootfs image is not initramfs", "sha256sum dvd/images/pxeboot/initrd.img fdb1a70321c06e25a1ed6bf3d8779371b768d5972078eb72b2c78c925067b5d8 dvd/images/pxeboot/initrd.img", "grep sha256 dvd/.treeinfo images/efiboot.img = sha256: d357d5063b96226d643c41c9025529554a422acb43a4394e4ebcaa779cc7a917 images/install.img = sha256: 8c0323572f7fc04e34dd81c97d008a2ddfc2cfc525aef8c31459e21bf3397514 images/pxeboot/initrd.img = sha256: fdb1a70321c06e25a1ed6bf3d8779371b768d5972078eb72b2c78c925067b5d8 images/pxeboot/vmlinuz = sha256: b9510ea4212220e85351cbb7f2ebc2b1b0804a6d40ccb93307c165e16d1095db", "[ ...] No filesystem could mount root, tried: [ ...] Kernel panic - not syncing: VFS: Unable to mount root fs on unknown-block(1,0) [ ...] CPU: 0 PID: 1 Comm: swapper/0 Not tainted 5.14.0-55.el9.s390x #1 [ ...] [ ...] Call Trace: [ ...] ([<...>] show_trace+0x.../0x...) [ ...] [<...>] show_stack+0x.../0x [ ...] [<...>] panic+0x.../0x [ ...] [<...>] mount_block_root+0x.../0x [ ...] [<...>] prepare_namespace+0x.../0x [ ...] [<...>] kernel_init_freeable+0x.../0x [ ...] [<...>] kernel_init+0x.../0x [ ...] [<...>] kernel_thread_starter+0x.../0x [ ...] [<...>] kernel_thread_starter+0x.../0x...", "inst.stage2=https://hostname/path_to_install_image/ inst.noverifyssl", "inst.repo=https://hostname/path_to_install_repository/ inst.noverifyssl", "inst.stage2.all inst.stage2=http://hostname1/path_to_install_tree/ inst.stage2=http://hostname2/path_to_install_tree/ inst.stage2=http://hostname3/path_to_install_tree/", "[PROTOCOL://][USERNAME[:PASSWORD]@]HOST[:PORT]", "inst.nosave=Input_ks,logs", "ifname=eth0:01:23:45:67:89:ab", "vlan=vlan5:enp0s1", "bond=bond0:enp0s1,enp0s2:mode=active-backup,tx_queues=32,downdelay=5000", "team=team0:enp0s1,enp0s2", "bridge=bridge0:enp0s1,enp0s2", "modprobe.blacklist=ahci,firewire_ohci", "modprobe.blacklist=virtio_blk" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/8/html-single/interactively_installing_rhel_from_installation_media/index
5.4. Moving swap File Systems from a Single Path Device to a Multipath Device
5.4. Moving swap File Systems from a Single Path Device to a Multipath Device By default, swap devices are set up as logical volumes. This does not require any special procedure for configuring them as multipath devices as long as you set up multipathing on the physical volumes that constitute the logical volume group. If your swap device is not an LVM volume, however, and it is mounted by device name, you may need to edit the /etc/fstab file to switch to the appropriate multipath device name. Determine the WWID number of the swap device by running the /sbin/multipath command with the -v3 option. The output from the command should show the swap device in the paths list. You should look in the command output for a line of the following format, showing the swap device: For example, if your swap file system is set up on sda or one of its partitions, you would see a line in the output such as the following: Set up an alias for the swap device in the /etc/multipath.conf file: Edit the /etc/fstab file and replace the old device path to the root device with the multipath device. For example, if you had the following entry in the /etc/fstab file: You would change the entry to the following:
[ "WWID H:B:T:L devname MAJOR : MINOR", "===== paths list ===== 1ATA WDC WD800JD-75MSA3 WD-WMAM9F 1:0:0:0 sda 8:0", "multipaths { multipath { wwid WWID_of_swap_device alias swapdev } }", "/dev/sda2 swap swap defaults 0 0", "/dev/mapper/swapdev swap swap defaults 0 0" ]
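As a quick check after editing /etc/fstab, you can switch the active swap device over to the multipath alias without rebooting. The commands below are a minimal sketch that assumes the swapdev alias and the /dev/sda2 device from the example above; substitute the names used on your system.

# Show the device that currently backs swap.
swapon -s
# Confirm that the multipath device for the alias exists.
multipath -ll | grep -A 2 swapdev
# Deactivate the old single-path swap device and activate the multipath entry from /etc/fstab.
swapoff /dev/sda2
swapon /dev/mapper/swapdev
swapon -s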
https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/dm_multipath/move_swap_to_multipath
1.6. Updating a Red Hat Enterprise Linux High Availability Cluster
1.6. Updating a Red Hat Enterprise Linux High Availability Cluster Updating packages that make up the RHEL High Availability and Resilient Storage Add-Ons, either individually or as a whole, can be done in one of two general ways: Rolling Updates : Remove one node at a time from service, update its software, then integrate it back into the cluster. This allows the cluster to continue providing service and managing resources while each node is updated. Entire Cluster Update : Stop the entire cluster, apply updates to all nodes, then start the cluster back up. Warning When performing software updates on Red Hat Enterprise Linux High Availability and Resilient Storage clusters, it is critical to ensure that any node that will undergo updates is not an active member of the cluster before those updates are initiated. For a full description of each of these methods and the procedures to follow for the updates, see Recommended Practices for Applying Software Updates to a RHEL High Availability or Resilient Storage Cluster.
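For orientation, a rolling update of a single node in a pcs-managed Pacemaker cluster typically follows the outline sketched below. The node name is a placeholder, and the linked Recommended Practices article remains the authoritative procedure; treat this only as an illustrative sequence.

# Remove the node from service so it is no longer an active cluster member.
pcs cluster stop node1.example.com
# Apply the updates on that node; reboot it if a new kernel was installed.
yum update -y
# Reintegrate the node into the cluster and confirm that resources recover.
pcs cluster start node1.example.com
pcs status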
null
https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/high_availability_add-on_reference/s1-upgradeconsider-haar
8.3. Installing in Text Mode
8.3. Installing in Text Mode Text mode installation offers an interactive, non-graphical interface for installing Red Hat Enterprise Linux. This can be useful on systems with no graphical capabilities; however, you should always consider the available alternatives before starting a text-based installation. Text mode is limited in the amount of choices you can make during the installation. Important Red Hat recommends that you install Red Hat Enterprise Linux using the graphical interface. If you are installing Red Hat Enterprise Linux on a system that lacks a graphical display, consider performing the installation over a VNC connection - see Chapter 25, Using VNC . The text mode installation program will prompt you to confirm the use of text mode if it detects that a VNC-based installation is possible. If your system has a graphical display, but graphical installation fails, try booting with the inst.xdriver=vesa option - see Chapter 23, Boot Options . Alternatively, consider a Kickstart installation. See Chapter 27, Kickstart Installations for more information. Figure 8.1. Text Mode Installation Installation in text mode follows a pattern similar to the graphical installation: There is no single fixed progression; you can configure many settings in any order you want using the main status screen. Screens which have already been configured, either automatically or by you, are marked as [x] , and screens which require your attention before the installation can begin are marked with [!] . Available commands are displayed below the list of available options. Note When related background tasks are being run, certain menu items can be temporarily unavailable or display the Processing... label. To refresh to the current status of text menu items, use the r option at the text mode prompt. At the bottom of the screen in text mode, a green bar is displayed showing five menu options. These options represent different screens in the tmux terminal multiplexer; by default you start in screen 1, and you can use keyboard shortcuts to switch to other screens which contain logs and an interactive command prompt. For information about available screens and shortcuts to switch to them, see Section 8.2.1, "Accessing Consoles" . Limits of interactive text mode installation include: The installer will always use the English language and the US English keyboard layout. You can configure your language and keyboard settings, but these settings will only apply to the installed system, not to the installation. You cannot configure any advanced storage methods (LVM, software RAID, FCoE, zFCP and iSCSI). It is not possible to configure custom partitioning; you must use one of the automatic partitioning settings. You also cannot configure where the boot loader will be installed. You cannot select any package add-ons to be installed; they must be added after the installation finishes using the Yum package manager. To start a text mode installation, boot the installation with the inst.text boot option used either at the boot command line in the boot menu, or in your PXE server configuration. See Chapter 7, Booting the Installation on 64-bit AMD, Intel, and ARM systems for information about booting and using boot options.
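As an illustration of how the inst.text option is passed at boot time, a PXE menu entry might look like the sketch below. The label, kernel, initrd, and repository URL are placeholders rather than values from this document; use the entries from your own PXE server configuration.

# Example pxelinux.cfg entry for a text mode installation (paths and URL are placeholders).
label rhel7-text
  kernel vmlinuz
  append initrd=initrd.img inst.text inst.repo=http://server.example.com/rhel7/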
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/installation_guide/sect-installation-text-mode-x86
Chapter 2. Generating and maintaining the diagnostic reports using the RHEL web console
Chapter 2. Generating and maintaining the diagnostic reports using the RHEL web console Generate, download, and delete the diagnostic reports in the RHEL web console. 2.1. Generating diagnostic reports using the RHEL web console Prerequisites The RHEL web console has been installed. For details, see Installing the web console . The cockpit-storaged package is installed on your system. You have administrator privileges. Procedure Log in to the RHEL web console. For details, see Logging in to the web console . In the left side menu, select Tools >> Diagnostic reports . To generate a new diagnostic report, click the Run report button. Enter the label for the report you want to create. (Optional) Customize your report. Enter an encryption passphrase to encrypt your report. If you want to skip the encryption of the report, leave the field empty. Check the checkbox Obfuscate network addresses, hostnames, and usernames to obfuscate certain data. Check the checkbox Use verbose logging to increase logging verbosity. Click the Run report button to generate a report and wait for the process to complete. You can stop generating the report using the Stop report button. 2.2. Downloading diagnostic reports using the RHEL web console Prerequisites The RHEL web console has been installed. For details, see Installing the web console . You have administrator privileges. One or more diagnostic reports have been generated. Procedure Log in to the RHEL web console. For details, see Logging in to the web console . In the left side menu, select Tools >> Diagnostic reports . Click the Download button next to the report that you want to download. The download starts automatically. Next steps For the methods on how to provide the Red Hat Technical Support team with your diagnostic report, see Methods for providing an sos report to Red Hat technical support . 2.3. Deleting diagnostic reports using the RHEL web console Prerequisites The RHEL web console has been installed. For details, see Installing the web console . You have administrator privileges. One or more diagnostic reports have been generated. Procedure Log in to the RHEL web console. For details, see Logging in to the web console . In the left side menu, select Tools >> Diagnostic reports . Click the vertical ellipsis next to the Download button for the report that you want to delete, then click the Delete button. In the Delete report permanently? window, click the Delete button to delete the report.
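The web console options described above map onto the sos utility that runs underneath. A roughly equivalent command-line invocation is sketched below, assuming sos 4.x on RHEL 8; the label is a placeholder and option names can differ between sos versions, so check sos report --help on your system.

# Generate a labelled report, obfuscate hostnames/IP addresses/usernames, and log verbosely.
sos report --label my-report --clean -v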
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/8/html/generating_sos_reports_for_technical_support/generating-and-maintaining-the-diagnostic-reports-using-the-rhel-web-console_generating-sos-reports-for-technical-support
Chapter 13. Editing applications
Chapter 13. Editing applications You can edit the configuration and the source code of the application you create using the Topology view. 13.1. Prerequisites You have the appropriate roles and permissions in a project to create and modify applications in OpenShift Container Platform. You have created and deployed an application on OpenShift Container Platform using the Developer perspective . You have logged in to the web console and have switched to the Developer perspective . 13.2. Editing the source code of an application using the Developer perspective You can use the Topology view in the Developer perspective to edit the source code of your application. Procedure In the Topology view, click the Edit Source code icon, displayed at the bottom-right of the deployed application, to access your source code and modify it. Note This feature is available only when you create applications using the From Git , From Catalog , and the From Dockerfile options. If the Eclipse Che Operator is installed in your cluster, a Che workspace ( ) is created and you are directed to the workspace to edit your source code. If it is not installed, you will be directed to the Git repository ( ) your source code is hosted in. 13.3. Editing the application configuration using the Developer perspective You can use the Topology view in the Developer perspective to edit the configuration of your application. Note Currently, only configurations of applications created by using the From Git , Container Image , From Catalog , or From Dockerfile options in the Add workflow of the Developer perspective can be edited. Configurations of applications created by using the CLI or the YAML option from the Add workflow cannot be edited. Prerequisites Ensure that you have created an application using the From Git , Container Image , From Catalog , or From Dockerfile options in the Add workflow. Procedure After you have created an application and it is displayed in the Topology view, right-click the application to see the edit options available. Figure 13.1. Edit application Click Edit application-name to see the Add workflow you used to create the application. The form is pre-populated with the values you had added while creating the application. Edit the necessary values for the application. Note You cannot edit the Name field in the General section, the CI/CD pipelines, or the Create a route to the application field in the Advanced Options section. Click Save to restart the build and deploy a new image. Figure 13.2. Edit and redeploy application
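The Topology edit flow ultimately updates the underlying workload resources, so a rough command-line counterpart is sketched below. The project and Deployment names are placeholders for illustration; they are not taken from this document.

# Switch to the project, edit the Deployment created for the application, and roll out the change.
oc project my-project
oc edit deployment my-app
oc rollout restart deployment/my-app
oc rollout status deployment/my-app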
null
https://docs.redhat.com/en/documentation/openshift_container_platform/4.10/html/building_applications/odc-editing-applications
22.2. Persistent Module Loading
22.2. Persistent Module Loading Kernel modules are usually loaded directly by the facility that requires them, using the settings in the /etc/modprobe.conf file. However, it is sometimes necessary to explicitly force the loading of a module at boot time. Red Hat Enterprise Linux checks for the existence of the /etc/rc.modules file at boot time, which contains various commands to load modules. Use rc.modules rather than rc.local, because rc.modules is executed earlier in the boot process. For example, the following commands configure loading of the foo module at boot time (as root): Note This approach is not necessary for network and SCSI interfaces because they have their own specific mechanisms.
[ "echo modprobe foo >> /etc/rc.modules chmod +x /etc/rc.modules" ]
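After adding the module with the commands above, a quick verification sketch is shown below; foo is the same illustrative module name used in the example and should be replaced with a real module on your system.

# Load the module now and confirm that it is present and recognized.
modprobe foo
lsmod | grep foo
modinfo foo
# Confirm that the boot-time entry is in place and executable.
cat /etc/rc.modules
ls -l /etc/rc.modules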
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/4/html/reference_guide/s1-kernel-modules-persistant
Chapter 4. Advisories related to this release
Chapter 4. Advisories related to this release The following advisories have been issued to document bugfixes and CVE fixes included in this release: RHSA-2022:7002 RHSA-2022:7003 RHSA-2022:7004 RHSA-2022:7005 RHSA-2022:7006 RHSA-2022:7007 RHSA-2022:7049 RHSA-2022:7050 Revised on 2024-05-10 09:05:50 UTC
null
https://docs.redhat.com/en/documentation/red_hat_build_of_openjdk/8/html/release_notes_for_red_hat_build_of_openjdk_8.0.352/openjdk8-352-advisory_openjdk
Making open source more inclusive
Making open source more inclusive Red Hat is committed to replacing problematic language in our code and documentation. We are beginning with these four terms: master, slave, blacklist, and whitelist. Due to the enormity of this endeavor, these changes will be gradually implemented over upcoming releases. For more details on making our language more inclusive, see our CTO Chris Wright's message .
null
https://docs.redhat.com/en/documentation/red_hat_software_certification/2025/html/red_hat_openstack_application_and_vnf_policy_guide/con-conscious-language-message
14.5.20. Domain Retrieval Commands
14.5.20. Domain Retrieval Commands The following commands display different information about a given domain: virsh domhostname domain displays the host name of the specified domain, provided the hypervisor can publish it. virsh dominfo domain displays basic information about a specified domain. virsh domuuid domain|ID converts a given domain name or ID into a UUID. virsh domid domain|UUID converts a given domain name or UUID into an ID. virsh domjobabort domain aborts the currently running job on the specified domain. virsh domjobinfo domain displays information about jobs running on the specified domain, including migration statistics. virsh domname domain-ID|UUID converts a given domain ID or UUID into a domain name. virsh domstate domain displays the state of the given domain. Using the --reason option will also display the reason for the displayed state. virsh domcontrol domain displays the state of the interface to the virtual machine monitor (VMM) that is used to control the domain. For states that are not OK or Error, it will also print the number of seconds that have elapsed since the control interface entered the displayed state. Example 14.2. Example of statistical feedback In order to get information about the domain, run the following command:
[ "virsh domjobinfo rhel6 Job type: Unbounded Time elapsed: 1603 ms Data processed: 47.004 MiB Data remaining: 658.633 MiB Data total: 1.125 GiB Memory processed: 47.004 MiB Memory remaining: 658.633 MiB Memory total: 1.125 GiB Constant pages: 114382 Normal pages: 12005 Normal data: 46.895 MiB Expected downtime: 0 ms Compression cache: 64.000 MiB Compressed data: 0.000 B Compressed pages: 0 Compression cache misses: 12005 Compression overflows: 0" ]
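A few of these retrieval commands, run against the rhel6 guest from the example above, are sketched below; the output will vary with the state of your domain.

virsh dominfo rhel6
virsh domstate rhel6 --reason
virsh domuuid rhel6
virsh domid rhel6
virsh domcontrol rhel6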
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/virtualization_administration_guide/sub-sect-domain_commands-domain_retrieval_commands
Chapter 5. October 2024
Chapter 5. October 2024 5.1. Product-wide updates 5.1.1. Errata subscription services for Red Hat Insights have moved Red Hat is consolidating and enhancing system and subscription management capabilities. In Q4 of 2024, between October 28 and December 20, core subscription services for Red Hat products and services are moving from the Customer Portal to the notifications service on the Hybrid Cloud Console . This change aims to enhance the Red Hat support experience and provides the following new capabilities: Richer permissions through RBAC tools Simplification of client registration tools Better alignment with cloud-native management tools On October 25th 2024, Red Hat Insights for Red Hat Enterprise Linux moved Errata subscription services from the existing Errata system ( access.redhat.com ) to the Hybrid Cloud Console ( console.redhat.com ), including the following notification types: System-level errata notifications ("send me an email if my registered system is affected by errata") Subscription-level errata notifications ("send me an email if any of my subscribed products are affected by errata") Your existing errata subscription preferences were automatically migrated over to the new notification service, regardless of how you deployed or registered them. Unless you applied custom filtering to your subscription preferences, you do not need to take any actions to continue receiving errata notifications. 5.1.1.1. Changes to system-level errata notifications System-level errata notifications, delivered as Patch notifications on console.redhat.com, now include all systems that are connected to Red Hat through Red Hat Subscription Manager (RHSM), Satellite, and Red Hat Update Infrastructure (RHUI). Before the change, only systems connected to Red Hat through Red Hat Subscription Management were included. 5.1.1.2. Changes to subscription-level errata notifications Subscription-level errata notifications are now batched and sent daily by errata type, for example, Security , Bug Fix , or Enhancement . You can also now choose how you want to receive notifications. You can integrate these types of notifications with 3rd-party applications such as Event-driven Ansible, webhooks, Slack, and Microsoft Teams. 5.1.1.3. Changes to email notifications You will also see the following changes to errata email notifications for Red Hat Insights: The sender has changed from [email protected] to [email protected] The format now includes a list of errata with links to where you can find more information instead of the full text The frequency of emails aligns with the notification settings on the Hybrid Cloud Console For more information, see Transition of Red Hat's subscription services to the Red Hat Hybrid Cloud Console (console.redhat.com) . 5.1.2. Reminder: Upcoming End of Life for Basic HTTP Authentication mechanism Red Hat is implementing a crucial security enhancement for cloud service APIs on console.redhat.com. Effective December 31, 2024, Red Hat is ending support for Basic HTTP Authentication. Therefore, Basic authentication will no longer be supported as an option for connecting a host with Red Hat Insights through the Insights client (insights-client) or the Hybrid Cloud Console APIs. For the Insights client: Basic authentication is not the default authentication mechanism, but it has been available as a manually configured option for a select set of workflows. Red Hat recommends that you modify host systems that use Basic authentication to use certificate authentication instead. 
Otherwise, systems that continue to use Basic authentication will not be able to connect to Red Hat Insights from January 1, 2025. For more information, see the Red Hat Knowledgebase article How to switch from Basic Auth to Certificate Authentication for Red Hat Insights and the Life Cycle & Update Policies page for Red Hat Insights . For the Hybrid Cloud Console APIs: To support the change from Basic authentication to token-based authentication, service accounts will be integrated with the User Access feature. User Access is an implementation of role-based access control (RBAC) in the Red Hat Hybrid Cloud Console. This change provides you with more granular control over access permissions to services hosted on the Hybrid Cloud Console and also enhances security in the change to token-based authentication. For more information, see the Red Hat Knowledgebase article Transition of Red Hat Hybrid Cloud Console APIs from Basic authentication to token-based authentication via service accounts . 5.1.3. Published blogs and resources Blog: A smarter way to manage malware with Red Hat Insights by Chris Henderson (October 1, 2024) Blog: Red Hat Insights provides analytics for the IBM X-Force Cloud Threat Report by McKibbin Brady (October 3, 2024) Blog: How incident detection simplifies OpenShift observability by Ivan Necas (October 3, 2024) Blog: Craft and deploy custom RHEL images for the cloud by Amir Fefer (October 3, 2024) Article: Onboarding for Red Hat Insights with FedRAMP(R) (October 31, 2024) 5.2. Red Hat Insights for Red Hat Enterprise Linux 5.2.1. Advisor New recommendations The Red Hat Insights for Red Hat Enterprise Linux advisor service now identifies additional problems and provides you with recommendations in the Hybrid Cloud Console for mitigating critical issues. In October, the following recommendations were added: Misconfiguration of Insights client impacts recommendations Leapp upgrade failure Apache httpd service doesn't start Kernel panic on an edge computing system Japanese localization issue with host-metering.service Some Red Hat Insights console features become unavailable when the rhc client disconnects kdump fails to generate vmcore for some Intel CPU systems VMware guest performance issue with Intel Nehalem CPU 5.2.2. Inventory 5.2.2.1. Enhancements and bug fixes in the inventory UI The Red Hat Insights inventory UI has been enhanced to give you a better and more consistent experience and several bugs have also been fixed. Enhancements When you open the main Red Hat Insights inventory page in the Hybrid Cloud Console, you will now see a new help tooltip, which provides a quick overview and links to relevant product documentation, making it easier for you to find and understand what you need. Bug fixes The following known issues in the inventory UI have also been fixed: Display issues related to system tagging Filtering enhancements for consistency Whitespace is handled more effectively in hostname fields 5.2.2.2. Automating Discovery report uploads by using Ansible (Developer Preview) Using a new experimental feature together with Ansible, you can now automate the upload of Discovery reports to the Red Hat Insights inventory component, saving you time and simplifying the process. Before this enhancement, you could only upload Discovery reports manually by using dsc , the Red Hat Discovery command-line interface, and the procedure outlined in the Sending reports to the Hybrid Cloud Console chapter of the Using Discovery guide . 
For detailed instructions and a demo to help you get started, see " Red Hat Discovery - Ansible Playbook for Automated Upload to Red Hat Insights Inventory " in the insights-discovery GitHub repository . This feature is still in the experimental stage, and your feedback is crucial in shaping future improvements. We hope this will make managing Discovery reports more efficient and effortless for all of you. Important The Discovery report upload automation feature is available as Developer Preview software. Developer Preview software provides early access to upcoming product software in advance of its possible inclusion in a Red Hat product offering. Customers can use this software to test functionality and provide feedback during the development process. This software might not have any documentation, is subject to change or removal at any time, and has received limited testing. We welcome your feedback, which you can provide by creating an issue in the insights-discovery GitHub repository. For more information about the support scope of Red Hat Developer Preview software, see Developer Preview Support Scope . 5.3. Insights for OpenShift Container Platform 5.3.1. Advisor Incident Detection (Developer Preview) In October, Red Hat Insights for OpenShift introduced new capabilities in incident detection. Incident Detection is a new feature that uses analytics to group alerts into incidents and help you quickly and easily understand what the underlying issue might be and how to mitigate it. Important Incident Detection feature is available and supported by Red Hat in Developer Preview mode. For more information about how to set up and use Incident Detection in Red Hat Insights for OpenShift, see the additional resources. Additional resources Blog: How incident detection simplifies OpenShift observability Demo: Red Hat OpenShift Container Platform Incident Detection
null
https://docs.redhat.com/en/documentation/red_hat_insights/1-latest/html/release_notes/october-2024
8.197. redhat-support-lib-python
8.197. redhat-support-lib-python 8.197.1. RHBA-2014:1590 - redhat-support-lib-python and redhat-support-tool update Updated redhat-support-lib-python and redhat-support-tool packages that fix several bugs and add one enhancement are now available for Red Hat Enterprise Linux 6. The redhat-support-lib-python package provides a Python library that developers can use to easily write software solutions that leverage Red Hat Access subscription services. The redhat-support-tool utility facilitates console-based access to Red Hat's subscriber services and gives Red Hat subscribers more venues for accessing the content and services available to them as Red Hat customers. Further, it enables Red Hat customers to integrate and automate their helpdesk services with our subscription services. This update fixes the following bugs: Bug Fixes BZ# 1054445 When the debuginfo packages had not been already installed, the "btextract ./vmcore" command did not work. As a consequence, the redhat-support-tool utility ran but did not install the debuginfo packages. The non-interactive mode of btextract has been fixed to download kernel debug symbols when they are needed, and redhat-support-tool now installs the debuginfo packages as intended. BZ# 1036921 When the "redhat-support-tool getcase [case-number]" command was issued, the "Version" field was not displayed in the Case Details section. A patch has been provided to fix this bug, and the product version now shows when viewing case details. BZ# 1060916 Prior to this update, the redhat-support-tool diagnose feature did not work on simple oops messages or RIP strings from kernel crashes. In addition, redhat-support-tool results differed from the results returned by the respective API. With this update, diagnostics for simple oops messages and RIP strings from kernel crashes have been improved. As a result, the redhat-support-tool diagnose [oops.txt] command points at the same article as the API "Diagnose" button does, and a simple RIP.txt file pulls up the same articles as putting the RIP on the sfdc search bar. BZ# 1036711 Due to poor logging in the kernel download code, non-root users were not informed about lacking the necessary root privileges to download kernel debug symbols. To fix this bug, logging which explains that root privileges are required to execute findkerneldebugs and getkerneldebug commands has been added. In addition, the help for these two commands has been expanded to indicate that root privileges are required. Now, the non-root user has a better indication of which commands require root permissions. In addition, this update adds the following Enhancement: BZ# 1036699 Issues such as low disk space or connectivity problems can cause the downloading of kernel debug symbols from Red Hat Network to fail. Nevertheless, the user was not informed properly about the cause of this failure. With this update, error messages are returned explaining why the kernel debug symbols failed to download. Users of redhat-support-lib-python and redhat-support-tool are advised to upgrade to these updated packages, which fix these bugs and add this enhancement.
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/6.6_technical_notes/redhat-support-lib-python
Red Hat build of MicroShift release notes
Red Hat build of MicroShift release notes Red Hat build of MicroShift 4.18 Highlights of what is new and what has changed with this MicroShift release Red Hat OpenShift Documentation Team
null
https://docs.redhat.com/en/documentation/red_hat_build_of_microshift/4.18/html/red_hat_build_of_microshift_release_notes/index
4.32. coreutils
4.32. coreutils 4.32.1. RHBA-2011:1693 - coreutils bug fix update Updated coreutils packages that fix several bugs are now available for Red Hat Enterprise Linux 6. The coreutils packages contain the core GNU utilities. These packages combine the old GNU fileutils, sh-utils, and textutils packages. Bug Fixes BZ# 691292 Prior to this update, SELinux appeared to be disabled when building coreutils in Mock. As a result, coreutils did not build. With this update, SELinux determines more precisely whether it is disabled or not. Now, the packages are built successfully. BZ# 703712 Previously, incorrect signal handling could cause various problems for tcsh users logging into the root shell using the su utility. Signal masking in the subshell called by the su utility has been modified to respect the SIGTSTP signal as well as the SIGSTOP signal. BZ# 715557 When using the "-Z/--context" option in the cp utility, the SELinux context of a file was not changed if the file destination already existed. The utility has been modified and the context is changed as expected. However, this option is not portable to other systems. BZ# 720325 Prior to this update, the acl_extended_file() function could cause unnecessary mounts of autofs when using the ls command on a directory with autofs mounted. This update adds the new acl function, acl_extended_file_nofollow(), to prevent unnecessary autofs mounts. BZ# 725618 The description of the "--sleep-interval" option in the tail(1) manual page has been improved to be clearer about the behavior and to match the upstream version of coreutils. All users of coreutils are advised to upgrade to these updated packages, which fix these bugs.
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/6.2_technical_notes/coreutils
Chapter 5. Installing an IdM server: With integrated DNS, without a CA
Chapter 5. Installing an IdM server: With integrated DNS, without a CA Installing a new Identity Management (IdM) server with integrated DNS has the following advantages: You can automate much of the maintenance and DNS record management using native IdM tools. For example, DNS SRV records are automatically created during the setup, and later on are automatically updated. You can configure global forwarders during the installation of the IdM server for a stable external internet connection. Global forwarders are also useful for trusts with Active Directory. You can set up a DNS reverse zone to prevent emails from your domain to be considered spam by email servers outside of the IdM domain. Installing IdM with integrated DNS has certain limitations: IdM DNS is not meant to be used as a general-purpose DNS server. Some of the advanced DNS functions are not supported. For more information, see DNS services available in an IdM server . This chapter describes how you can install a new IdM server without a certificate authority (CA). 5.1. Certificates required to install an IdM server without a CA You need to provide the certificates required to install an Identity Management (IdM) server without a certificate authority (CA). By using the command-line options described, you can provide these certificates to the ipa-server-install utility. Important You cannot install a server or replica using self-signed third-party server certificates because the imported certificate files must contain the full CA certificate chain of the CA that issued the LDAP and Apache server certificates. The LDAP server certificate and private key --dirsrv-cert-file for the certificate and private key files for the LDAP server certificate --dirsrv-pin for the password to access the private key in the files specified in --dirsrv-cert-file The Apache server certificate and private key --http-cert-file for the certificate and private key files for the Apache server certificate --http-pin for the password to access the private key in the files specified in --http-cert-file The full CA certificate chain of the CA that issued the LDAP and Apache server certificates --dirsrv-cert-file and --http-cert-file for the certificate files with the full CA certificate chain or a part of it You can provide the files specified in the --dirsrv-cert-file and --http-cert-file options in the following formats: Privacy-Enhanced Mail (PEM) encoded certificate (RFC 7468). Note that the Identity Management installer accepts concatenated PEM-encoded objects. Distinguished Encoding Rules (DER) PKCS #7 certificate chain objects PKCS #8 private key objects PKCS #12 archives You can specify the --dirsrv-cert-file and --http-cert-file options multiple times to specify multiple files. The certificate files to complete the full CA certificate chain (not needed in some environments) --ca-cert-file for the file or files containing the CA certificate of the CA that issued the LDAP, Apache Server, and Kerberos KDC certificates. Use this option if the CA certificate is not present in the certificate files provided by the other options. The files provided using --dirsrv-cert-file and --http-cert-file combined with the file provided using --ca-cert-file must contain the full CA certificate chain of the CA that issued the LDAP and Apache server certificates. 
The Kerberos key distribution center (KDC) PKINIT certificate and private key If you have a PKINIT certificate, use the following 2 options: --pkinit-cert-file for the Kerberos KDC SSL certificate and private key --pkinit-pin for the password to access the Kerberos KDC private key in the files specified in --pkinit-cert-file If you do not have a PKINIT certificate and want to configure the IdM server with a local KDC with a self-signed certificate, use the following option: --no-pkinit for disabling pkinit setup steps Additional resources For details on what the certificate file formats these options accept, see the ipa-server-install (1) man page. RHEL IdM PKINIT KDC certificate and extensions (Red Hat Knowledgebase) 5.2. Interactive installation During the interactive installation using the ipa-server-install utility, you are asked to supply basic configuration of the system, for example the realm, the administrator's password and the Directory Manager's password. The ipa-server-install installation script creates a log file at /var/log/ipaserver-install.log . If the installation fails, the log can help you identify the problem. Procedure Run the ipa-server-install utility and provide all the required certificates. For example: See Certificates required to install an IdM server without a CA for details on the provided certificates. The script prompts to configure an integrated DNS service. Enter yes or no . In this procedure, we are installing a server with integrated DNS. Note If you want to install a server without integrated DNS, the installation script will not prompt you for DNS configuration as described in the steps below. See Installing an IdM server: Without integrated DNS, with an integrated CA as the root CA for details on the steps for installing a server without DNS. The script prompts for several required settings and offers recommended default values in brackets. To accept a default value, press Enter . To provide a custom value, enter the required value. Warning Plan these names carefully. You will not be able to change them after the installation is complete. Enter the passwords for the Directory Server superuser ( cn=Directory Manager ) and for the Identity Management (IdM) administration system user account ( admin ). The script prompts for per-server DNS forwarders. To configure per-server DNS forwarders, enter yes , and then follow the instructions on the command line. The installation process will add the forwarder IP addresses to the IdM LDAP. For the forwarding policy default settings, see the --forward-policy description in the ipa-dns-install (1) man page. If you do not want to use DNS forwarding, enter no . With no DNS forwarders, hosts in your IdM domain will not be able to resolve names from other, internal, DNS domains in your infrastructure. The hosts will only be left with public DNS servers to resolve their DNS queries. The script prompts to check if any DNS reverse (PTR) records for the IP addresses associated with the server need to be configured. If you run the search and missing reverse zones are discovered, the script asks you whether to create the reverse zones along with the PTR records. Note Using IdM to manage reverse zones is optional. You can use an external DNS service for this purpose instead. Enter yes to confirm the server configuration. The installation script now configures the server. Wait for the operation to complete. 
After the installation script completes, update your DNS records in the following way: Add DNS delegation from the parent domain to the IdM DNS domain. For example, if the IdM DNS domain is idm.example.com , add a name server (NS) record to the example.com parent domain. Important Repeat this step each time after an IdM DNS server is installed. Add an _ntp._udp service (SRV) record for your time server to your IdM DNS. The presence of the SRV record for the time server of the newly-installed IdM server in IdM DNS ensures that future replica and client installations are automatically configured to synchronize with the time server used by this primary IdM server.
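The DNS updates above can be sketched as follows, assuming the placeholder names used in this chapter: the IdM DNS domain idm.example.com, the parent zone example.com, and a server named server.idm.example.com with the address 192.0.2.1. The delegation records belong in the parent zone on its own DNS server; the SRV record is added on the IdM server. Verify the exact record syntax against your environment before applying it.
# In the example.com zone on the parent DNS server, delegate the IdM domain:
# idm.example.com.        IN NS server.idm.example.com.
# server.idm.example.com. IN A  192.0.2.1
# On the IdM server, publish an SRV record for the time server in IdM DNS:
kinit admin
ipa dnsrecord-add idm.example.com _ntp._udp --srv-rec="0 100 123 server.idm.example.com."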
[ "ipa-server-install --http-cert-file /tmp/server.crt --http-cert-file /tmp/server.key --http-pin secret --dirsrv-cert-file /tmp/server.crt --dirsrv-cert-file /tmp/server.key --dirsrv-pin secret --ca-cert-file ca.crt", "Do you want to configure integrated DNS (BIND)? [no]: yes", "Server host name [server.idm.example.com]: Please confirm the domain name [idm.example.com]: Please provide a realm name [IDM.EXAMPLE.COM]:", "Directory Manager password: IPA admin password:", "Do you want to configure DNS forwarders? [yes]:", "Do you want to search for missing reverse zones? [yes]:", "Do you want to create reverse zone for IP 192.0.2.1 [yes]: Please specify the reverse zone name [2.0.192.in-addr.arpa.]: Using reverse zone(s) 2.0.192.in-addr.arpa.", "Continue to configure the system with these values? [no]: yes" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/8/html/installing_identity_management/installing-an-ipa-server-without-a-ca_installing-identity-management
Chapter 1. Running GitOps control plane workloads on infrastructure nodes
Chapter 1. Running GitOps control plane workloads on infrastructure nodes You can use infrastructure nodes to isolate infrastructure workloads for two primary purposes: To prevent billing costs associated with the number of subscriptions To separate maintenance and management You can use OpenShift Container Platform to run GitOps control plane workloads on infrastructure nodes. This includes the Operator pod and the control plane workloads created by the Red Hat OpenShift GitOps Operator in the openshift-gitops namespace by default, including the default Argo CD instance in this namespace. With GitOps control plane workloads, you can securely and declaratively isolate the infrastructure workloads by creating multiple isolated Argo CD instances in a cluster, with full control over what an Argo CD instance is capable of. In addition, you can manage these Argo CD instances declaratively across multiple developer namespaces. By using taints, you can ensure that only infrastructure components run on these nodes. Note All other Argo CD instances installed in user namespaces are not eligible to run on infrastructure nodes. 1.1. Moving GitOps control plane workloads to infrastructure nodes You can move the GitOps control plane workloads installed by Red Hat OpenShift GitOps to the infrastructure nodes. The following are the control plane workloads that you can move: kam deployment cluster deployment (backend service) openshift-gitops-applicationset-controller deployment openshift-gitops-dex-server deployment openshift-gitops-redis deployment openshift-gitops-redis-ha-haproxy deployment openshift-gitops-repo-server deployment openshift-gitops-server deployment openshift-gitops-application-controller statefulset openshift-gitops-redis-server statefulset Procedure Label existing nodes as infrastructure by running the following command: USD oc label node <node-name> node-role.kubernetes.io/infra= Edit the GitOpsService custom resource (CR) to add the infrastructure node selector: USD oc edit gitopsservice -n openshift-gitops In the GitOpsService CR file, add the runOnInfra field to the spec section and set it to true . This field moves the control plane workloads in the openshift-gitops namespace to the infrastructure nodes: apiVersion: pipelines.openshift.io/v1alpha1 kind: GitopsService metadata: name: cluster spec: runOnInfra: true Optional: Apply taints to isolate the workloads on infrastructure nodes and prevent other workloads from being scheduled on these nodes. USD oc adm taint nodes -l node-role.kubernetes.io/infra infra=reserved:NoSchedule infra=reserved:NoExecute Optional: If you apply taints to the nodes, you can add tolerations in the GitOpsService CR: spec: runOnInfra: true tolerations: - effect: NoSchedule key: infra value: reserved - effect: NoExecute key: infra value: reserved To verify that the workloads are scheduled on infrastructure nodes in the Red Hat OpenShift GitOps namespace, click any of the pod names and ensure that the Node selector and Tolerations have been added. Note Any manually added Node selectors and Tolerations in the default Argo CD CR will be overwritten by the toggle and the tolerations in the GitOpsService CR. 1.2. Moving the GitOps Operator pod to infrastructure nodes You can move the GitOps Operator pod to the infrastructure nodes. Prerequisites You have installed the Red Hat OpenShift GitOps Operator in your cluster. You have access to the cluster with cluster-admin privileges. 
Procedure Label an existing node as infrastructure node by running the following command: USD oc label node <node_name> node-role.kubernetes.io/infra= 1 1 Replace <node_name> with the name of the node you want to label as infrastructure node. Example output node/<node_name> labeled Edit the Red Hat OpenShift GitOps Subscription resource by running the following command: USD oc -n openshift-gitops-operator edit subscription openshift-gitops-operator Add nodeSelector and tolerations to the spec.config field in the Subscription resource: Example Subscription apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: openshift-gitops-operator namespace: openshift-gitops-operator spec: config: nodeSelector: 1 node-role.kubernetes.io/infra: "" tolerations: 2 - key: node-role.kubernetes.io/infra operator: Exists effect: NoSchedule 1 This ensures that the operator pod is only scheduled on an infrastructure node. 2 This ensures that the pod is accepted by the infrastructure node. Example output subscription.operators.coreos.com/openshift-gitops-operator edited Verify that the GitOps Operator pod is running on the infrastructure node by running the following command: USD oc -n openshift-gitops-operator get po -owide Example output NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES openshift-gitops-operator-controller-manager-abcd 2/2 Running 0 11m 94.142.44.126 <node_name> <none> <none> 1 1 Ensure that the listed <node_name> is the node with the node-role.kubernetes.io/infra label. 1.3. Additional resources For more information about taints and tolerations, see Controlling pod placement using node taints . For more information about infrastructure machine sets, see Creating infrastructure machine sets .
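The web console checks described above can also be performed from the command line; the following is a hedged sketch in which <operator_pod_name> is a placeholder for the pod name returned by the get command.
# Confirm which nodes carry the infrastructure label:
oc get nodes -l node-role.kubernetes.io/infra
# Confirm that the control plane workloads in openshift-gitops were scheduled onto those nodes:
oc get pods -n openshift-gitops -o wide
# Confirm the Operator pod placement and inspect its applied tolerations:
oc -n openshift-gitops-operator get pods -o wide
oc -n openshift-gitops-operator get pod <operator_pod_name> -o jsonpath='{.spec.tolerations}'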
[ "oc label node <node-name> node-role.kubernetes.io/infra=", "oc edit gitopsservice -n openshift-gitops", "apiVersion: pipelines.openshift.io/v1alpha1 kind: GitopsService metadata: name: cluster spec: runOnInfra: true", "oc adm taint nodes -l node-role.kubernetes.io/infra infra=reserved:NoSchedule infra=reserved:NoExecute", "spec: runOnInfra: true tolerations: - effect: NoSchedule key: infra value: reserved - effect: NoExecute key: infra value: reserved", "oc label node <node_name> node-role.kubernetes.io/infra= 1", "node/<node_name> labeled", "oc -n openshift-gitops-operator edit subscription openshift-gitops-operator", "apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: openshift-gitops-operator namespace: openshift-gitops-operator spec: config: nodeSelector: 1 node-role.kubernetes.io/infra: \"\" tolerations: 2 - key: node-role.kubernetes.io/infra operator: Exists effect: NoSchedule", "subscription.operators.coreos.com/openshift-gitops-operator edited", "oc -n openshift-gitops-operator get po -owide", "NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES openshift-gitops-operator-controller-manager-abcd 2/2 Running 0 11m 94.142.44.126 <node_name> <none> <none> 1" ]
https://docs.redhat.com/en/documentation/red_hat_openshift_gitops/1.11/html/gitops_workloads_on_infrastructure_nodes/running-gitops-control-plane-workloads-on-infrastructure-nodes
Chapter 4. Decision environments
Chapter 4. Decision environments Decision environments are container images used to run Ansible rulebooks. They create a common language for communicating automation dependencies, and provide a standard way to build and distribute the automation environment. The default decision environment is found in the Ansible-Rulebook . To create your own decision environment, refer to Building a custom decision environment for Event-Driven Ansible within Ansible Automation Platform . 4.1. Setting up a new decision environment The following steps describe how to import a decision environment into your Event-Driven Ansible controller Dashboard. Prerequisites You are logged in to the Event-Driven Ansible controller Dashboard as a Content Consumer. You have set up a credential, if necessary. For more information, see the Setting up credentials section. You have pushed a decision environment image to an image repository, or you choose to use the de-supported image provided at registry.redhat.io . Procedure Navigate to the Event-Driven Ansible controller Dashboard. From the navigation panel, select Decision Environments . Insert the following: Name Insert the name. Description This field is optional. Image This is the full image location, including the container registry, image name, and version tag. Credential This field is optional. This is the token needed to utilize the decision environment image. Select Create decision environment . Your decision environment is now created and can be managed on the Decision Environments screen. After saving the new decision environment, the decision environment's details page is displayed. From there or the Decision Environments list view, you can edit or delete it. 4.2. Building a custom decision environment for Event-Driven Ansible within Ansible Automation Platform Use the following instructions if you need a custom decision environment to provide a custom-maintained or third-party event source plugin that is not available in the default decision environment. Prerequisites Ansible Automation Platform >= 2.4 Event-Driven Ansible Ansible Builder >= 3.0 Procedure Add the de-supported decision environment. This image is built from a base image provided by Red Hat called de-minimal . Note Red Hat recommends using de-minimal as the base image with Ansible Builder to build your custom decision environments. The following is an example of the Ansible Builder definition file that uses de-minimal as a base image to build a custom decision environment with the ansible.eda collection: version: 3 images: base_image: name: 'registry.redhat.io/ansible-automation-platform-24/de-minimal-rhel8:latest' dependencies: galaxy: collections: - ansible.eda python_interpreter: package_system: "python39" options: package_manager_path: /usr/bin/microdnf Additionally, if you need other Python packages or RPMs, you can add the following to a single definition file: version: 3 images: base_image: name: 'registry.redhat.io/ansible-automation-platform-24/de-minimal-rhel8:latest' dependencies: galaxy: collections: - ansible.eda python: - six - psutil system: - iputils [platform:rpm] python_interpreter: package_system: "python39" options: package_manager_path: /usr/bin/microdnf
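As a rough sketch of how a definition file such as the ones above might be turned into an image, assuming the file is saved as decision-environment.yml and that quay.io/example/custom-de is a placeholder repository path; check the ansible-builder and podman options against your installed versions.
# Build the custom decision environment image from the definition file:
ansible-builder build -f decision-environment.yml -t quay.io/example/custom-de:latest
# Push it to the registry that you will reference in the Image field of the Dashboard form:
podman push quay.io/example/custom-de:latest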
[ "version: 3 images: base_image: name: 'registry.redhat.io/ansible-automation-platform-24/de-minimal-rhel8:latest' dependencies: galaxy: collections: - ansible.eda python_interpreter: package_system: \"python39\" options: package_manager_path: /usr/bin/microdnf", "version: 3 images: base_image: name: 'registry.redhat.io/ansible-automation-platform-24/de-minimal-rhel8:latest' dependencies: galaxy: collections: - ansible.eda python: - six - psutil system: - iputils [platform:rpm] python_interpreter: package_system: \"python39\" options: package_manager_path: /usr/bin/microdnf" ]
https://docs.redhat.com/en/documentation/red_hat_ansible_automation_platform/2.4/html/event-driven_ansible_controller_user_guide/eda-decision-environments
4.7. Synchronizing Configuration Files
4.7. Synchronizing Configuration Files After configuring the primary LVS router, there are several configuration files that must be copied to the backup LVS router before you start LVS. These files include: /etc/sysconfig/ha/lvs.cf - the configuration file for the LVS routers. /etc/sysctl.conf - the configuration file that, among other things, turns on packet forwarding in the kernel. /etc/sysconfig/iptables - If you are using firewall marks, you should synchronize one of these files based on which network packet filter you are using. Important The /etc/sysctl.conf and /etc/sysconfig/iptables files do not change when you configure LVS using the Piranha Configuration Tool . 4.7.1. Synchronizing lvs.cf Anytime the LVS configuration file, /etc/sysconfig/ha/lvs.cf , is created or updated, you must copy it to the backup LVS router node. Warning Both the active and backup LVS router nodes must have identical lvs.cf files. Mismatched LVS configuration files between the LVS router nodes can prevent failover. The best way to do this is to use the scp command. Important To use scp , the sshd service must be running on the backup router. See Section 2.1, "Configuring Services on the LVS Routers" for details on how to properly configure the necessary services on the LVS routers. Issue the following command as the root user from the primary LVS router to sync the lvs.cf files between the router nodes: scp /etc/sysconfig/ha/lvs.cf n.n.n.n :/etc/sysconfig/ha/lvs.cf In the command, replace n.n.n.n with the real IP address of the backup LVS router.
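A short sketch that extends the scp example to the other files listed in this section, assuming 192.0.2.2 is a placeholder for the backup router's real IP address and that firewall marks are in use; run the commands as root from the primary LVS router.
scp /etc/sysconfig/ha/lvs.cf 192.0.2.2:/etc/sysconfig/ha/lvs.cf
scp /etc/sysctl.conf 192.0.2.2:/etc/sysctl.conf
scp /etc/sysconfig/iptables 192.0.2.2:/etc/sysconfig/iptables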
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/4/html/virtual_server_administration/s1-lvs-sync-vsa
Chapter 18. Virtual Networking
Chapter 18. Virtual Networking This chapter introduces the concepts needed to create, start, stop, remove, and modify virtual networks with libvirt. Additional information can be found in the libvirt reference chapter. 18.1. Virtual Network Switches Libvirt virtual networking uses the concept of a virtual network switch . A virtual network switch is a software construct that operates on a host physical machine server, to which virtual machines (guests) connect. The network traffic for a guest is directed through this switch: Figure 18.1. Virtual network switch with two guests Linux host physical machine servers represent a virtual network switch as a network interface. When the libvirt daemon ( libvirtd ) is first installed and started, the default network interface representing the virtual network switch is virbr0 . This virbr0 interface can be viewed with the ip command like any other interface:
[ "ip addr show virbr0 3: virbr0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UNKNOWN link/ether 1b:c4:94:cf:fd:17 brd ff:ff:ff:ff:ff:ff inet 192.168.122.1/24 brd 192.168.122.255 scope global virbr0" ]
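In addition to inspecting the interface with ip , the same default network can be examined through libvirt itself. The following is a hedged sketch; the output varies by host.
# List the libvirt virtual networks known to this host:
virsh net-list --all
# Show the state and the XML definition of the default network backing virbr0:
virsh net-info default
virsh net-dumpxml default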
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/virtualization_administration_guide/chap-virtualization_administration_guide-virtual_networking
Chapter 10. Supplementary Workstation Variant
Chapter 10. Supplementary Workstation Variant The following table lists all the packages in the Supplementary Workstation variant. For more information about support scope, see the Scope of Coverage Details document. Package Core Package? License acroread No Commercial acroread-plugin No Commercial chromium-browser No BSD and LGPLv2+ flash-plugin No Commercial java-1.5.0-ibm No IBM Binary Code License java-1.5.0-ibm-demo No IBM Binary Code License java-1.5.0-ibm-devel No IBM Binary Code License java-1.5.0-ibm-javacomm No IBM Binary Code License java-1.5.0-ibm-jdbc No IBM Binary Code License java-1.5.0-ibm-plugin No IBM Binary Code License java-1.5.0-ibm-src No IBM Binary Code License java-1.6.0-ibm No IBM Binary Code License java-1.6.0-ibm-demo No IBM Binary Code License java-1.6.0-ibm-devel No IBM Binary Code License java-1.6.0-ibm-javacomm No IBM Binary Code License java-1.6.0-ibm-jdbc No IBM Binary Code License java-1.6.0-ibm-plugin No IBM Binary Code License java-1.6.0-ibm-src No IBM Binary Code License java-1.7.1-ibm No IBM Binary Code License java-1.7.1-ibm-demo No IBM Binary Code License java-1.7.1-ibm-devel No IBM Binary Code License java-1.7.1-ibm-jdbc No IBM Binary Code License java-1.7.1-ibm-plugin No IBM Binary Code License java-1.7.1-ibm-src No IBM Binary Code License java-1.8.0-ibm No IBM Binary Code License java-1.8.0-ibm-demo No IBM Binary Code License java-1.8.0-ibm-devel No IBM Binary Code License java-1.8.0-ibm-jdbc No IBM Binary Code License java-1.8.0-ibm-plugin No IBM Binary Code License java-1.8.0-ibm-src No IBM Binary Code License kmod-kspiceusb-rhel60 No GPLv2 spice-usb-share No Redistributable, no modification permitted system-switch-java No GPLv2+ virtio-win No Red Hat Proprietary and GPLv2
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/package_manifest/chap-supplementary-workstation-variant
2.11.3. Finding Hierarchies
2.11.3. Finding Hierarchies It is recommended that you mount hierarchies under the /cgroup/ directory. Assuming this is the case on your system, list or browse the contents of that directory to obtain a list of hierarchies. If the tree utility is installed on your system, run it to obtain an overview of all hierarchies and the cgroups within them:
[ "~]USD tree /cgroup" ]
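If the tree utility is not installed, the mounted hierarchies can also be inspected directly through standard kernel interfaces, or with the lssubsys utility from the libcgroup package if it is available; a brief sketch follows.
# Show which subsystems are enabled and how many hierarchies use them:
cat /proc/cgroups
# List the cgroup mount points recorded by the kernel:
grep cgroup /proc/mounts
# With libcgroup installed, list all subsystems together with their mount points:
lssubsys -am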
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/resource_management_guide/finding_hierarchies
Chapter 5. Managing your JBoss EAP server installation using the Management CLI
Chapter 5. Managing your JBoss EAP server installation using the Management CLI In JBoss EAP 8.0, we have integrated the jboss-eap-installation-manager into the JBoss EAP server management model, allowing you to update and revert your remote server installations without the need to log in to the remote machine and use the jboss-eap-installation-manager from the operating system command line. Note If you are updating or reverting a local JBoss EAP installation, the jboss-eap-installation-manager is recommended instead of the Management CLI operations. However, you cannot use the jboss-eap-installation-manager to update or revert a remote JBoss EAP installation. In this case, use the Management CLI. 5.1. Prerequisite You have a JBoss EAP installed. 5.2. Updating JBoss EAP running as a stand-alone server or a managed domain using the Management CLI You can update your JBoss EAP server installation in a stand-alone server or a managed domain using the JBoss EAP Management CLI . The following steps outline the phases of the update process. List update: Before preparing the server to be updated, the installer update command will check for all available updates and provide a list of updates ready to be applied to your JBoss EAP instance. Prepare update: After confirming the available updates, the command will prepare a candidate server ready to be applied to your current installation. The candidate server is prepared in the server temporal directory, which is the directory represented by the file system path jboss.domain.temp.dir in a managed domain or jboss.server.temp.dir in stand-alone server mode. Once the preparation phase is completed, no further server preparations can be made. However, at any time, you can remove the prepared candidate server by cleaning up the manager cache. This action clears the cache and allows the preparation of a different installation, enabling you to start afresh. For more information, see Cleaning the installer . Apply update: Once the candidate server is created, you can apply it to your instance by restarting your JBoss EAP server. Procedure Launch the JBoss EAP Management CLI. EAP_HOME/bin/jboss-cli.sh Update JBoss EAP: Update JBoss EAP in a stand-alone server. [standalone@localhost:9990 /] installer update Update JBoss EAP in a managed domain [domain@localhost:9990 /] installer update --host=target-host Restart your JBoss EAP server to complete the update process: Note You must ensure that no other processes are launched from the JBOSS_EAP/bin folder, such as JBOSS_EAP/bin/jconsole.sh and JBOSS_EAP/bin/appclient.sh , when restarting the server with the --perform-installation option. This precaution prevents conflicts in writing files that might be in use by other processes during the server's update. Restart your JBoss EAP server in a stand-alone server. [standalone@localhost:9990 /] shutdown --perform-installation Restart your JBoss EAP server in a managed domain. [domain@localhost:9990 /] shutdown --host=target-host --perform-installation Note For more information about additional command options use the help command. Additional resources JBoss EAP Management CLI overview . 5.3. Updating your JBoss EAP server offline using the Management CLI The following example describes how to use the Management CLI to update JBoss EAP offline in a stand-alone server and a managed domain. This is useful in scenarios where the target server installation lacks access to external Maven repositories. You can use the Management CLI to update your server. 
To do so, you need to specify the location of the Maven repository that contains the required artifacts to update your server. You can download the Maven repository for your update from the Red Hat Customer Portal Prerequisite You have the Maven archive repository containing the required artifacts locally on your machine. Procedure Launch the Management CLI: EAP_HOME/bin/jboss-cli.sh Update JBoss EAP offline: Update JBoss EAP offline in a stand-alone server: [standalone@localhost:9990 /] installer update --maven-repo-files=<An absolute or a relative path pointing to the local archive file that contains a maven repository> Update JBoss EAP offline in a managed domain: [domain@localhost:9990 /] installer update --host=target-host --maven-repo-files=<An absolute or a relative path pointing to the local archive file that contains a maven repository> Restart your JBoss EAP server to complete the update process: Note You must ensure that no other processes are launched from the JBOSS_EAP/bin folder, such as JBOSS_EAP/bin/jconsole.sh and JBOSS_EAP/bin/appclient.sh , when restarting the server with the --perform-installation option. This precaution prevents conflicts in writing files that might be in use by other processes during the server's update. Restart your JBoss EAP server in a stand-alone server: [standalone@localhost:9990 /] shutdown --perform-installation Restart your JBoss EAP server in a managed domain: [domain@localhost:9990 /] shutdown --host=target-host --perform-installation Additional resources Get Started with the Management CLI . Red Hat Customer Portal . Applying one-offs patches to your JBoss EAP 8.0 server .
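Because the same management model also supports reverting an update and discarding a prepared candidate server, a hedged sketch of those operations follows; the operation names and arguments ( installer history , installer revert , installer clean ) are assumptions based on the jboss-eap-installation-manager integration, so verify them with the help command in your Management CLI.
List the installation states recorded for the server: [standalone@localhost:9990 /] installer history
Revert to an earlier state identified by its revision: [standalone@localhost:9990 /] installer revert --revision=<revision_id>
Discard a prepared candidate server without applying it: [standalone@localhost:9990 /] installer clean
As with an update, complete a revert by restarting the server: [standalone@localhost:9990 /] shutdown --perform-installation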
[ "EAP_HOME/bin/jboss-cli.sh", "[standalone@localhost:9990 /] installer update", "[domain@localhost:9990 /] installer update --host=target-host", "[standalone@localhost:9990 /] shutdown --perform-installation", "[domain@localhost:9990 /] shutdown --host=target-host --perform-installation", "EAP_HOME/bin/jboss-cli.sh", "[standalone@localhost:9990 /] installer update --maven-repo-files=<An absolute or a relative path pointing to the local archive file that contains a maven repository>", "[domain@localhost:9990 /] installer update --host=target-host --maven-repo-files=<An absolute or a relative path pointing to the local archive file that contains a maven repository>", "[standalone@localhost:9990 /] shutdown --perform-installation", "[domain@localhost:9990 /] shutdown --host=target-host --perform-installation" ]
https://docs.redhat.com/en/documentation/red_hat_jboss_enterprise_application_platform/8.0/html/updating_red_hat_jboss_enterprise_application_platform/assembly_updating-a-jboss-eap-server-installation_default
function::set_kernel_char
function::set_kernel_char Name function::set_kernel_char - Writes a char value to kernel memory Synopsis Arguments addr The kernel address to write the char to val The char which is to be written Description Writes the char value to a given kernel memory address. Reports an error when writing to the given address fails. Requires the use of guru mode (-g).
[ "set_kernel_char(addr:long,val:long)" ]
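A minimal, illustrative invocation follows, with <kernel_address> as a placeholder for a numeric kernel address that you have verified is safe to modify; writing to arbitrary kernel memory can crash the system, so treat this strictly as a syntax sketch.
# Guru mode (-g) is required; the address is passed as the numeric argument $1.
stap -g -e 'probe begin { set_kernel_char($1, 0); exit() }' <kernel_address>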
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/systemtap_tapset_reference/api-set-kernel-char
Chapter 6. Configuration from sample environment file
Chapter 6. Configuration from sample environment file The environment file that you created in Creating the custom back end environment file configures the Block Storage service to use two NetApp back ends. The following snippet displays the relevant settings:
[ "enabled_backends = netapp1,netapp2 [netapp1] volume_backend_name=netapp_1 volume_driver=cinder.volume.drivers.netapp.common.NetAppDriver netapp_login=root netapp_storage_protocol=iscsi netapp_password=p@USDUSDw0rd netapp_storage_family=ontap_7mode netapp_server_port=80 netapp_server_hostname=10.35.64.11 [netapp2] volume_backend_name=netapp_2 volume_driver=cinder.volume.drivers.netapp.common.NetAppDriver netapp_login=root netapp_storage_protocol=iscsi netapp_password=p@USDUSDw0rd netapp_storage_family=ontap_7mode netapp_server_port=80 netapp_server_hostname=10.35.64.11" ]
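After the two back ends are enabled, each one is typically exposed to users through a Block Storage volume type keyed on its volume_backend_name. The following is a hedged sketch using the OpenStack client and the names from the snippet above; run it from an environment with admin credentials loaded.
openstack volume type create netapp_1
openstack volume type set --property volume_backend_name=netapp_1 netapp_1
openstack volume type create netapp_2
openstack volume type set --property volume_backend_name=netapp_2 netapp_2
# Create a test volume on a specific back end by requesting its type:
openstack volume create --size 10 --type netapp_1 netapp1_test_volume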
https://docs.redhat.com/en/documentation/red_hat_openstack_platform/16.2/html/custom_block_storage_back_end_deployment_guide/ref_configuration-sample-environment-file_custom-cinder-back-end
Preface
Preface Thank you for your interest in Red Hat Ansible Automation Platform. Ansible Automation Platform is a commercial offering that helps teams manage complex multitiered deployments by adding control, knowledge, and delegation to Ansible-powered environments. Use the information in this guide to plan your Red Hat Ansible Automation Platform installation.
null
https://docs.redhat.com/en/documentation/red_hat_ansible_automation_platform/2.5/html/planning_your_installation/pr01
Chapter 8. Secret [v1]
Chapter 8. Secret [v1] Description Secret holds secret data of a certain type. The total bytes of the values in the Data field must be less than MaxSecretSize bytes. Type object 8.1. Specification Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources data object (string) Data contains the secret data. Each key must consist of alphanumeric characters, '-', '_' or '.'. The serialized form of the secret data is a base64 encoded string, representing the arbitrary (possibly non-string) data value here. Described in https://tools.ietf.org/html/rfc4648#section-4 immutable boolean Immutable, if set to true, ensures that data stored in the Secret cannot be updated (only object metadata can be modified). If not set to true, the field can be modified at any time. Defaulted to nil. kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ObjectMeta Standard object's metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata stringData object (string) stringData allows specifying non-binary secret data in string form. It is provided as a write-only input field for convenience. All keys and values are merged into the data field on write, overwriting any existing values. The stringData field is never output when reading from the API. type string Used to facilitate programmatic handling of secret data. More info: https://kubernetes.io/docs/concepts/configuration/secret/#secret-types 8.2. API endpoints The following API endpoints are available: /api/v1/secrets GET : list or watch objects of kind Secret /api/v1/watch/secrets GET : watch individual changes to a list of Secret. deprecated: use the 'watch' parameter with a list operation instead. /api/v1/namespaces/{namespace}/secrets DELETE : delete collection of Secret GET : list or watch objects of kind Secret POST : create a Secret /api/v1/watch/namespaces/{namespace}/secrets GET : watch individual changes to a list of Secret. deprecated: use the 'watch' parameter with a list operation instead. /api/v1/namespaces/{namespace}/secrets/{name} DELETE : delete a Secret GET : read the specified Secret PATCH : partially update the specified Secret PUT : replace the specified Secret /api/v1/watch/namespaces/{namespace}/secrets/{name} GET : watch changes to an object of kind Secret. deprecated: use the 'watch' parameter with a list operation instead, filtered to a single item with the 'fieldSelector' parameter. 8.2.1. /api/v1/secrets Table 8.1. Global query parameters Parameter Type Description allowWatchBookmarks boolean allowWatchBookmarks requests watch events with type "BOOKMARK". Servers that do not implement bookmarks may ignore this flag and bookmarks are sent at the server's discretion. Clients should not assume bookmarks are returned at any specific interval, nor may they assume the server will send any BOOKMARK event during a session. If this is not a watch, this field is ignored. continue string The continue option should be set when retrieving more results from the server. 
Since this value is server defined, clients may only use the continue value from a query result with identical query parameters (except for the value of continue) and the server may reject a continue value it does not recognize. If the specified continue value is no longer valid whether due to expiration (generally five to fifteen minutes) or a configuration change on the server, the server will respond with a 410 ResourceExpired error together with a continue token. If the client needs a consistent list, it must restart their list without the continue field. Otherwise, the client may send another list request with the token received with the 410 error, the server will respond with a list starting from the key, but from the latest snapshot, which is inconsistent from the list results - objects that are created, modified, or deleted after the first list request will be included in the response, as long as their keys are after the " key". This field is not supported when watch is true. Clients may start a watch from the last resourceVersion value returned by the server and not miss any modifications. fieldSelector string A selector to restrict the list of returned objects by their fields. Defaults to everything. labelSelector string A selector to restrict the list of returned objects by their labels. Defaults to everything. limit integer limit is a maximum number of responses to return for a list call. If more items exist, the server will set the continue field on the list metadata to a value that can be used with the same initial query to retrieve the set of results. Setting a limit may return fewer than the requested amount of items (up to zero items) in the event all requested objects are filtered out and clients should only use the presence of the continue field to determine whether more results are available. Servers may choose not to support the limit argument and will return all of the available results. If limit is specified and the continue field is empty, clients may assume that no more results are available. This field is not supported if watch is true. The server guarantees that the objects returned when using continue will be identical to issuing a single list call without a limit - that is, no objects created, modified, or deleted after the first request is issued will be included in any subsequent continued requests. This is sometimes referred to as a consistent snapshot, and ensures that a client that is using limit to receive smaller chunks of a very large result can ensure they see all possible objects. If objects are updated during a chunked list the version of the object that was present at the time the first list result was calculated is returned. pretty string If 'true', then the output is pretty printed. resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset resourceVersionMatch string resourceVersionMatch determines how resourceVersion is applied to list calls. It is highly recommended that resourceVersionMatch be set for list calls where resourceVersion is set See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset sendInitialEvents boolean sendInitialEvents=true may be set together with watch=true . In that case, the watch stream will begin with synthetic events to produce the current state of objects in the collection. 
Once all such events have been sent, a synthetic "Bookmark" event will be sent. The bookmark will report the ResourceVersion (RV) corresponding to the set of objects, and be marked with "k8s.io/initial-events-end": "true" annotation. Afterwards, the watch stream will proceed as usual, sending watch events corresponding to changes (subsequent to the RV) to objects watched. When sendInitialEvents option is set, we require resourceVersionMatch option to also be set. The semantic of the watch request is as following: - resourceVersionMatch = NotOlderThan is interpreted as "data at least as new as the provided resourceVersion`" and the bookmark event is send when the state is synced to a `resourceVersion at least as fresh as the one provided by the ListOptions. If resourceVersion is unset, this is interpreted as "consistent read" and the bookmark event is send when the state is synced at least to the moment when request started being processed. - resourceVersionMatch set to any other value or unset Invalid error is returned. Defaults to true if resourceVersion="" or resourceVersion="0" (for backward compatibility reasons) and to false otherwise. timeoutSeconds integer Timeout for the list/watch call. This limits the duration of the call, regardless of any activity or inactivity. watch boolean Watch for changes to the described resources and return them as a stream of add, update, and remove notifications. Specify resourceVersion. HTTP method GET Description list or watch objects of kind Secret Table 8.2. HTTP responses HTTP code Reponse body 200 - OK SecretList schema 401 - Unauthorized Empty 8.2.2. /api/v1/watch/secrets Table 8.3. Global query parameters Parameter Type Description allowWatchBookmarks boolean allowWatchBookmarks requests watch events with type "BOOKMARK". Servers that do not implement bookmarks may ignore this flag and bookmarks are sent at the server's discretion. Clients should not assume bookmarks are returned at any specific interval, nor may they assume the server will send any BOOKMARK event during a session. If this is not a watch, this field is ignored. continue string The continue option should be set when retrieving more results from the server. Since this value is server defined, clients may only use the continue value from a query result with identical query parameters (except for the value of continue) and the server may reject a continue value it does not recognize. If the specified continue value is no longer valid whether due to expiration (generally five to fifteen minutes) or a configuration change on the server, the server will respond with a 410 ResourceExpired error together with a continue token. If the client needs a consistent list, it must restart their list without the continue field. Otherwise, the client may send another list request with the token received with the 410 error, the server will respond with a list starting from the key, but from the latest snapshot, which is inconsistent from the list results - objects that are created, modified, or deleted after the first list request will be included in the response, as long as their keys are after the " key". This field is not supported when watch is true. Clients may start a watch from the last resourceVersion value returned by the server and not miss any modifications. fieldSelector string A selector to restrict the list of returned objects by their fields. Defaults to everything. labelSelector string A selector to restrict the list of returned objects by their labels. Defaults to everything. 
limit integer limit is a maximum number of responses to return for a list call. If more items exist, the server will set the continue field on the list metadata to a value that can be used with the same initial query to retrieve the set of results. Setting a limit may return fewer than the requested amount of items (up to zero items) in the event all requested objects are filtered out and clients should only use the presence of the continue field to determine whether more results are available. Servers may choose not to support the limit argument and will return all of the available results. If limit is specified and the continue field is empty, clients may assume that no more results are available. This field is not supported if watch is true. The server guarantees that the objects returned when using continue will be identical to issuing a single list call without a limit - that is, no objects created, modified, or deleted after the first request is issued will be included in any subsequent continued requests. This is sometimes referred to as a consistent snapshot, and ensures that a client that is using limit to receive smaller chunks of a very large result can ensure they see all possible objects. If objects are updated during a chunked list the version of the object that was present at the time the first list result was calculated is returned. pretty string If 'true', then the output is pretty printed. resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset resourceVersionMatch string resourceVersionMatch determines how resourceVersion is applied to list calls. It is highly recommended that resourceVersionMatch be set for list calls where resourceVersion is set See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset sendInitialEvents boolean sendInitialEvents=true may be set together with watch=true . In that case, the watch stream will begin with synthetic events to produce the current state of objects in the collection. Once all such events have been sent, a synthetic "Bookmark" event will be sent. The bookmark will report the ResourceVersion (RV) corresponding to the set of objects, and be marked with "k8s.io/initial-events-end": "true" annotation. Afterwards, the watch stream will proceed as usual, sending watch events corresponding to changes (subsequent to the RV) to objects watched. When sendInitialEvents option is set, we require resourceVersionMatch option to also be set. The semantic of the watch request is as following: - resourceVersionMatch = NotOlderThan is interpreted as "data at least as new as the provided resourceVersion`" and the bookmark event is send when the state is synced to a `resourceVersion at least as fresh as the one provided by the ListOptions. If resourceVersion is unset, this is interpreted as "consistent read" and the bookmark event is send when the state is synced at least to the moment when request started being processed. - resourceVersionMatch set to any other value or unset Invalid error is returned. Defaults to true if resourceVersion="" or resourceVersion="0" (for backward compatibility reasons) and to false otherwise. timeoutSeconds integer Timeout for the list/watch call. This limits the duration of the call, regardless of any activity or inactivity. 
watch boolean Watch for changes to the described resources and return them as a stream of add, update, and remove notifications. Specify resourceVersion. HTTP method GET Description watch individual changes to a list of Secret. deprecated: use the 'watch' parameter with a list operation instead. Table 8.4. HTTP responses HTTP code Reponse body 200 - OK WatchEvent schema 401 - Unauthorized Empty 8.2.3. /api/v1/namespaces/{namespace}/secrets Table 8.5. Global path parameters Parameter Type Description namespace string object name and auth scope, such as for teams and projects Table 8.6. Global query parameters Parameter Type Description pretty string If 'true', then the output is pretty printed. HTTP method DELETE Description delete collection of Secret Table 8.7. Query parameters Parameter Type Description continue string The continue option should be set when retrieving more results from the server. Since this value is server defined, clients may only use the continue value from a query result with identical query parameters (except for the value of continue) and the server may reject a continue value it does not recognize. If the specified continue value is no longer valid whether due to expiration (generally five to fifteen minutes) or a configuration change on the server, the server will respond with a 410 ResourceExpired error together with a continue token. If the client needs a consistent list, it must restart their list without the continue field. Otherwise, the client may send another list request with the token received with the 410 error, the server will respond with a list starting from the key, but from the latest snapshot, which is inconsistent from the list results - objects that are created, modified, or deleted after the first list request will be included in the response, as long as their keys are after the " key". This field is not supported when watch is true. Clients may start a watch from the last resourceVersion value returned by the server and not miss any modifications. dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldSelector string A selector to restrict the list of returned objects by their fields. Defaults to everything. gracePeriodSeconds integer The duration in seconds before the object should be deleted. Value must be non-negative integer. The value zero indicates delete immediately. If this value is nil, the default grace period for the specified type will be used. Defaults to a per object value if not specified. zero means delete immediately. labelSelector string A selector to restrict the list of returned objects by their labels. Defaults to everything. limit integer limit is a maximum number of responses to return for a list call. If more items exist, the server will set the continue field on the list metadata to a value that can be used with the same initial query to retrieve the set of results. Setting a limit may return fewer than the requested amount of items (up to zero items) in the event all requested objects are filtered out and clients should only use the presence of the continue field to determine whether more results are available. Servers may choose not to support the limit argument and will return all of the available results. 
If limit is specified and the continue field is empty, clients may assume that no more results are available. This field is not supported if watch is true. The server guarantees that the objects returned when using continue will be identical to issuing a single list call without a limit - that is, no objects created, modified, or deleted after the first request is issued will be included in any subsequent continued requests. This is sometimes referred to as a consistent snapshot, and ensures that a client that is using limit to receive smaller chunks of a very large result can ensure they see all possible objects. If objects are updated during a chunked list the version of the object that was present at the time the first list result was calculated is returned. orphanDependents boolean Deprecated: please use the PropagationPolicy, this field will be deprecated in 1.7. Should the dependent objects be orphaned. If true/false, the "orphan" finalizer will be added to/removed from the object's finalizers list. Either this field or PropagationPolicy may be set, but not both. propagationPolicy string Whether and how garbage collection will be performed. Either this field or OrphanDependents may be set, but not both. The default policy is decided by the existing finalizer set in the metadata.finalizers and the resource-specific default policy. Acceptable values are: 'Orphan' - orphan the dependents; 'Background' - allow the garbage collector to delete the dependents in the background; 'Foreground' - a cascading policy that deletes all dependents in the foreground. resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset resourceVersionMatch string resourceVersionMatch determines how resourceVersion is applied to list calls. It is highly recommended that resourceVersionMatch be set for list calls where resourceVersion is set See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset sendInitialEvents boolean sendInitialEvents=true may be set together with watch=true . In that case, the watch stream will begin with synthetic events to produce the current state of objects in the collection. Once all such events have been sent, a synthetic "Bookmark" event will be sent. The bookmark will report the ResourceVersion (RV) corresponding to the set of objects, and be marked with "k8s.io/initial-events-end": "true" annotation. Afterwards, the watch stream will proceed as usual, sending watch events corresponding to changes (subsequent to the RV) to objects watched. When sendInitialEvents option is set, we require resourceVersionMatch option to also be set. The semantic of the watch request is as following: - resourceVersionMatch = NotOlderThan is interpreted as "data at least as new as the provided resourceVersion`" and the bookmark event is send when the state is synced to a `resourceVersion at least as fresh as the one provided by the ListOptions. If resourceVersion is unset, this is interpreted as "consistent read" and the bookmark event is send when the state is synced at least to the moment when request started being processed. - resourceVersionMatch set to any other value or unset Invalid error is returned. Defaults to true if resourceVersion="" or resourceVersion="0" (for backward compatibility reasons) and to false otherwise. 
timeoutSeconds integer Timeout for the list/watch call. This limits the duration of the call, regardless of any activity or inactivity. Table 8.8. Body parameters Parameter Type Description body DeleteOptions schema Table 8.9. HTTP responses HTTP code Reponse body 200 - OK Status schema 401 - Unauthorized Empty HTTP method GET Description list or watch objects of kind Secret Table 8.10. Query parameters Parameter Type Description allowWatchBookmarks boolean allowWatchBookmarks requests watch events with type "BOOKMARK". Servers that do not implement bookmarks may ignore this flag and bookmarks are sent at the server's discretion. Clients should not assume bookmarks are returned at any specific interval, nor may they assume the server will send any BOOKMARK event during a session. If this is not a watch, this field is ignored. continue string The continue option should be set when retrieving more results from the server. Since this value is server defined, clients may only use the continue value from a query result with identical query parameters (except for the value of continue) and the server may reject a continue value it does not recognize. If the specified continue value is no longer valid whether due to expiration (generally five to fifteen minutes) or a configuration change on the server, the server will respond with a 410 ResourceExpired error together with a continue token. If the client needs a consistent list, it must restart their list without the continue field. Otherwise, the client may send another list request with the token received with the 410 error, the server will respond with a list starting from the key, but from the latest snapshot, which is inconsistent from the list results - objects that are created, modified, or deleted after the first list request will be included in the response, as long as their keys are after the " key". This field is not supported when watch is true. Clients may start a watch from the last resourceVersion value returned by the server and not miss any modifications. fieldSelector string A selector to restrict the list of returned objects by their fields. Defaults to everything. labelSelector string A selector to restrict the list of returned objects by their labels. Defaults to everything. limit integer limit is a maximum number of responses to return for a list call. If more items exist, the server will set the continue field on the list metadata to a value that can be used with the same initial query to retrieve the set of results. Setting a limit may return fewer than the requested amount of items (up to zero items) in the event all requested objects are filtered out and clients should only use the presence of the continue field to determine whether more results are available. Servers may choose not to support the limit argument and will return all of the available results. If limit is specified and the continue field is empty, clients may assume that no more results are available. This field is not supported if watch is true. The server guarantees that the objects returned when using continue will be identical to issuing a single list call without a limit - that is, no objects created, modified, or deleted after the first request is issued will be included in any subsequent continued requests. This is sometimes referred to as a consistent snapshot, and ensures that a client that is using limit to receive smaller chunks of a very large result can ensure they see all possible objects. 
If objects are updated during a chunked list the version of the object that was present at the time the first list result was calculated is returned. resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset resourceVersionMatch string resourceVersionMatch determines how resourceVersion is applied to list calls. It is highly recommended that resourceVersionMatch be set for list calls where resourceVersion is set See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset sendInitialEvents boolean sendInitialEvents=true may be set together with watch=true . In that case, the watch stream will begin with synthetic events to produce the current state of objects in the collection. Once all such events have been sent, a synthetic "Bookmark" event will be sent. The bookmark will report the ResourceVersion (RV) corresponding to the set of objects, and be marked with "k8s.io/initial-events-end": "true" annotation. Afterwards, the watch stream will proceed as usual, sending watch events corresponding to changes (subsequent to the RV) to objects watched. When sendInitialEvents option is set, we require resourceVersionMatch option to also be set. The semantic of the watch request is as following: - resourceVersionMatch = NotOlderThan is interpreted as "data at least as new as the provided resourceVersion`" and the bookmark event is send when the state is synced to a `resourceVersion at least as fresh as the one provided by the ListOptions. If resourceVersion is unset, this is interpreted as "consistent read" and the bookmark event is send when the state is synced at least to the moment when request started being processed. - resourceVersionMatch set to any other value or unset Invalid error is returned. Defaults to true if resourceVersion="" or resourceVersion="0" (for backward compatibility reasons) and to false otherwise. timeoutSeconds integer Timeout for the list/watch call. This limits the duration of the call, regardless of any activity or inactivity. watch boolean Watch for changes to the described resources and return them as a stream of add, update, and remove notifications. Specify resourceVersion. Table 8.11. HTTP responses HTTP code Reponse body 200 - OK SecretList schema 401 - Unauthorized Empty HTTP method POST Description create a Secret Table 8.12. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. 
- Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 8.13. Body parameters Parameter Type Description body Secret schema Table 8.14. HTTP responses HTTP code Reponse body 200 - OK Secret schema 201 - Created Secret schema 202 - Accepted Secret schema 401 - Unauthorized Empty 8.2.4. /api/v1/watch/namespaces/{namespace}/secrets Table 8.15. Global path parameters Parameter Type Description namespace string object name and auth scope, such as for teams and projects Table 8.16. Global query parameters Parameter Type Description allowWatchBookmarks boolean allowWatchBookmarks requests watch events with type "BOOKMARK". Servers that do not implement bookmarks may ignore this flag and bookmarks are sent at the server's discretion. Clients should not assume bookmarks are returned at any specific interval, nor may they assume the server will send any BOOKMARK event during a session. If this is not a watch, this field is ignored. continue string The continue option should be set when retrieving more results from the server. Since this value is server defined, clients may only use the continue value from a query result with identical query parameters (except for the value of continue) and the server may reject a continue value it does not recognize. If the specified continue value is no longer valid whether due to expiration (generally five to fifteen minutes) or a configuration change on the server, the server will respond with a 410 ResourceExpired error together with a continue token. If the client needs a consistent list, it must restart their list without the continue field. Otherwise, the client may send another list request with the token received with the 410 error, the server will respond with a list starting from the key, but from the latest snapshot, which is inconsistent from the list results - objects that are created, modified, or deleted after the first list request will be included in the response, as long as their keys are after the " key". This field is not supported when watch is true. Clients may start a watch from the last resourceVersion value returned by the server and not miss any modifications. fieldSelector string A selector to restrict the list of returned objects by their fields. Defaults to everything. labelSelector string A selector to restrict the list of returned objects by their labels. Defaults to everything. limit integer limit is a maximum number of responses to return for a list call. If more items exist, the server will set the continue field on the list metadata to a value that can be used with the same initial query to retrieve the set of results. Setting a limit may return fewer than the requested amount of items (up to zero items) in the event all requested objects are filtered out and clients should only use the presence of the continue field to determine whether more results are available. Servers may choose not to support the limit argument and will return all of the available results. 
If limit is specified and the continue field is empty, clients may assume that no more results are available. This field is not supported if watch is true. The server guarantees that the objects returned when using continue will be identical to issuing a single list call without a limit - that is, no objects created, modified, or deleted after the first request is issued will be included in any subsequent continued requests. This is sometimes referred to as a consistent snapshot, and ensures that a client that is using limit to receive smaller chunks of a very large result can ensure they see all possible objects. If objects are updated during a chunked list the version of the object that was present at the time the first list result was calculated is returned. pretty string If 'true', then the output is pretty printed. resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset resourceVersionMatch string resourceVersionMatch determines how resourceVersion is applied to list calls. It is highly recommended that resourceVersionMatch be set for list calls where resourceVersion is set See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset sendInitialEvents boolean sendInitialEvents=true may be set together with watch=true . In that case, the watch stream will begin with synthetic events to produce the current state of objects in the collection. Once all such events have been sent, a synthetic "Bookmark" event will be sent. The bookmark will report the ResourceVersion (RV) corresponding to the set of objects, and be marked with "k8s.io/initial-events-end": "true" annotation. Afterwards, the watch stream will proceed as usual, sending watch events corresponding to changes (subsequent to the RV) to objects watched. When sendInitialEvents option is set, we require resourceVersionMatch option to also be set. The semantic of the watch request is as following: - resourceVersionMatch = NotOlderThan is interpreted as "data at least as new as the provided resourceVersion`" and the bookmark event is send when the state is synced to a `resourceVersion at least as fresh as the one provided by the ListOptions. If resourceVersion is unset, this is interpreted as "consistent read" and the bookmark event is send when the state is synced at least to the moment when request started being processed. - resourceVersionMatch set to any other value or unset Invalid error is returned. Defaults to true if resourceVersion="" or resourceVersion="0" (for backward compatibility reasons) and to false otherwise. timeoutSeconds integer Timeout for the list/watch call. This limits the duration of the call, regardless of any activity or inactivity. watch boolean Watch for changes to the described resources and return them as a stream of add, update, and remove notifications. Specify resourceVersion. HTTP method GET Description watch individual changes to a list of Secret. deprecated: use the 'watch' parameter with a list operation instead. Table 8.17. HTTP responses HTTP code Reponse body 200 - OK WatchEvent schema 401 - Unauthorized Empty 8.2.5. /api/v1/namespaces/{namespace}/secrets/{name} Table 8.18. Global path parameters Parameter Type Description name string name of the Secret namespace string object name and auth scope, such as for teams and projects Table 8.19. 
Global query parameters Parameter Type Description pretty string If 'true', then the output is pretty printed. HTTP method DELETE Description delete a Secret Table 8.20. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed gracePeriodSeconds integer The duration in seconds before the object should be deleted. Value must be non-negative integer. The value zero indicates delete immediately. If this value is nil, the default grace period for the specified type will be used. Defaults to a per object value if not specified. zero means delete immediately. orphanDependents boolean Deprecated: please use the PropagationPolicy, this field will be deprecated in 1.7. Should the dependent objects be orphaned. If true/false, the "orphan" finalizer will be added to/removed from the object's finalizers list. Either this field or PropagationPolicy may be set, but not both. propagationPolicy string Whether and how garbage collection will be performed. Either this field or OrphanDependents may be set, but not both. The default policy is decided by the existing finalizer set in the metadata.finalizers and the resource-specific default policy. Acceptable values are: 'Orphan' - orphan the dependents; 'Background' - allow the garbage collector to delete the dependents in the background; 'Foreground' - a cascading policy that deletes all dependents in the foreground. Table 8.21. Body parameters Parameter Type Description body DeleteOptions schema Table 8.22. HTTP responses HTTP code Reponse body 200 - OK Status schema 202 - Accepted Status schema 401 - Unauthorized Empty HTTP method GET Description read the specified Secret Table 8.23. HTTP responses HTTP code Reponse body 200 - OK Secret schema 401 - Unauthorized Empty HTTP method PATCH Description partially update the specified Secret Table 8.24. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . This field is required for apply requests (application/apply-patch) but optional for non-apply patch types (JsonPatch, MergePatch, StrategicMergePatch). fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. 
This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. force boolean Force is going to "force" Apply requests. It means user will re-acquire conflicting fields owned by other people. Force flag must be unset for non-apply patch requests. Table 8.25. Body parameters Parameter Type Description body Patch schema Table 8.26. HTTP responses HTTP code Reponse body 200 - OK Secret schema 201 - Created Secret schema 401 - Unauthorized Empty HTTP method PUT Description replace the specified Secret Table 8.27. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 8.28. Body parameters Parameter Type Description body Secret schema Table 8.29. HTTP responses HTTP code Reponse body 200 - OK Secret schema 201 - Created Secret schema 401 - Unauthorized Empty 8.2.6. /api/v1/watch/namespaces/{namespace}/secrets/{name} Table 8.30. Global path parameters Parameter Type Description name string name of the Secret namespace string object name and auth scope, such as for teams and projects Table 8.31. Global query parameters Parameter Type Description allowWatchBookmarks boolean allowWatchBookmarks requests watch events with type "BOOKMARK". Servers that do not implement bookmarks may ignore this flag and bookmarks are sent at the server's discretion. Clients should not assume bookmarks are returned at any specific interval, nor may they assume the server will send any BOOKMARK event during a session. If this is not a watch, this field is ignored. continue string The continue option should be set when retrieving more results from the server. Since this value is server defined, clients may only use the continue value from a query result with identical query parameters (except for the value of continue) and the server may reject a continue value it does not recognize. 
If the specified continue value is no longer valid whether due to expiration (generally five to fifteen minutes) or a configuration change on the server, the server will respond with a 410 ResourceExpired error together with a continue token. If the client needs a consistent list, it must restart their list without the continue field. Otherwise, the client may send another list request with the token received with the 410 error, the server will respond with a list starting from the key, but from the latest snapshot, which is inconsistent from the list results - objects that are created, modified, or deleted after the first list request will be included in the response, as long as their keys are after the " key". This field is not supported when watch is true. Clients may start a watch from the last resourceVersion value returned by the server and not miss any modifications. fieldSelector string A selector to restrict the list of returned objects by their fields. Defaults to everything. labelSelector string A selector to restrict the list of returned objects by their labels. Defaults to everything. limit integer limit is a maximum number of responses to return for a list call. If more items exist, the server will set the continue field on the list metadata to a value that can be used with the same initial query to retrieve the set of results. Setting a limit may return fewer than the requested amount of items (up to zero items) in the event all requested objects are filtered out and clients should only use the presence of the continue field to determine whether more results are available. Servers may choose not to support the limit argument and will return all of the available results. If limit is specified and the continue field is empty, clients may assume that no more results are available. This field is not supported if watch is true. The server guarantees that the objects returned when using continue will be identical to issuing a single list call without a limit - that is, no objects created, modified, or deleted after the first request is issued will be included in any subsequent continued requests. This is sometimes referred to as a consistent snapshot, and ensures that a client that is using limit to receive smaller chunks of a very large result can ensure they see all possible objects. If objects are updated during a chunked list the version of the object that was present at the time the first list result was calculated is returned. pretty string If 'true', then the output is pretty printed. resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset resourceVersionMatch string resourceVersionMatch determines how resourceVersion is applied to list calls. It is highly recommended that resourceVersionMatch be set for list calls where resourceVersion is set See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset sendInitialEvents boolean sendInitialEvents=true may be set together with watch=true . In that case, the watch stream will begin with synthetic events to produce the current state of objects in the collection. Once all such events have been sent, a synthetic "Bookmark" event will be sent. The bookmark will report the ResourceVersion (RV) corresponding to the set of objects, and be marked with "k8s.io/initial-events-end": "true" annotation. 
Afterwards, the watch stream will proceed as usual, sending watch events corresponding to changes (subsequent to the RV) to objects watched. When the sendInitialEvents option is set, the resourceVersionMatch option must also be set. The semantics of the watch request are as follows: - resourceVersionMatch = NotOlderThan is interpreted as "data at least as new as the provided resourceVersion" and the bookmark event is sent when the state is synced to a resourceVersion at least as fresh as the one provided by the ListOptions. If resourceVersion is unset, this is interpreted as "consistent read" and the bookmark event is sent when the state is synced at least to the moment when the request started being processed. - resourceVersionMatch set to any other value or unset: an Invalid error is returned. Defaults to true if resourceVersion="" or resourceVersion="0" (for backward compatibility reasons) and to false otherwise. timeoutSeconds integer Timeout for the list/watch call. This limits the duration of the call, regardless of any activity or inactivity. watch boolean Watch for changes to the described resources and return them as a stream of add, update, and remove notifications. Specify resourceVersion. HTTP method GET Description watch changes to an object of kind Secret. deprecated: use the 'watch' parameter with a list operation instead, filtered to a single item with the 'fieldSelector' parameter. Table 8.32. HTTP responses HTTP code Response body 200 - OK WatchEvent schema 401 - Unauthorized Empty
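For orientation, the following shell sketch shows how the list, watch, and create operations documented above might be called with curl directly against the API server. The server address (https://api.example.com:6443), namespace (demo), Secret name (my-secret), and the bearer token in $TOKEN are placeholders introduced for illustration; they are not values from this reference, and an equivalent oc command can be used instead.
# List Secrets in a namespace (GET /api/v1/namespaces/{namespace}/secrets)
curl -sk -H "Authorization: Bearer $TOKEN" \
  "https://api.example.com:6443/api/v1/namespaces/demo/secrets?limit=5"
# Watch a single Secret by combining watch=true with a fieldSelector, as recommended above
curl -sk -H "Authorization: Bearer $TOKEN" \
  "https://api.example.com:6443/api/v1/namespaces/demo/secrets?watch=true&fieldSelector=metadata.name%3Dmy-secret"
# Create a Secret (POST /api/v1/namespaces/{namespace}/secrets), first as a dry run
curl -sk -X POST -H "Authorization: Bearer $TOKEN" -H "Content-Type: application/json" \
  -d '{"apiVersion":"v1","kind":"Secret","metadata":{"name":"my-secret"},"stringData":{"key":"value"}}' \
  "https://api.example.com:6443/api/v1/namespaces/demo/secrets?dryRun=All"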
null
https://docs.redhat.com/en/documentation/openshift_container_platform/4.14/html/security_apis/secret-v1
Chapter 1. Overview of Streams for Apache Kafka
Chapter 1. Overview of Streams for Apache Kafka Streams for Apache Kafka supports highly scalable, distributed, and high-performance data streaming based on the Apache Kafka project. The main components comprise: Kafka Broker Messaging broker responsible for delivering records from producing clients to consuming clients. Kafka Streams API API for writing stream processor applications. Producer and Consumer APIs Java-based APIs for producing and consuming messages to and from Kafka brokers. Kafka Bridge Streams for Apache Kafka Bridge provides a RESTful interface that allows HTTP-based clients to interact with a Kafka cluster. Kafka Connect A toolkit for streaming data between Kafka brokers and other systems using Connector plugins. Kafka MirrorMaker Replicates data between two Kafka clusters, within or across data centers. Kafka Exporter An exporter used in the extraction of Kafka metrics data for monitoring. A cluster of Kafka brokers is the hub connecting all these components. Figure 1.1. Streams for Apache Kafka architecture 1.1. Using the Kafka Bridge to connect with a Kafka cluster You can use the Kafka Bridge API to create and manage consumers and send and receive records over HTTP rather than the native Kafka protocol. When you set up the Kafka Bridge, you configure HTTP access to the Kafka cluster. You can then use the Kafka Bridge to produce and consume messages from the cluster, as well as perform other operations through its REST interface. Additional resources For information on installing and using the Kafka Bridge, see Using the Kafka Bridge . 1.2. Document conventions User-replaced values User-replaced values, also known as replaceables , are shown with angle brackets (< >). Underscores ( _ ) are used for multi-word values. If the value refers to code or commands, monospace is also used. For example, the following code shows that <broker_host> , <port> , and <topic_name> must be replaced with your own broker host, port, and topic name: bin/kafka-console-consumer.sh --bootstrap-server <broker_host>:<port> --topic <topic_name> --from-beginning
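To make the Kafka Bridge description above more concrete, the sketch below shows the general shape of producing and consuming records over HTTP with curl. It is an illustrative sketch only: the bridge address (localhost:8080), topic (my-topic), consumer group (my-group), and consumer name (my-consumer) are placeholders, and the endpoint paths and content types shown should be verified against the Using the Kafka Bridge guide for your version.
# Produce two JSON records to a topic through the bridge REST interface
curl -s -X POST http://localhost:8080/topics/my-topic \
  -H 'Content-Type: application/vnd.kafka.json.v2+json' \
  -d '{"records":[{"key":"k1","value":"hello"},{"value":"world"}]}'
# Create a consumer in a group, subscribe it to the topic, then poll for records
curl -s -X POST http://localhost:8080/consumers/my-group \
  -H 'Content-Type: application/vnd.kafka.v2+json' \
  -d '{"name":"my-consumer","format":"json","auto.offset.reset":"earliest"}'
curl -s -X POST http://localhost:8080/consumers/my-group/instances/my-consumer/subscription \
  -H 'Content-Type: application/vnd.kafka.v2+json' \
  -d '{"topics":["my-topic"]}'
curl -s http://localhost:8080/consumers/my-group/instances/my-consumer/records \
  -H 'Accept: application/vnd.kafka.json.v2+json'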
[ "bin/kafka-console-consumer.sh --bootstrap-server <broker_host>:<port> --topic <topic_name> --from-beginning" ]
https://docs.redhat.com/en/documentation/red_hat_streams_for_apache_kafka/2.9/html/using_streams_for_apache_kafka_on_rhel_with_zookeeper/overview-str
Chapter 32. The XPath Language
Chapter 32. The XPath Language Abstract When processing XML messages, the XPath language enables you to select part of a message, by specifying an XPath expression that acts on the message's Document Object Model (DOM). You can also define XPath predicates to test the contents of an element or an attribute. 32.1. Java DSL Basic expressions You can use xpath(" Expression ") to evaluate an XPath expression on the current exchange (where the XPath expression is applied to the body of the current In message). The result of the xpath() expression is an XML node (or node set, if more than one node matches). For example, to extract the contents of the /person/name element from the current In message body and use it to set a header named user , you could define a route like the following: Instead of specifying xpath() as an argument to setHeader() , you can use the fluent builder xpath() command - for example: If you want to convert the result to a specific type, specify the result type as the second argument of xpath() . For example, to specify explicitly that the result type is String : Namespaces Typically, XML elements belong to a schema, which is identified by a namespace URI. When processing documents like this, it is necessary to associate namespace URIs with prefixes, so that you can identify element names unambiguously in your XPath expressions. Apache Camel provides the helper class, org.apache.camel.builder.xml.Namespaces , which enables you to define associations between namespaces and prefixes. For example, to associate the prefix, cust , with the namespace, http://acme.com/customer/record , and then extract the contents of the element, /cust:person/cust:name , you could define a route like the following: Where you make the namespace definitions available to the xpath() expression builder by passing the Namespaces object, ns , as an additional argument. If you need to define multiple namespaces, use the Namespace.add() method, as follows: If you need to specify the result type and define namespaces, you can use the three-argument form of xpath() , as follows: Auditing namespaces One of the most frequent problems that can occur when using XPath expressions is that there is a mismatch between the namespaces appearing in the incoming messages and the namespaces used in the XPath expression. To help you troubleshoot this kind of problem, the XPath language supports an option to dump all of the namespaces from all of the incoming messages into the system log. To enable namespace logging at the INFO log level, enable the logNamespaces option in the Java DSL, as follows: Alternatively, you could configure your logging system to enable TRACE level logging on the org.apache.camel.builder.xml.XPathBuilder logger. When namespace logging is enabled, you will see log messages like the following for each processed message: 32.2. XML DSL Basic expressions To evaluate an XPath expression in the XML DSL, put the XPath expression inside an xpath element. The XPath expression is applied to the body of the current In message and returns an XML node (or node set). Typically, the returned XML node is automatically converted to a string. For example, to extract the contents of the /person/name element from the current In message body and use it to set a header named user , you could define a route like the following: If you want to convert the result to a specific type, specify the result type by setting the resultType attribute to a Java type name (where you must specify the fully-qualified type name). 
For example, to specify explicitly that the result type is java.lang.String (you can omit the java.lang. prefix here): Namespaces When processing documents whose elements belong to one or more XML schemas, it is typically necessary to associate namespace URIs with prefixes, so that you can identify element names unambiguously in your XPath expressions. It is possible to use the standard XML mechanism for associating prefixes with namespace URIs. That is, you can set an attribute like this: xmlns: Prefix =" NamespaceURI " . For example, to associate the prefix, cust , with the namespace, http://acme.com/customer/record , and then extract the contents of the element, /cust:person/cust:name , you could define a route like the following: Auditing namespaces One of the most frequent problems that can occur when using XPath expressions is that there is a mismatch between the namespaces appearing in the incoming messages and the namespaces used in the XPath expression. To help you troubleshoot this kind of problem, the XPath language supports an option to dump all of the namespaces from all of the incoming messages into the system log. To enable namespace logging at the INFO log level, enable the logNamespaces option in the XML DSL, as follows: Alternatively, you could configure your logging system to enable TRACE level logging on the org.apache.camel.builder.xml.XPathBuilder logger. When namespace logging is enabled, you will see log messages like the following for each processed message: 32.3. XPath Injection Parameter binding annotation When using Apache Camel bean integration to invoke a method on a Java bean, you can use the @XPath annotation to extract a value from the exchange and bind it to a method parameter. For example, consider the following route fragment, which invokes the credit method on an AccountService object: The credit method uses parameter binding annotations to extract relevant data from the message body and inject it into its parameters, as follows: For more information, see Bean Integration in the Apache Camel Development Guide on the customer portal. Namespaces Table 32.1, "Predefined Namespaces for @XPath" shows the namespaces that are predefined for XPath. You can use these namespace prefixes in the XPath expression that appears in the @XPath annotation. Table 32.1. Predefined Namespaces for @XPath Namespace URI Prefix http://www.w3.org/2001/XMLSchema xsd http://www.w3.org/2003/05/soap-envelope soap Custom namespaces You can use the @NamespacePrefix annotation to define custom XML namespaces. Invoke the @NamespacePrefix annotation to initialize the namespaces argument of the @XPath annotation. The namespaces defined by @NamespacePrefix can then be used in the @XPath annotation's expression value. For example, to associate the prefix, ex , with the custom namespace, http://fusesource.com/examples , invoke the @XPath annotation as follows: 32.4. XPath Builder Overview The org.apache.camel.builder.xml.XPathBuilder class enables you to evaluate XPath expressions independently of an exchange. That is, if you have an XML fragment from any source, you can use XPathBuilder to evaluate an XPath expression on the XML fragment. Matching expressions Use the matches() method to check whether one or more XML nodes can be found that match the given XPath expression. 
The basic syntax for matching an XPath expression using XPathBuilder is as follows: Where the given expression, Expression , is evaluated against the XML fragment, XMLString , and the result is true, if at least one node is found that matches the expression. For example, the following example returns true , because the XPath expression finds a match in the xyz attribute. Evaluating expressions Use the evaluate() method to return the contents of the first node that matches the given XPath expression. The basic syntax for evaluating an XPath expression using XPathBuilder is as follows: You can also specify the result type by passing the required type as the second argument to evaluate() - for example: 32.5. Enabling Saxon Prerequisites A prerequisite for using the Saxon parser is that you add a dependency on the camel-saxon artifact (either adding this dependency to your Maven POM, if you use Maven, or adding the camel-saxon-2.23.2.fuse-7_13_0-00013-redhat-00001.jar file to your classpath, otherwise). Using the Saxon parser in Java DSL In Java DSL, the simplest way to enable the Saxon parser is to call the saxon() fluent builder method. For example, you could invoke the Saxon parser as shown in the following example: Using the Saxon parser in XML DSL In XML DSL, the simplest way to enable the Saxon parser is to set the saxon attribute to true in the xpath element. For example, you could invoke the Saxon parser as shown in the following example: Programming with Saxon If you want to use the Saxon XML parser in your application code, you can create an instance of the Saxon transformer factory explicitly using the following code: On the other hand, if you prefer to use the generic JAXP API to create a transformer factory instance, you must first set the javax.xml.transform.TransformerFactory property in the ESBInstall /etc/system.properties file, as follows: You can then instantiate the Saxon factory using the generic JAXP API, as follows: If your application depends on any third-party libraries that use Saxon, it might be necessary to use the second, generic approach. Note The Saxon library must be installed in the container as the OSGi bundle, net.sf.saxon/saxon9he (normally installed by default). In versions of Fuse ESB prior to 7.1, it is not possible to load Saxon using the generic JAXP API. 32.6. Expressions Result type By default, an XPath expression returns a list of one or more XML nodes, of org.w3c.dom.NodeList type. You can use the type converter mechanism to convert the result to a different type, however. In the Java DSL, you can specify the result type in the second argument of the xpath() command. For example, to return the result of an XPath expression as a String : In the XML DSL, you can specify the result type in the resultType attribute, as follows: Patterns in location paths You can use the following patterns in XPath location paths: /people/person The basic location path specifies the nested location of a particular element. That is, the preceding location path would match the person element in the following XML fragment: Note that this basic pattern can match multiple nodes - for example, if there is more than one person element inside the people element. /name/text() If you just want to access the text inside by the element, append /text() to the location path, otherwise the node includes the element's start and end tags (and these tags would be included when you convert the node to a string). 
/person/telephone/@isDayTime To select the value of an attribute, AttributeName , use the syntax @ AttributeName . For example, the preceding location path returns true when applied to the following XML fragment: * A wildcard that matches all elements in the specified scope. For example, /people/person/* matches all the child elements of person . @* A wildcard that matches all attributes of the matched elements. For example, /person/name/@* matches all attributes of every matched name element. // Match the location path at every nesting level. For example, the //name pattern matches every name element highlighted in the following XML fragment: .. Selects the parent of the current context node. Not normally useful in the Apache Camel XPath language, because the current context node is the document root, which has no parent. node() Match any kind of node. text() Match a text node. comment() Match a comment node. processing-instruction() Match a processing-instruction node. Predicate filters You can filter the set of nodes matching a location path by appending a predicate in square brackets, [ Predicate ] . For example, you can select the N th node from the list of matches by appending [ N ] to a location path. The following expression selects the first matching person element: The following expression selects the second-last person element: You can test the value of attributes in order to select elements with particular attribute values. The following expression selects the name elements whose surname attribute is either Strachan or Davies: You can combine predicate expressions using any of the conjunctions and , or , not() , and you can compare expressions using the comparators, = , != , > , >= , < , <= (in practice, the less-than symbol must be replaced by the &lt; entity). You can also use XPath functions in the predicate filter. Axes When you consider the structure of an XML document, the root element contains a sequence of children, and some of those child elements contain further children, and so on. Looked at in this way, where nested elements are linked together by the child-of relationship, the whole XML document has the structure of a tree . Now, if you choose a particular node in this element tree (call it the context node ), you might want to refer to different parts of the tree relative to the chosen node. For example, you might want to refer to the children of the context node, to the parent of the context node, or to all of the nodes that share the same parent as the context node ( sibling nodes ). An XPath axis is used to specify the scope of a node match, restricting the search to a particular part of the node tree, relative to the current context node. The axis is attached as a prefix to the node name that you want to match, using the syntax, AxisType :: MatchingNode . For example, you can use the child:: axis to search the children of the current context node, as follows: The context node of child::item is the items element that is selected by the path, /invoice/items . The child:: axis restricts the search to the children of the context node, items , so that child::item matches the children of items that are named item . As a matter of fact, the child:: axis is the default axis, so the preceding example can be written equivalently as: But there are several other axes (13 in all), some of which you have already seen in abbreviated form: @ is an abbreviation of attribute:: , and // is an abbreviation of descendant-or-self:: .
The full list of axes is as follows (for details consult the reference below): ancestor ancestor-or-self attribute child descendant descendant-or-self following following-sibling namespace parent preceding preceding-sibling self Functions XPath provides a small set of standard functions, which can be useful when evaluating predicates. For example, to select the last matching node from a node set, you can use the last() function, which returns the index of the last node in a node set, as follows: Where the preceding example selects the last person element in a sequence (in document order). For full details of all the functions that XPath provides, consult the reference below. Reference For full details of the XPath grammar, see the XML Path Language, Version 1.0 specification. 32.7. Predicates Basic predicates You can use xpath in the Java DSL or the XML DSL in a context where a predicate is expected - for example, as the argument to a filter() processor or as the argument to a when() clause. For example, the following route filters incoming messages, allowing a message to pass, only if the /person/city element contains the value, London : The following route evaluates the XPath predicate in a when() clause: XPath predicate operators The XPath language supports the standard XPath predicate operators, as shown in Table 32.2, "Operators for the XPath Language" . Table 32.2. Operators for the XPath Language Operator Description = Equals. != Not equal to. > Greater than. >= Greater than or equals. < Less than. <= Less than or equals. and Combine two predicates with logical and . or Combine two predicates with logical inclusive or . not() Negate predicate argument. 32.8. Using Variables and Functions Evaluating variables in a route When evaluating XPath expressions inside a route, you can use XPath variables to access the contents of the current exchange, as well as O/S environment variables and Java system properties. The syntax to access a variable value is $VarName or $Prefix:VarName , if the variable is accessed through an XML namespace. For example, you can access the In message's body as $in:body and the In message's header value as $in:HeaderName . O/S environment variables can be accessed as $env:EnvVar and Java system properties can be accessed as $system:SysVar . In the following example, the first route extracts the value of the /person/city element and inserts it into the city header. The second route filters exchanges using the XPath expression, $in:city = 'London' , where the $in:city variable is replaced by the value of the city header. Evaluating functions in a route In addition to the standard XPath functions, the XPath language defines additional functions. These additional functions (which are listed in Table 32.4, "XPath Custom Functions" ) can be used to access the underlying exchange, to evaluate a simple expression or to look up a property in the Apache Camel property placeholder component. For example, the following route uses the in:header() function and the in:body() function to access a header and the body from the underlying exchange: Notice the similarity between these functions and the corresponding in:HeaderName or in:body variables. The functions have a slightly different syntax, however: in:header( 'HeaderName' ) instead of in:HeaderName ; and in:body() instead of in:body . Evaluating variables in XPathBuilder You can also use variables in expressions that are evaluated using the XPathBuilder class.
In this case, you cannot use variables such as $in:body or $in:HeaderName , because there is no exchange object to evaluate against. But you can use variables that are defined inline using the variable( Name , Value ) fluent builder method. For example, the following XPathBuilder construction evaluates the $test variable, which is defined to have the value, London : Note that variables defined in this way are automatically entered into the global namespace (for example, the variable, $test , uses no prefix). 32.9. Variable Namespaces Table of namespaces Table 32.3, "XPath Variable Namespaces" shows the namespace URIs that are associated with the various namespace prefixes. Table 32.3. XPath Variable Namespaces Namespace URI Prefix Description http://camel.apache.org/schema/spring None Default namespace (associated with variables that have no namespace prefix). http://camel.apache.org/xml/in/ in Used to reference header or body of the current exchange's In message. http://camel.apache.org/xml/out/ out Used to reference header or body of the current exchange's Out message. http://camel.apache.org/xml/functions/ functions Used to reference some custom functions. http://camel.apache.org/xml/variables/environment-variables env Used to reference O/S environment variables. http://camel.apache.org/xml/variables/system-properties system Used to reference Java system properties. http://camel.apache.org/xml/variables/exchange-property Undefined Used to reference exchange properties. You must define your own prefix for this namespace. 32.10. Function Reference Table of custom functions Table 32.4, "XPath Custom Functions" shows the custom functions that you can use in Apache Camel XPath expressions. These functions can be used in addition to the standard XPath functions. Table 32.4. XPath Custom Functions Function Description in:body() Returns the In message body. in:header( HeaderName ) Returns the In message header with name, HeaderName . out:body() Returns the Out message body. out:header( HeaderName ) Returns the Out message header with name, HeaderName . function:properties( PropKey ) Looks up a property with the key, PropKey . function:simple( SimpleExp ) Evaluates the specified simple expression, SimpleExp .
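As a quick way to try out the location paths, predicates, and functions described in this chapter before wiring them into a route, you can evaluate them against a sample document from the command line. The sketch below uses xmllint from libxml2, which is not part of Apache Camel and is assumed to be installed; the people.xml document is invented purely for illustration.
# Sample document used only for this sketch
cat > people.xml <<'EOF'
<people>
  <person>
    <name surname="Strachan">James</name>
    <city>London</city>
  </person>
  <person>
    <name surname="Davies">Rob</name>
    <city>Bristol</city>
  </person>
</people>
EOF
# Basic location path, returning only the text node
xmllint --xpath '/people/person[1]/name/text()' people.xml
# Predicate filter on an attribute value
xmllint --xpath '/people/person/name[@surname="Strachan" or @surname="Davies"]' people.xml
# The last() function selects the final matching node in document order
xmllint --xpath '/people/person[last()]/city/text()' people.xml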
[ "from(\"queue:foo\") .setHeader(\"user\", xpath(\"/person/name/text()\")) .to(\"direct:tie\");", "from(\"queue:foo\") .setHeader(\"user\").xpath(\"/person/name/text()\") .to(\"direct:tie\");", "xpath(\"/person/name/text()\", String.class)", "import org.apache.camel.builder.xml.Namespaces; Namespaces ns = new Namespaces(\"cust\", \"http://acme.com/customer/record\"); from(\"queue:foo\") .setHeader(\"user\", xpath(\"/cust:person/cust:name/text()\", ns )) .to(\"direct:tie\");", "import org.apache.camel.builder.xml.Namespaces; Namespaces ns = new Namespaces(\"cust\", \"http://acme.com/customer/record\"); ns.add(\"inv\", \"http://acme.com/invoice\"); ns.add(\"xsi\", \"http://www.w3.org/2001/XMLSchema-instance\");", "xpath(\"/person/name/text()\", String.class, ns)", "xpath(\"/foo:person/@id\", String.class).logNamespaces()", "2012-01-16 13:23:45,878 [stSaxonWithFlag] INFO XPathBuilder - Namespaces discovered in message: {xmlns:a=[http://apache.org/camel], DEFAULT=[http://apache.org/default], xmlns:b=[http://apache.org/camelA, http://apache.org/camelB]}", "<beans ...> <camelContext xmlns=\"http://camel.apache.org/schema/spring\"> <route> <from uri=\"queue:foo\"/> <setHeader headerName=\"user\"> <xpath>/person/name/text()</xpath> </setHeader> <to uri=\"direct:tie\"/> </route> </camelContext> </beans>", "<xpath resultType=\"String\">/person/name/text()</xpath>", "<beans ...> <camelContext xmlns=\"http://camel.apache.org/schema/spring\" xmlns:cust=\"http://acme.com/customer/record\" > <route> <from uri=\"queue:foo\"/> <setHeader headerName=\"user\"> <xpath>/cust:person/cust:name/text()</xpath> </setHeader> <to uri=\"direct:tie\"/> </route> </camelContext> </beans>", "<xpath logNamespaces=\"true\" resultType=\"String\">/foo:person/@id</xpath>", "2012-01-16 13:23:45,878 [stSaxonWithFlag] INFO XPathBuilder - Namespaces discovered in message: {xmlns:a=[http://apache.org/camel], DEFAULT=[http://apache.org/default], xmlns:b=[http://apache.org/camelA, http://apache.org/camelB]}", "from(\"queue:payments\") .beanRef(\"accountService\",\"credit\")", "public class AccountService { public void credit( @XPath(\"/transaction/transfer/receiver/text()\") String name, @XPath(\"/transaction/transfer/amount/text()\") String amount ) { } }", "public class AccountService { public void credit( @XPath( value = \"/ex:transaction/ex:transfer/ex:receiver/text()\", namespaces = @NamespacePrefix( prefix = \"ex\", uri = \"http://fusesource.com/examples\" ) ) String name, @XPath( value = \"/ex:transaction/ex:transfer/ex:amount/text()\", namespaces = @NamespacePrefix( prefix = \"ex\", uri = \"http://fusesource.com/examples\" ) ) String amount, ) { } }", "boolean matches = XPathBuilder .xpath(\" Expression \") .matches(CamelContext, \" XMLString \");", "boolean matches = XPathBuilder .xpath(\"/foo/bar/@xyz\") .matches(getContext(), \"<foo><bar xyz='cheese'/></foo>\"));", "String nodeValue = XPathBuilder .xpath(\" Expression \") .evaluate(CamelContext, \" XMLString \");", "String name = XPathBuilder .xpath(\"foo/bar\") .evaluate(context, \"<foo><bar>cheese</bar></foo>\", String.class); Integer number = XPathBuilder .xpath(\"foo/bar\") .evaluate(context, \"<foo><bar>123</bar></foo>\", Integer.class); Boolean bool = XPathBuilder .xpath(\"foo/bar\") .evaluate(context, \"<foo><bar>true</bar></foo>\", Boolean.class);", "// Java // create a builder to evaluate the xpath using saxon XPathBuilder builder = XPathBuilder.xpath(\"tokenize(/foo/bar, '_')[2]\").saxon(); // evaluate as a String result String result = builder.evaluate(context, 
\"<foo><bar>abc_def_ghi</bar></foo>\");", "<xpath saxon=\"true\" resultType=\"java.lang.String\">current-dateTime()</xpath>", "// Java import javax.xml.transform.TransformerFactory; import net.sf.saxon.TransformerFactoryImpl; TransformerFactory saxonFactory = new net.sf.saxon.TransformerFactoryImpl();", "javax.xml.transform.TransformerFactory=net.sf.saxon.TransformerFactoryImpl", "// Java import javax.xml.transform.TransformerFactory; TransformerFactory factory = TransformerFactory.newInstance();", "xpath(\"/person/name/text()\", String.class)", "<xpath resultType=\"java.lang.String\">/person/name/text()</xpath>", "<people> <person>...</person> </people>", "<person> <telephone isDayTime=\"true\">1234567890</telephone> </person>", "<invoice> <person> < name .../> </person> </invoice> <person> < name .../> </person> < name .../>", "/people/person[1]", "/people/person[last()-1]", "/person/name[@surname=\"Strachan\" or @surname=\"Davies\"]", "/invoice/items/child::item", "/invoice/items/item", "/people/person[last()]", "from(\"direct:tie\") .filter().xpath(\"/person/city = 'London'\").to(\"file:target/messages/uk\");", "from(\"direct:tie\") .choice() .when(xpath(\"/person/city = 'London'\")).to(\"file:target/messages/uk\") .otherwise().to(\"file:target/messages/others\");", "from(\"file:src/data?noop=true\") .setHeader(\"city\").xpath(\"/person/city/text()\") .to(\"direct:tie\"); from(\"direct:tie\") .filter().xpath(\"USDin:city = 'London'\").to(\"file:target/messages/uk\");", "from(\"direct:start\").choice() .when().xpath(\"in:header('foo') = 'bar'\").to(\"mock:x\") .when().xpath(\"in:body() = '<two/>'\").to(\"mock:y\") .otherwise().to(\"mock:z\");", "String var = XPathBuilder.xpath(\"USDtest\") .variable(\"test\", \"London\") .evaluate(getContext(), \"<name>foo</name>\");" ]
https://docs.redhat.com/en/documentation/red_hat_fuse/7.13/html/apache_camel_development_guide/XPath
5.13. Setting and Controlling IP sets using iptables
5.13. Setting and Controlling IP sets using iptables The essential differences between firewalld and the iptables (and ip6tables ) services are: The iptables service stores configuration in /etc/sysconfig/iptables and /etc/sysconfig/ip6tables , while firewalld stores it in various XML files in /usr/lib/firewalld/ and /etc/firewalld/ . Note that the /etc/sysconfig/iptables file does not exist because firewalld is installed by default on Red Hat Enterprise Linux. With the iptables service , every single change means flushing all the old rules and reading all the new rules from /etc/sysconfig/iptables , while with firewalld there is no recreating of all the rules. Only the differences are applied. Consequently, firewalld can change the settings during runtime without existing connections being lost. Both use the iptables tool to talk to the kernel packet filter. To use the iptables and ip6tables services instead of firewalld , first disable firewalld by running the following command as root : Then install the iptables-services package by entering the following command as root : The iptables-services package contains the iptables service and the ip6tables service. Then, to start the iptables and ip6tables services, enter the following commands as root : To enable the services to start on every system start, enter the following commands: The ipset utility is used to administer IP sets in the Linux kernel. An IP set is a framework for storing IP addresses, port numbers, IP and MAC address pairs, or IP address and port number pairs. The sets are indexed in such a way that very fast matching can be made against a set even when the sets are very large. IP sets enable simpler and more manageable configurations as well as providing performance advantages when using iptables . The iptables matches and targets referring to sets create references which protect the given sets in the kernel. A set cannot be destroyed while even a single reference points to it. The use of ipset enables iptables commands, such as those below, to be replaced by a set: The set is created as follows: The set is then referenced in an iptables command as follows: If the set is used more than once, configuration time is saved. If the set contains many entries, processing time is saved.
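Once the my-block-set set and the matching rule from the example above are in place, it can be useful to confirm what the kernel actually holds. The following commands are an illustrative follow-up, assuming the set has been created as shown; the test address 10.1.2.3 is a placeholder, and how the set is persisted across reboots depends on your setup.
# Show the set definition and its current members
ipset list my-block-set
# Check whether a specific source address would match the set
ipset test my-block-set 10.1.2.3
# Confirm that the INPUT chain references the set
iptables -L INPUT -v -n | grep my-block-set
# Export the set so it can be restored later with ipset restore
ipset save my-block-set > my-block-set.dump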
[ "~]# systemctl disable firewalld ~]# systemctl stop firewalld", "~]# yum install iptables-services", "~]# systemctl start iptables ~]# systemctl start ip6tables", "~]# systemctl enable iptables ~]# systemctl enable ip6tables", "~]# iptables -A INPUT -s 10.0.0.0/8 -j DROP ~]# iptables -A INPUT -s 172.16.0.0/12 -j DROP ~]# iptables -A INPUT -s 192.168.0.0/16 -j DROP", "~]# ipset create my-block-set hash:net ~]# ipset add my-block-set 10.0.0.0/8 ~]# ipset add my-block-set 172.16.0.0/12 ~]# ipset add my-block-set 192.168.0.0/16", "~]# iptables -A INPUT -m set --set my-block-set src -j DROP" ]
https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/security_guide/sec-Setting_and_Controlling_IP_sets_using_iptables
Chapter 10. Monitoring Cluster Metrics
Chapter 10. Monitoring Cluster Metrics 10.1. Cluster Level Dashboard This is the default dashboard of the Monitoring interface that shows the overview of the selected cluster. 10.1.1. Monitoring and Viewing Cluster Health To monitor the Cluster health status and the metrics associated with it, view the panels in the Cluster Dashboard. For detailed panel descriptions and health indicators, see Table 10.1. Cluster Health Panel Descriptions . 10.1.1.1. Health and Snapshots The Health panel displays the overall health of the selected cluster and the Snapshots panel shows the number of active snapshots. 10.1.1.2. Hosts, Volumes and Bricks The Hosts, Volumes, and Bricks panels display status information. The following is an example screen displaying the respective status information. Hosts : In total, there are 3 online Hosts Volumes : In total, there are 9 Volumes Bricks : In total, there are 44 Bricks 10.1.1.3. Geo-Replication Session The Geo-Replication Session panel displays geo-replication session information from a given cluster, including the total number of geo-replication sessions and a count of geo-replication sessions by status. 10.1.1.4. Health Panel Descriptions The following table lists the panels and their descriptions. Table 10.1. Cluster Health Panel Descriptions Panel Description Health Indicator Health The Health panel displays the overall health of the selected cluster, which is either Healthy or Unhealthy Green: Healthy Red: Unhealthy Orange: Degraded Snapshots The Snapshots panel displays the count of the active snapshots Hosts The Hosts panel displays host status information including the total number of hosts and a count of hosts by status Volume The Volumes panel displays volume status information for the selected cluster, including the total number of volumes and a count of volumes by status Bricks The Bricks panel displays brick status information for the selected cluster, including the total number of bricks in the cluster, and a count of bricks by status Geo-Replication Session The Geo-Replication Session panel displays geo-replication session information from a given cluster, including the total number of geo-replication sessions and a count of geo-replication sessions by status 10.1.2. Monitoring and Viewing Cluster Performance Cluster performance metrics can be monitored by the data displayed in the following panels. Connection Trend The Connection Trend panel displays the total number of client connections to bricks in the volumes for the selected cluster over a period of time. Typical statistics may look like this: IOPS The IOPS panel displays IOPS for the selected cluster over a period of time. IOPS is based on the aggregated brick level read and write operations collected using gluster volume profile info. Capacity Utilization and Capacity Available The Capacity Utilization panel displays the capacity utilized across all volumes for the selected cluster. The Capacity Available panel displays the available capacity across all volumes for the selected cluster. Weekly Growth Rate The Weekly Growth Rate panel displays the forecasted weekly growth rate for capacity utilization computed based on daily capacity utilization. Weeks Remaining The Weeks Remaining panel displays the estimated time remaining in weeks till volumes reach full capacity based on the forecasted Weekly Growth Rate. Throughput Trend The Throughput Trend panel displays the network throughput for the selected cluster over a period of time. 10.1.3.
Top Consumers The Top Consumers panels displays the highest capacity utilization by the cluster resources. To view the top consumers of the cluster: In the Cluster level dashboard, at the bottom, click Top Consumers to expand the menu. Top 5 Utilization By Bricks The Top 5 Utilization By Bricks panel displays the bricks with the highest capacity utilization. Top 5 Utilization by Volume The Top 5 Utilization By Volumes panel displays the volumes with the highest capacity utilization. CPU Utilization by Host The CPU Utilization by Host panel displays the CPU utilization of each node in the cluster. Memory Utilization By Host The Memory Utilization by Hosts panel displays memory utilization of each node in the cluster. Ping Latency Trend The Ping Latency Trend panel displays the ping latency for each host in a given cluster. 10.1.4. Monitoring and Viewing Cluster Status To view the status of the overall cluster: In the Cluster level dashboard, at the bottom, click Status to expand the menu. The Volume, Host, and Brick status are displayed in the panels. Volume Status The Volume Status panel displays the status code of each volume for the selected cluster. The volume status is displayed in numerals and colors. The following are the corresponding status of the numerals. 0 = Up 3 = Up (Degraded) 4 = Up (Partial) 5 = Unknown 8 = Down Host Status The Host Status panel displays the status code of each host for the selected cluster. The Host status is displayed in numeric codes: 0 = Up 8 = Down Brick Status The Brick Status panel displays the status code of each brick for the selected cluster. The Brick status is displayed in numeric codes: 1 = Started 10 = Stopped 10.2. Host Level Dashboard 10.2.1. Monitoring and Viewing Health and Status To monitor the Cluster Hosts status and the metrics associated with it, navigate to the Hosts Level Dashboard and view the panels. Health The Health panel displays the overall health for a given host. Bricks and Bricks Status The Bricks panel displays brick status information for a given host, including the total number of bricks in the host, and a count of bricks by status. The Brick Status panel displays the status code of each brick for a given host. 1 = Started 10 = Stopped 10.2.2. Monitoring and Viewing Performance 10.2.2.1. Memory and CPU Utilization Memory Available The Memory Available panel displays the sum of memory free and memory cached. Memory Utilization The Memory Utilization panel displays memory utilization percentage for a given host that includes buffers and caches used by the kernel over a period of time. Buffered : Amount of memory used for buffering, mostly for I/O operations Cached : Memory used for caching disk data for reads, memory-mapped files or tmpfs data Slab Rec : Amount of reclaimable memory used for slab kernel allocations Slab Unrecl : Amount of unreclaimable memory used for slab kernel allocations Used : Amount of memory used, calculated as Total - Free (Unused Memory) - Buffered - Cache Total : Total memory used Swap Free The Swap Free panel displays the available swap space in percent for a given host. Swap Utilization The Swap Utilization panel displays the used swap space in percent for a given host. CPU Utilization The CPU utilization panel displays the CPU utilization for a given host over a period of time. IOPS The IOPS panel displays IOPS for a given host over a period of time. IOPS is based on the aggregated brick level read and write operations. 10.2.2.2. 
Capacity and Disk Load Total Brick Capacity Utilization Trend The Total Brick Capacity Utilization Trend panel displays the capacity utilization for all bricks on a given for a period of time. Total Brick Capacity Utilization The Total Brick Capacity Utilization panel displays the current percent capacity utilization for a given host. Total Brick Capacity Available The Total Brick Capacity Available panel displays the current available capacity for a given host. Weekly Growth Rate The Weekly Growth Rate panel displays the forecasted weekly growth rate for capacity utilization computed based on daily capacity utilization. Weeks Remaining The Weeks Remaining panel displays the estimated time remaining in weeks till host capacity reaches full capacity based on the forecasted Weekly Growth Rate. Brick Utilization The Brick Utilization panel displays the utilization of each brick for a given host. Brick Capacity The Brick Capacity panel displays the total capacity of each brick for a given host. Brick Capacity Used The Brick Capacity Used panel displays the used capacity of each brick for a given host. Disk Load The Disk Load panel shows the host's aggregated read and writes from/to disks over a period of time. Disk Operation The Disk Operations panel shows the host's aggregated read and writes disk operations over a period of time. Disk IO The Disk IO panel shows the host's aggregated I/O time over a period of time. 10.2.2.3. Network Throughput The Throughput panel displays the network throughput for a given host over a period of time. Dropped Packets Per Second The Dropped Packets Per Second panel displays dropped network packets for the host over a period of time. Typically, dropped packets indicates network congestion, for example, the queue on the switch port your host is connected to is full and packets are dropped because it cannot transmit data fast enough. Errors Per Second The Errors Per Second panel displays network errors for a given host over a period of time. Typically, the errors indicate issues that occurred while transmitting packets due to carrier errors (duplex mismatch, faulty cable), fifo errors, heartbeat errors, and window errors, CRC errors too short frames, and/or too long frames. In short, errors typically result from faulty hardware, and/or speed mismatch. 10.2.3. Host Dashboard Metric Units The following table shows the metrics and their corresponding measurement units. Table 10.2. Host Dashboard Metric Units Metrics Units Memory Available Megabyte/Gigabyte/Terabyte Memory Utilization Percentage % Swap free Percentage % Swap Utilization Percentage % CPU Utilization Percentage % Total Brick Capacity Utilization Percentage % Total Brick Capacity MB/GB/TB Weekly Growth Rate MB/GB/TB Disk Load kbps Disk IO millisecond ms Network Throughput kbps 10.3. Volume Level Dashboard The Volume view dashboard allows the Gluster Administrator to: View at-a-glance information about the Gluster volume that includes health and status information, key performance indicators such as IOPS, throughput, etc, and alerts that can highlight attention to potential issues in the volume, brick, and disk. Compare 1 or more metrics such as IOPS, CPU, Memory, Network Load across bricks within the volume. Compare utilization such as IOPS, capacity, etc, across bricks within a volume. View performance metrics by brick (within a volume) to address diagnosing of failure, rebuild, degradation, and poor performance on one brick. 
When all the Gluster storage nodes are shut down or offline, Time to live (TTL) will delete the volume details from etcd as per the TTL value measured in seconds. The TTL value for volumes is set based on the number of volumes and bricks in the system. The formula to calculate the TTL value to delete volume details is: Time to Live (seconds) = synchronization interval (60 seconds) + number of volumes * 20 + number of bricks * 10 + 160 . In Web Administration environment Cluster will show status as unhealthy and all hosts will be marked as down No display of Volumes and Bricks The Events view will reflect the relevant status In Grafana Dashboard In Cluster level Dashboard, the Host, Volumes, and Bricks panels reflects the relevant updated counts with status. In Cluster, Volume, and Brick level dashboards, some panels will be marked as N/A, indicating no data is available. 10.3.1. Monitoring and Viewing Health Health The Health panel displays the overall health for a given volume. Snapshots The Snapshots panel displays the count of active snapshots for the selected cluster. Brick Status The Brick Status panel displays the status code of each brick for a given volume. 1 = Started 10 = Stopped Bricks The Bricks panel displays brick status information for a given volume, including the total number of bricks in the volume, and a count of bricks by status. Subvolumes The Subvolumes panel displays subvolume status information for a given volume. Geo-Replication Sessions The Geo-Replication Session panel displays geo-replication session information from a given volumes, including the total number of geo-replication session and a count of geo-replication sessions by status. Rebalance The Rebalance panel displays rebalance progress information for a given volume, which is applicable when rebalancing is underway. Rebalance Status: The Rebalance Status panel displays the status of rebalancing for a given volume, which is applicable when rebalancing is underway. 10.3.2. Monitoring and Viewing Performance Capacity Utilization The Capacity Utilization panel displays the used capacity for a given volume. Capacity Available The Capacity Available panel displays the available capacity for a given volume. Weekly Growth Rate The Weekly Growth Rate panel displays the forecasted weekly growth rate for capacity utilization computed based on daily capacity utilization. Weeks Remaining The Weeks Remaining panel displays the estimated time remaining in weeks till volume reaches full capacity based on the forecasted Weekly Growth Rate. Capacity Utilization Trend The Capacity Utilization Trend panel displays the volume capacity utilization over a period of time. Inode Utilization The Inode Utilization panel displays inodes used for bricks in the volume over a period of time. Inode Available The Inode Available panel displays inodes free for bricks in the volume. Throughput The Throughput panel displays volume throughput based on brick-level read and write operations fetched using gluster volume profile . LVM Thin Pool Metadata % The LVM Thin Pool Metadata % panel displays the utilization of LVM thin pool metadata for a given volume. Monitoring the utilization of LVM thin pool metadata and data usage is important to ensure they do not run out of space. If the data space is exhausted, I/O operations are either queued or failing based on the configuration. If metadata space is exhausted, you will observe error I/O's until the LVM pool is taken offline and repair is performed to fix potential inconsistencies. 
Moreover, due to the metadata transaction being aborted and the pool doing caching there might be uncommitted (to disk) I/O operations that were acknowledged to the upper storage layers (file system) so those layers will need to have checks/repairs performed as well. LVM Thin Pool Data Usage % The LVM Thin Pool Data Usage % panel displays the LVM thin pool data usage for a given volume. Monitoring the utilization of LVM thin pool metadata and data usage is important to ensure they do not run out of space. If the data space is exhausted , I/O operations are either queued or failing based on the configuration. If metadata space is exhausted, you will observe error I/O's until the LVM pool is taken offline and repair is performed to fix potential inconsistencies. Moreover, due to the metadata transaction being aborted and the pool doing caching there might be uncommitted (to disk) I/O operations that were acknowledged to the upper storage layers (file system) so those layers will need to have checks/repairs performed as well. 10.3.3. Monitoring File Operations Top File Operations The Top File Operations panel displays the top 5 FOP (file operations) with the highest % latency, wherein the % latency is the fraction of the FOP response time that is consumed by the FOP. File Operations for Locks Trend The File Operations for Locks Trend panel displays the average latency, maximum latency, call rate for each FOP for Locks over a period of time. File Operations for Read/Write The File Operations for Read/Write panel displays the average latency, maximum latency, call rate for each FOP for Read/Write Operations over a period of time. File Operations for Inode Operations The File Operations for Inode Operations panel displays the average latency, maximum latency, call rate for each FOP for Inode Operations over a period of time. File Operations for Entry Operations The File Operations for Entry Operations panel displays the average latency, maximum latency, call rate for each FOP for Entry Operations over a period of time. 10.3.4. Volume Dashboard Metric Units The following table shows the metrics and their corresponding measurement units. Table 10.3. Volume Dashboard Metric Units Metrics Units Capacity Utilization Percentage % Capacity Available Megabyte/Gigabyte/Terabyte Weekly Growth Rate Megabyte/Gigabyte/Terabyte Capacity Utilization Trend Percentage % Inode Utilization Percentage % Lvm Thin Pool Metadata Percentage % Lvm Thin Pool Data Usage Percentage % File Operations for Locks Trend MB/GB/TB File Operations for Read/Write K File Operations for Inode Operation Trend K File Operations for Entry Operations K 10.4. Brick Level Dashboard 10.4.1. Monitoring and Viewing Brick Status The Status panel displays the status for a given brick. 10.4.2. Monitoring and Viewing Brick Performance Capacity Utilization The Capacity Utilization panel displays the percentage of capacity utilization for a given brick. Capacity Available The Capacity Available panel displays the available capacity for a given volume. Capacity Utilization Trend The Capacity Utilization Trend panel displays the brick capacity utilization over a period of time. Weekly Growth Rate The Weekly Growth Rate panel displays the forecasted weekly growth rate for capacity utilization computed based on daily capacity utilization. Weeks Remaining The Weeks Remaining panel displays the estimated time remaining in weeks till brick reaches full capacity based on the forecasted Weekly Growth Rate. 
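One plausible way to read the Weekly Growth Rate and Weeks Remaining panels together is that the remaining time is roughly the available capacity divided by the forecasted weekly growth. The exact computation the dashboard uses is not documented here, so treat the following as an illustration only, with made-up numbers:
# Illustration only: if 1.2 TB is still available and the forecast is 100 GB/week,
# the brick would reach full capacity in roughly 12 weeks.
awk 'BEGIN { available_gb = 1200; growth_gb_per_week = 100; printf "Weeks remaining: %.1f\n", available_gb / growth_gb_per_week }'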
Healing The Healing panel displays healing information for a given volume based on healinfo. Note The Healing panel will not show any data for volumes without replica. IOPS The IOPS panel displays IOPS for a brick over a period of time. IOPS is based on brick level read and write operations. LVM Thin Pool Metadata % The LVM Thin Pool Metadata % panel displays the utilization of LVM thin pool metadata for a given brick. Monitoring the utilization of LVM thin pool metadata and data usage is important to ensure they don't run out of space. If the data space is exhausted , I/O operations are either queued or failing based on the configuration. If metadata space is exhausted, you will observe error I/O's until the LVM pool is taken offline and repair is performed to fix potential inconsistencies. Moreover, due to the metadata transaction being aborted and the pool doing caching there might be uncommitted (to disk) I/O operations that were acknowledged to the upper storage layers (file system) so those layers will need to have checks/repairs performed as well. LVM Thin Pool Data Usage % The LVM Thin Pool Data Usage % panel displays the LVM thin pool data usage for a given brick. Monitoring the utilization of LVM thin pool metadata and data usage is important to ensure they don't run out of space. If the data space is exhausted , I/O operations are either queued or failing based on the configuration. If metadata space is exhausted, you will observe error I/O's until the LVM pool is taken offline and repair is performed to fix potential inconsistencies. Moreover, due to the metadata transaction being aborted and the pool doing caching there might be uncommitted (to disk) I/O operations that were acknowledged to the upper storage layers (file system) so those layers will need to have repairs performed as well. Throughput The Throughput panel displays brick-level read and write operations fetched using "gluster volume profile." Latency The Latency panel displays latency for a brick over a period of time. Latency is based on the average amount of time a brick spends doing a read or write operation. 10.4.3. Brick Dashboard Metric Units The following table shows the metrics and their corresponding measurement units. Table 10.4. Brick Dashboard Metric Units Metrics Units Capacity Utilization Percentage % Capacity Available Megabyte/Gigabyte/Terabyte Weekly Growth Rate Megabyte/Gigabyte/Terabyte Capacity Utilization Trend Percentage % Inode Utilization Percentage % Lvm Thin Pool Metadata Percentage % Lvm Thin Pool Data Usage Percentage % Disk Throughput Percentage %
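As a quick sanity check of the etcd Time to Live formula quoted earlier in the volume-level dashboard section, the following shell sketch evaluates it for a hypothetical cluster; the volume and brick counts are example values only:
# Hypothetical example: 5 volumes and 30 bricks in total.
volumes=5
bricks=30
# TTL = synchronization interval (60 s) + volumes*20 + bricks*10 + 160
ttl=$(( 60 + volumes * 20 + bricks * 10 + 160 ))
echo "Volume details are removed from etcd after ${ttl} seconds"   # prints 620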
null
https://docs.redhat.com/en/documentation/red_hat_gluster_storage/3.5/html/monitoring_guide/monitoring_cluster_metrics
function::user_int32
function::user_int32 Name function::user_int32 - Retrieves a 32-bit integer value stored in user space Synopsis Arguments addr the user space address to retrieve the 32-bit integer from Description Returns the 32-bit integer value from a given user space address. Returns zero when user space data is not accessible.
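For illustration, a SystemTap one-liner that calls this function might look like the following. It assumes the syscall tapset on your system exposes a buf_uvaddr variable for the write syscall, so adapt the probe point and variable name to your environment:
# Print the first four bytes of each write() buffer from the target PID as a signed 32-bit integer.
stap -x 1234 -e 'probe syscall.write {
  if (pid() == target())
    printf("int32 at buf: %d\n", user_int32(buf_uvaddr))
}'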
[ "user_int32:long(addr:long)" ]
https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/systemtap_tapset_reference/api-user-int32
Chapter 1. Implementing consistent network interface naming
Chapter 1. Implementing consistent network interface naming The udev device manager implements consistent device naming in Red Hat Enterprise Linux. The device manager supports different naming schemes and, by default, assigns fixed names based on firmware, topology, and location information. Without consistent device naming, the Linux kernel assigns names to network interfaces by combining a fixed prefix and an index. The index increases as the kernel initializes the network devices. For example, eth0 represents the first Ethernet device being probed on start-up. If you add another network interface controller to the system, the assignment of the kernel device names is no longer fixed because, after a reboot, the devices can initialize in a different order. In that case, the kernel can name the devices differently. To solve this problem, udev assigns consistent device names. This has the following advantages: Device names are stable across reboots. Device names stay fixed even if you add or remove hardware. Defective hardware can be seamlessly replaced. The network naming is stateless and does not require explicit configuration files. Warning Generally, Red Hat does not support systems where consistent device naming is disabled. For exceptions, see the Red Hat Knowledgebase solution Is it safe to set net.ifnames=0 . 1.1. How the udev device manager renames network interfaces To implement a consistent naming scheme for network interfaces, the udev device manager processes the following rule files in the listed order: Optional: /usr/lib/udev/rules.d/60-net.rules This file exists only if you install the initscripts-rename-device package. The /usr/lib/udev/rules.d/60-net.rules file defines that the deprecated /usr/lib/udev/rename_device helper utility searches for the HWADDR parameter in /etc/sysconfig/network-scripts/ifcfg-* files. If the value set in the variable matches the MAC address of an interface, the helper utility renames the interface to the name set in the DEVICE parameter of the ifcfg file. If the system uses only NetworkManager connection profiles in keyfile format, udev skips this step. Only on Dell systems: /usr/lib/udev/rules.d/71-biosdevname.rules This file exists only if the biosdevname package is installed, and the rules file defines that the biosdevname utility renames the interface according to its naming policy, if it was not renamed in the step. Note Install and use biosdevname only on Dell systems. /usr/lib/udev/rules.d/75-net-description.rules This file defines how udev examines the network interface and sets the properties in udev -internal variables. These variables are then processed in the step by the /usr/lib/udev/rules.d/80-net-setup-link.rules file. Some of the properties can be undefined. /usr/lib/udev/rules.d/80-net-setup-link.rules This file calls the net_setup_link builtin of the udev service, and udev renames the interface based on the order of the policies in the NamePolicy parameter in the /usr/lib/systemd/network/99-default.link file. For further details, see Network interface naming policies . If none of the policies applies, udev does not rename the interface. Additional resources Why are systemd network interface names different between major RHEL versions (Red Hat Knowledgebase) 1.2. Network interface naming policies By default, the udev device manager uses the /usr/lib/systemd/network/99-default.link file to determine which device naming policies to apply when it renames interfaces. 
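To see which of these rule files are actually present on a particular host, and what the default link policy file contains, you can simply list and print them. This is an inspection sketch only and changes nothing:
# List the naming-related udev rules that exist on this system (some are optional packages).
ls /usr/lib/udev/rules.d/ | grep -E '60-net|71-biosdevname|75-net-description|80-net-setup-link'
# Show the default link policy, including the NamePolicy setting described next.
cat /usr/lib/systemd/network/99-default.link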
The NamePolicy parameter in this file defines which policies udev uses and in which order: The following table describes the different actions of udev based on which policy matches first as specified by the NamePolicy parameter: Policy Description Example name keep If the device already has a name that was assigned in the user space, udev does not rename this device. For example, this is the case if the name was assigned during device creation or by a rename operation. kernel If the kernel indicates that a device name is predictable, udev does not rename this device. lo database This policy assigns names based on mappings in the udev hardware database. For details, see the hwdb(7) man page on your system. idrac onboard Device names incorporate firmware or BIOS-provided index numbers for onboard devices. eno1 slot Device names incorporate firmware or BIOS-provided PCI Express (PCIe) hot-plug slot-index numbers. ens1 path Device names incorporate the physical location of the connector of the hardware. enp1s0 mac Device names incorporate the MAC address. By default, Red Hat Enterprise Linux does not use this policy, but administrators can enable it. enx525400d5e0fb Additional resources How the udev device manager renames network interfaces systemd.link(5) man page on your system 1.3. Network interface naming schemes The udev device manager uses certain stable interface attributes that device drivers provide to generate consistent device names. If a new udev version changes how the service creates names for certain interfaces, Red Hat adds a new scheme version and documents the details in the systemd.net-naming-scheme(7) man page on your system. By default, Red Hat Enterprise Linux (RHEL) 9 uses the rhel-9.0 naming scheme, even if you install or update to a later minor version of RHEL. To prevent new drivers from providing more or other attributes for a network interface, the rhel-net-naming-sysattrs package provides the /usr/lib/udev/hwdb.d/50-net-naming-sysattr-allowlist.hwdb database. This database defines which sysfs values the udev service can use to create network interface names. The entries in the database are also versioned and influenced by the scheme version. Note On RHEL 9.4 and later, you can also use all rhel-8.* naming schemes. If you want to use a scheme other than the default, you can switch the network interface naming scheme . For further details about the naming schemes for different device types and platforms, see the systemd.net-naming-scheme(7) man page on your system. 1.4. Switching to a different network interface naming scheme By default, Red Hat Enterprise Linux (RHEL) 9 uses the rhel-9.0 naming scheme, even if you install or update to a later minor version of RHEL. While the default naming scheme fits in most scenarios, there might be reasons to switch to a different scheme version, for example: A new scheme can help to better identify a device if it adds additional attributes, such as a slot number, to an interface name. An new scheme can prevent udev from falling back to the kernel-assigned device names ( eth* ). This happens if the driver does not provide enough unique attributes for two or more interfaces to generate unique names for them. Prerequisites You have access to the console of the server. Procedure List the network interfaces: Record the MAC addresses of the interfaces. 
Optional: Display the ID_NET_NAMING_SCHEME property of a network interface to identify the naming scheme that RHEL currently uses: Note that the property is not available on the lo loopback device. Append the net.naming-scheme= <scheme> option to the command line of all installed kernels, for example: Reboot the system. Based on the MAC addresses you recorded, identify the new names of network interfaces that have changed due to the different naming scheme: After switching the scheme, udev names in this example the device with MAC address 00:00:5e:00:53:1a eno1np0 , whereas it was named eno1 before. Identify which NetworkManager connection profile uses an interface with the name: Set the connection.interface-name property in the connection profile to the new interface name: Reactivate the connection profile: Verification Identify the naming scheme that RHEL now uses by displaying the ID_NET_NAMING_SCHEME property of a network interface: Additional resources Network interface naming schemes 1.5. Customizing the prefix for Ethernet interfaces during installation If you do not want to use the default device-naming policy for Ethernet interfaces, you can set a custom device prefix during the Red Hat Enterprise Linux (RHEL) installation. Important Red Hat supports systems with customized Ethernet prefixes only if you set the prefix during the RHEL installation. Using the prefixdevname utility on already deployed systems is not supported. If you set a device prefix during the installation, the udev service uses the <prefix><index> format for Ethernet interfaces after the installation. For example, if you set the prefix net , the service assigns the names net0 , net1 , and so on to the Ethernet interfaces. The udev service appends the index to the custom prefix, and preserves the index values of known Ethernet interfaces. If you add an interface, udev assigns an index value that is one greater than the previously-assigned index value to the new interface. Prerequisites The prefix consists of ASCII characters. The prefix is an alphanumeric string. The prefix is shorter than 16 characters. The prefix does not conflict with any other well-known network interface prefix, such as eth , eno , ens , and em . Procedure Boot the Red Hat Enterprise Linux installation media. In the boot manager, follow these steps: Select the Install Red Hat Enterprise Linux <version> entry. Press Tab to edit the entry. Append net.ifnames.prefix= <prefix> to the kernel options. Press Enter to start the installation program. Install Red Hat Enterprise Linux. Verification To verify the interface names, display the network interfaces: Additional resources Interactively installing RHEL from installation media 1.6. Configuring user-defined network interface names by using udev rules You can use udev rules to implement custom network interface names that reflect your organization's requirements. Procedure Identify the network interface that you want to rename: Record the MAC address of the interface. Display the device type ID of the interface: Create the /etc/udev/rules.d/70-persistent-net.rules file, and add a rule for each interface that you want to rename: Important Use only 70-persistent-net.rules as a file name if you require consistent device names during the boot process. The dracut utility adds a file with this name to the initrd image if you regenerate the RAM disk image. 
For example, use the following rule to rename the interface with MAC address 00:00:5e:00:53:1a to provider0 : Optional: Regenerate the initrd RAM disk image: You require this step only if you need networking capabilities in the RAM disk. For example, this is the case if the root file system is stored on a network device, such as iSCSI. Identify which NetworkManager connection profile uses the interface that you want to rename: Unset the connection.interface-name property in the connection profile: Temporarily, configure the connection profile to match both the new and the interface name: Reboot the system: Verify that the device with the MAC address that you specified in the link file has been renamed to provider0 : Configure the connection profile to match only the new interface name: You have now removed the old interface name from the connection profile. Reactivate the connection profile: Additional resources udev(7) man page on your system 1.7. Configuring user-defined network interface names by using systemd link files You can use systemd link files to implement custom network interface names that reflect your organization's requirements. Prerequisites You must meet one of these conditions: NetworkManager does not manage this interface, or the corresponding connection profile uses the keyfile format . Procedure Identify the network interface that you want to rename: Record the MAC address of the interface. If it does not already exist, create the /etc/systemd/network/ directory: For each interface that you want to rename, create a 70-*.link file in the /etc/systemd/network/ directory with the following content: Important Use a file name with a 70- prefix to keep the file names consistent with the udev rules-based solution. For example, create the /etc/systemd/network/70-provider0.link file with the following content to rename the interface with MAC address 00:00:5e:00:53:1a to provider0 : Optional: Regenerate the initrd RAM disk image: You require this step only if you need networking capabilities in the RAM disk. For example, this is the case if the root file system is stored on a network device, such as iSCSI. Identify which NetworkManager connection profile uses the interface that you want to rename: Unset the connection.interface-name property in the connection profile: Temporarily, configure the connection profile to match both the new and the interface name: Reboot the system: Verify that the device with the MAC address that you specified in the link file has been renamed to provider0 : Configure the connection profile to match only the new interface name: You have now removed the old interface name from the connection profile. Reactivate the connection profile. Additional resources systemd.link(5) man page on your system 1.8. Assigning alternative names to a network interface by using systemd link files With alternative interface naming, the kernel can assign additional names to network interfaces. You can use these alternative names in the same way as the normal interface names in commands that require a network interface name. Prerequisites You must use ASCII characters for the alternative name. The alternative name must be shorter than 128 characters. Procedure Display the network interface names and their MAC addresses: Record the MAC address of the interface to which you want to assign an alternative name. 
If it does not already exist, create the /etc/systemd/network/ directory: For each interface that must have an alternative name, create a copy of the /usr/lib/systemd/network/99-default.link file with a unique name and .link suffix in the /etc/systemd/network/ directory, for example: Modify the file you created in the step. Rewrite the [Match] section as follows, and append the AlternativeName entries to the [Link] section: For example, create the /etc/systemd/network/70-altname.link file with the following content to assign provider as an alternative name to the interface with MAC address 00:00:5e:00:53:1a : Regenerate the initrd RAM disk image: Reboot the system: Verification Use the alternative interface name. For example, display the IP address settings of the device with the alternative name provider : Additional resources What is AlternativeNamesPolicy in Interface naming scheme? (Red Hat Knowledgebase)
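As a small, concrete illustration of the installer-time prefix customization described earlier in this chapter (Section 1.5), the installer kernel line might be extended as shown below; the prefix net is only an example value:
# Appended to the installer kernel options at the boot menu (rest of the line unchanged):
... net.ifnames.prefix=net
# After installation, the Ethernet interfaces are then named net0, net1, and so on:
ip link show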
[ "NamePolicy=keep kernel database onboard slot path", "ip link show 2: eno1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP mode DEFAULT group default qlen 1000 link/ether 00:00:5e:00:53:1a brd ff:ff:ff:ff:ff:ff", "udevadm info --query=property --property=ID_NET_NAMING_SCHEME /sys/class/net/eno1' ID_NET_NAMING_SCHEME=rhel-9.0", "grubby --update-kernel=ALL --args=net.naming-scheme= rhel-9.4", "reboot", "ip link show 2: eno1np0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP mode DEFAULT group default qlen 1000 link/ether 00:00:5e:00:53:1a brd ff:ff:ff:ff:ff:ff", "nmcli -f device,name connection show DEVICE NAME eno1 example_profile", "nmcli connection modify example_profile connection.interface-name \"eno1np0\"", "nmcli connection up example_profile", "udevadm info --query=property --property=ID_NET_NAMING_SCHEME /sys/class/net/eno1np0' ID_NET_NAMING_SCHEME=_rhel-9.4", "ip link show 2: net0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP mode DEFAULT group default qlen 1000 link/ether 00:00:5e:00:53:1a brd ff:ff:ff:ff:ff:ff", "ip link show enp1s0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP mode DEFAULT group default qlen 1000 link/ether 00:00:5e:00:53:1a brd ff:ff:ff:ff:ff:ff", "cat /sys/class/net/enp1s0/type 1", "SUBSYSTEM==\"net\",ACTION==\"add\",ATTR{address}==\" <MAC_address> \",ATTR{type}==\" <device_type_id> \",NAME=\" <new_interface_name> \"", "SUBSYSTEM==\"net\",ACTION==\"add\",ATTR{address}==\" 00:00:5e:00:53:1a \",ATTR{type}==\" 1 \",NAME=\" provider0 \"", "dracut -f", "nmcli -f device,name connection show DEVICE NAME enp1s0 example_profile", "nmcli connection modify example_profile connection.interface-name \"\"", "nmcli connection modify example_profile match.interface-name \"provider0 enp1s0\"", "reboot", "ip link show provider0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP mode DEFAULT group default qlen 1000 link/ether 00:00:5e:00:53:1a brd ff:ff:ff:ff:ff:ff", "nmcli connection modify example_profile match.interface-name \"provider0\"", "nmcli connection up example_profile", "ip link show enp1s0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP mode DEFAULT group default qlen 1000 link/ether 00:00:5e:00:53:1a brd ff:ff:ff:ff:ff:ff", "mkdir -p /etc/systemd/network/", "[Match] MACAddress= <MAC_address> [Link] Name= <new_interface_name>", "[Match] MACAddress=00:00:5e:00:53:1a [Link] Name=provider0", "dracut -f", "nmcli -f device,name connection show DEVICE NAME enp1s0 example_profile", "nmcli connection modify example_profile connection.interface-name \"\"", "nmcli connection modify example_profile match.interface-name \"provider0 enp1s0\"", "reboot", "ip link show provider0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP mode DEFAULT group default qlen 1000 link/ether 00:00:5e:00:53:1a brd ff:ff:ff:ff:ff:ff", "nmcli connection modify example_profile match.interface-name \"provider0\"", "nmcli connection up example_profile", "ip link show enp1s0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP mode DEFAULT group default qlen 1000 link/ether 00:00:5e:00:53:1a brd ff:ff:ff:ff:ff:ff", "mkdir -p /etc/systemd/network/", "cp /usr/lib/systemd/network/99-default.link /etc/systemd/network/98-lan.link", "[Match] MACAddress= <MAC_address> [Link] AlternativeName= <alternative_interface_name_1> AlternativeName= <alternative_interface_name_2> AlternativeName= <alternative_interface_name_n>", "[Match] MACAddress=00:00:5e:00:53:1a [Link] 
NamePolicy=keep kernel database onboard slot path AlternativeNamesPolicy=database onboard slot path MACAddressPolicy=none AlternativeName=provider", "dracut -f", "reboot", "ip address show provider 2: enp1s0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP group default qlen 1000 link/ether 00:00:5e:00:53:1a brd ff:ff:ff:ff:ff:ff altname provider" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/9/html/configuring_and_managing_networking/consistent-network-interface-device-naming_configuring-and-managing-networking
2.6. DeviceKit-power and devkit-power
2.6. DeviceKit-power and devkit-power In Red Hat Enterprise Linux 6, DeviceKit-power assumes the power management functions that were part of HAL and some of the functions that were part of GNOME Power Manager in previous releases of Red Hat Enterprise Linux (refer also to Section 2.7, "GNOME Power Manager"). DeviceKit-power provides a daemon, an API, and a set of command-line tools. Each power source on the system is represented as a device, whether it is a physical device or not. For example, a laptop battery and an AC power source are both represented as devices. You can access the command-line tools with the devkit-power command and the following options: --enumerate, -e displays an object path for each power device on the system. Example 2.6. Sample Output of Object Paths --dump, -d displays the parameters for all power devices on the system. --wakeups, -w displays the CPU wakeups on the system. --monitor, -m monitors the system for changes to power devices, for example, the connection or disconnection of a source of AC power, or the depletion of a battery. Press Ctrl + C to stop monitoring the system. --monitor-detail monitors the system for changes to power devices, for example, the connection or disconnection of a source of AC power, or the depletion of a battery. The --monitor-detail option presents more detail than the --monitor option. Press Ctrl + C to stop monitoring the system. --show-info object_path, -i object_path displays all information available for a particular object path. Example 2.7. Using the -i option To obtain information about a battery on your system represented by the object path /org/freedesktop/UPower/DeviceKit/power/battery_BAT0, run:
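For example: devkit-power -i /org/freedesktop/UPower/DeviceKit/power/battery_BAT0
Along the same lines, a short sketch of the monitoring options described above; press Ctrl+C to stop either command:
# Watch power devices for changes such as AC plug/unplug or battery depletion.
devkit-power --monitor
# The same, but with more detail for each event.
devkit-power --monitor-detail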
[ "devkit-power -e USD /org/freedesktop/DeviceKit/power/devices/line_power_AC USD /org/freedesktop/UPower/DeviceKit/power/battery_BAT0", "devkit-power -i /org/freedesktop/UPower/DeviceKit/power/battery_BAT0" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/power_management_guide/devicekit-power
Chapter 4. Important changes to OpenShift Jenkins images
Chapter 4. Important changes to OpenShift Jenkins images OpenShift Container Platform 4.11 moves the OpenShift Jenkins and OpenShift Agent Base images to the ocp-tools-4 repository at registry.redhat.io . It also removes the OpenShift Jenkins Maven and NodeJS Agent images from its payload: OpenShift Container Platform 4.11 moves the OpenShift Jenkins and OpenShift Agent Base images to the ocp-tools-4 repository at registry.redhat.io so that Red Hat can produce and update the images outside the OpenShift Container Platform lifecycle. Previously, these images were in the OpenShift Container Platform install payload and the openshift4 repository at registry.redhat.io . OpenShift Container Platform 4.10 deprecated the OpenShift Jenkins Maven and NodeJS Agent images. OpenShift Container Platform 4.11 removes these images from its payload. Red Hat no longer produces these images, and they are not available from the ocp-tools-4 repository at registry.redhat.io . Red Hat maintains the 4.10 and earlier versions of these images for any significant bug fixes or security CVEs, following the OpenShift Container Platform lifecycle policy . These changes support the OpenShift Container Platform 4.10 recommendation to use multiple container Pod Templates with the Jenkins Kubernetes Plugin . 4.1. Relocation of OpenShift Jenkins images OpenShift Container Platform 4.11 makes significant changes to the location and availability of specific OpenShift Jenkins images. Additionally, you can configure when and how to update these images. What stays the same with the OpenShift Jenkins images? The Cluster Samples Operator manages the ImageStream and Template objects for operating the OpenShift Jenkins images. By default, the Jenkins DeploymentConfig object from the Jenkins pod template triggers a redeployment when the Jenkins image changes. By default, this image is referenced by the jenkins:2 image stream tag of Jenkins image stream in the openshift namespace in the ImageStream YAML file in the Samples Operator payload. If you upgrade from OpenShift Container Platform 4.10 and earlier to 4.11, the deprecated maven and nodejs pod templates are still in the default image configuration. If you upgrade from OpenShift Container Platform 4.10 and earlier to 4.11, the jenkins-agent-maven and jenkins-agent-nodejs image streams still exist in your cluster. To maintain these image streams, see the following section, "What happens with the jenkins-agent-maven and jenkins-agent-nodejs image streams in the openshift namespace?" What changes in the support matrix of the OpenShift Jenkins image? Each new image in the ocp-tools-4 repository in the registry.redhat.io registry supports multiple versions of OpenShift Container Platform. When Red Hat updates one of these new images, it is simultaneously available for all versions. This availability is ideal when Red Hat updates an image in response to a security advisory. Initially, this change applies to OpenShift Container Platform 4.11 and later. It is planned that this change will eventually apply to OpenShift Container Platform 4.9 and later. Previously, each Jenkins image supported only one version of OpenShift Container Platform and Red Hat might update those images sequentially over time. What additions are there with the OpenShift Jenkins and Jenkins Agent Base ImageStream and ImageStreamTag objects? By moving from an in-payload image stream to an image stream that references non-payload images, OpenShift Container Platform can define additional image stream tags. 
Red Hat has created a series of new image stream tags to go along with the existing "value": "jenkins:2" and "value": "image-registry.openshift-image-registry.svc:5000/openshift/jenkins-agent-base-rhel8:latest" image stream tags present in OpenShift Container Platform 4.10 and earlier. These new image stream tags address some requests to improve how the Jenkins-related image streams are maintained. About the new image stream tags: ocp-upgrade-redeploy To update your Jenkins image when you upgrade OpenShift Container Platform, use this image stream tag in your Jenkins deployment configuration. This image stream tag corresponds to the existing 2 image stream tag of the jenkins image stream and the latest image stream tag of the jenkins-agent-base-rhel8 image stream. It employs an image tag specific to only one SHA or image digest. When the ocp-tools-4 image changes, such as for Jenkins security advisories, Red Hat Engineering updates the Cluster Samples Operator payload. user-maintained-upgrade-redeploy To manually redeploy Jenkins after you upgrade OpenShift Container Platform, use this image stream tag in your Jenkins deployment configuration. This image stream tag uses the least specific image version indicator available. When you redeploy Jenkins, run the following command: USD oc import-image jenkins:user-maintained-upgrade-redeploy -n openshift . When you issue this command, the OpenShift Container Platform ImageStream controller accesses the registry.redhat.io image registry and stores any updated images in the OpenShift image registry's slot for that Jenkins ImageStreamTag object. Otherwise, if you do not run this command, your Jenkins deployment configuration does not trigger a redeployment. scheduled-upgrade-redeploy To automatically redeploy the latest version of the Jenkins image when it is released, use this image stream tag in your Jenkins deployment configuration. This image stream tag uses the periodic importing of image stream tags feature of the OpenShift Container Platform image stream controller, which checks for changes in the backing image. If the image changes, for example, due to a recent Jenkins security advisory, OpenShift Container Platform triggers a redeployment of your Jenkins deployment configuration. See "Configuring periodic importing of image stream tags" in the following "Additional resources." What happens with the jenkins-agent-maven and jenkins-agent-nodejs image streams in the openshift namespace? The OpenShift Jenkins Maven and NodeJS Agent images for OpenShift Container Platform were deprecated in 4.10, and are removed from the OpenShift Container Platform install payload in 4.11. They do not have alternatives defined in the ocp-tools-4 repository. However, you can work around this by using the sidecar pattern described in the "Jenkins agent" topic mentioned in the following "Additional resources" section. However, the Cluster Samples Operator does not delete the jenkins-agent-maven and jenkins-agent-nodejs image streams created by prior releases, which point to the tags of the respective OpenShift Container Platform payload images on registry.redhat.io . Therefore, you can pull updates to these images by running the following commands: USD oc import-image jenkins-agent-nodejs -n openshift USD oc import-image jenkins-agent-maven -n openshift 4.2. 
Customizing the Jenkins image stream tag To override the default upgrade behavior and control how the Jenkins image is upgraded, you set the image stream tag value that your Jenkins deployment configurations use. The default upgrade behavior is the behavior that existed when the Jenkins image was part of the install payload. The image stream tag names, 2 and ocp-upgrade-redeploy , in the jenkins-rhel.json image stream file use SHA-specific image references. Therefore, when those tags are updated with a new SHA, the OpenShift Container Platform image change controller automatically redeploys the Jenkins deployment configuration from the associated templates, such as jenkins-ephemeral.json or jenkins-persistent.json . For new deployments, to override that default value, you change the value of the JENKINS_IMAGE_STREAM_TAG in the jenkins-ephemeral.json Jenkins template. For example, replace the 2 in "value": "jenkins:2" with one of the following image stream tags: ocp-upgrade-redeploy , the default value, updates your Jenkins image when you upgrade OpenShift Container Platform. user-maintained-upgrade-redeploy requires you to manually redeploy Jenkins by running USD oc import-image jenkins:user-maintained-upgrade-redeploy -n openshift after upgrading OpenShift Container Platform. scheduled-upgrade-redeploy periodically checks the given <image>:<tag> combination for changes and upgrades the image when it changes. The image change controller pulls the changed image and redeploys the Jenkins deployment configuration provisioned by the templates. For more information about this scheduled import policy, see the "Adding tags to image streams" in the following "Additional resources." Note To override the current upgrade value for existing deployments, change the values of the environment variables that correspond to those template parameters. Prerequisites You are running OpenShift Jenkins on OpenShift Container Platform 4.16. You know the namespace where OpenShift Jenkins is deployed. Procedure Set the image stream tag value, replacing <namespace> with namespace where OpenShift Jenkins is deployed and <image_stream_tag> with an image stream tag: Example USD oc patch dc jenkins -p '{"spec":{"triggers":[{"type":"ImageChange","imageChangeParams":{"automatic":true,"containerNames":["jenkins"],"from":{"kind":"ImageStreamTag","namespace":"<namespace>","name":"jenkins:<image_stream_tag>"}}}]}}' Tip Alternatively, to edit the Jenkins deployment configuration YAML, enter USD oc edit dc/jenkins -n <namespace> and update the value: 'jenkins:<image_stream_tag>' line. 4.3. Additional resources Adding tags to image streams Configuring periodic importing of image stream tags Jenkins agent Certified jenkins images Certified jenkins-agent-base images Certified jenkins-agent-maven images Certified jenkins-agent-nodejs images
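To make the examples in this chapter concrete, the following sketch first lists the tags that the jenkins image stream carries and then switches an existing deployment configuration to the scheduled-upgrade-redeploy tag, following the patch template shown above. The my-jenkins namespace is a hypothetical placeholder for the namespace where OpenShift Jenkins is deployed; substitute your own:
# Inspect the image stream in the openshift namespace to see which tags are available.
oc describe imagestream jenkins -n openshift
# Point the Jenkins deployment configuration at the scheduled-upgrade-redeploy tag.
oc patch dc jenkins -n my-jenkins -p '{"spec":{"triggers":[{"type":"ImageChange","imageChangeParams":{"automatic":true,"containerNames":["jenkins"],"from":{"kind":"ImageStreamTag","namespace":"my-jenkins","name":"jenkins:scheduled-upgrade-redeploy"}}}]}}'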
[ "oc import-image jenkins-agent-nodejs -n openshift", "oc import-image jenkins-agent-maven -n openshift", "oc patch dc jenkins -p '{\"spec\":{\"triggers\":[{\"type\":\"ImageChange\",\"imageChangeParams\":{\"automatic\":true,\"containerNames\":[\"jenkins\"],\"from\":{\"kind\":\"ImageStreamTag\",\"namespace\":\"<namespace>\",\"name\":\"jenkins:<image_stream_tag>\"}}}]}}'" ]
https://docs.redhat.com/en/documentation/openshift_container_platform/4.16/html/jenkins/important-changes-to-openshift-jenkins-images
Appendix G. The Source-to-Image (S2I) build process
Appendix G. The Source-to-Image (S2I) build process Source-to-Image (S2I) is a build tool for generating reproducible Docker-formatted container images from online SCM repositories with application sources. With S2I builds, you can easily deliver the latest version of your application into production with shorter build times, decreased resource and network usage, improved security, and a number of other advantages. OpenShift supports multiple build strategies and input sources . For more information, see the Source-to-Image (S2I) Build chapter of the OpenShift Container Platform documentation. You must provide three elements to the S2I process to assemble the final container image: The application sources hosted in an online SCM repository, such as GitHub. The S2I Builder image, which serves as the foundation for the assembled image and provides the ecosystem in which your application is running. Optionally, you can also provide environment variables and parameters that are used by S2I scripts . The process injects your application source and dependencies into the Builder image according to instructions specified in the S2I script, and generates a Docker-formatted container image that runs the assembled application. For more information, check the S2I build requirements , build options and how builds work sections of the OpenShift Container Platform documentation.
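As an illustration of the three inputs described above, the standalone s2i tool can be driven from the command line roughly as follows. The repository URL, image names, and environment variable are placeholders rather than real projects; on OpenShift itself a BuildConfig typically performs the equivalent steps:
# Combine application sources, a builder image, and optional environment variables
# into a runnable application image (names below are hypothetical).
s2i build https://github.com/example/my-node-app nodejs-builder-image my-node-app:latest \
    -e NPM_MIRROR=https://registry.example.com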
null
https://docs.redhat.com/en/documentation/red_hat_build_of_node.js/22/html/node.js_runtime_guide/the-source-to-image-s2i-build-process
3.3. NFS Share Setup
3.3. NFS Share Setup The following procedure configures the NFS share for the NFS daemon failover. You need to perform this procedure on only one node in the cluster. Create the /nfsshare directory. Mount the ext4 file system that you created in Section 3.2, "Configuring an LVM Volume with an ext4 File System" on the /nfsshare directory. Create an exports directory tree on the /nfsshare directory. Place files in the exports directory for the NFS clients to access. For this example, we are creating test files named clientdatafile1 and clientdatafile2 . Unmount the ext4 file system and deactivate the LVM volume group.
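Before unmounting, you may want to confirm that the file system is mounted where you expect and that the client data files are in place. A minimal check could look like this:
# Verify the mount point and list the files created for the NFS clients.
findmnt /nfsshare
find /nfsshare/exports -type f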
[ "mkdir /nfsshare", "mount /dev/my_vg/my_lv /nfsshare", "mkdir -p /nfsshare/exports mkdir -p /nfsshare/exports/export1 mkdir -p /nfsshare/exports/export2", "touch /nfsshare/exports/export1/clientdatafile1 touch /nfsshare/exports/export2/clientdatafile2", "umount /dev/my_vg/my_lv vgchange -an my_vg" ]
https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/high_availability_add-on_administration/s1-NFSsharesetup-HAAA
function::stringat
function::stringat Name function::stringat - Returns the char at a given position in the string. Synopsis Arguments str The string to fetch the character from. pos The position to get the character from. 0 = start of the string. General Syntax stringat:long(str:string, pos:long) Description This function returns the character at a given position in the string, or zero if the string does not have that many characters.
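A small SystemTap illustration of the return value; because the function returns the character as an integer, the sketch prints the numeric code of the character at position 1:
# Prints 101, the ASCII code of 'e', the character at position 1 of "hello".
stap -e 'probe begin { printf("%d\n", stringat("hello", 1)); exit() }'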
[ "function stringat:long(str:string,pos:long)" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/systemtap_tapset_reference/api-stringat
Chapter 1. Introduction to the IdM command-line utilities
Chapter 1. Introduction to the IdM command-line utilities Learn more about the basics of using the Identity Management (IdM) command-line utilities. Prerequisites Installed and accessible IdM server. For details, see Installing Identity Management . To use the IPA command-line interface, authenticate to IdM with a valid Kerberos ticket. For details about obtaining a valid Kerberos ticket, see Logging in to Identity Management from the command line . 1.1. What is the IPA command-line interface The IPA command-line interface (CLI) is the basic command-line interface for Identity Management (IdM) administration. It supports a lot of subcommands for managing IdM, such as the ipa user-add command to add a new user. IPA CLI allows you to: Add, manage, or remove users, groups, hosts and other objects in the network. Manage certificates. Search entries. Display and list objects. Set access rights. Get help with the correct command syntax. 1.2. What is the IPA help The IPA help is a built-in documentation system for the IdM server. The IPA command-line interface (CLI) generates available help topics from loaded IdM plugin modules. To use the IPA help utility, you must: Have an IdM server installed and running. Be authenticated with a valid Kerberos ticket. Entering the ipa help command without options displays information about basic help usage and the most common command examples. You can use the following options for different ipa help use cases: [] - Brackets mean that all parameters are optional and you can write just ipa help and the command will be executed. | - The pipe character means or . Therefore, you can specify a TOPIC , a COMMAND , or topics , or commands , with the basic ipa help command: topics - You can run the command ipa help topics to display a list of topics that are covered by the IPA help, such as user , cert , server and many others. TOPIC - The TOPIC with capital letters is a variable. Therefore, you can specify a particular topic, for example, ipa help user . commands - You can enter the command ipa help commands to display a list of commands which are covered by the IPA help, for example, user-add , ca-enable , server-show and many others. COMMAND - The COMMAND with capital letters is a variable. Therefore, you can specify a particular command, for example, ipa help user-add . 1.3. Using IPA help topics The following procedure describes how to use the IPA help on the command line. Procedure Open a terminal and connect to the IdM server. Enter ipa help topics to display a list of topics covered by help. Select one of the topics and create a command according to the following pattern: ipa help [topic_name] . Instead of the topic_name string, add one of the topics you listed in the step. In the example, we use the following topic: user If the IPA help output is too long and you cannot see the whole text, use the following syntax: You can then scroll down and read the whole help. The IPA CLI displays a help page for the user topic. After reading the overview, you can see many examples with patterns for working with topic commands. 1.4. Using IPA help commands The following procedure describes how to create IPA help commands on the command line. Procedure Open a terminal and connect to the IdM server. Enter ipa help commands to display a list of commands covered by help. Select one of the commands and create a help command according to the following pattern: ipa help <COMMAND> . Instead of the <COMMAND> string, add one of the commands you listed in the step. 
Additional resources ipa man page on your system 1.5. Structure of IPA commands The IPA CLI distinguishes the following types of commands: Built-in commands - Built-in commands are all available in the IdM server. Plug-in provided commands The structure of IPA commands allows you to manage various types of objects. For example: Users, Hosts, DNS records, Certificates, and many others. For most of these objects, the IPA CLI includes commands to: Add ( add ) Modify ( mod ) Delete ( del ) Search ( find ) Display ( show ) Commands have the following structure: ipa user-add , ipa user-mod , ipa user-del , ipa user-find , ipa user-show ipa host-add , ipa host-mod , ipa host-del , ipa host-find , ipa host-show ipa dnsrecord-add , ipa dnsrecord-mod , ipa dnsrecord-del , ipa dnsrecord-find , ipa dnrecord-show You can create a user with the ipa user-add [options] , where [options] are optional. If you use just the ipa user-add command, the script asks you for details one by one. To change an existing object, you need to define the object, therefore the command also includes an object: ipa user-mod USER_NAME [options] . 1.6. Using an IPA command to add a user account to IdM The following procedure describes how to add a new user to the Identity Management (IdM) database using the command line. Prerequisites You need to have administrator privileges to add user accounts to the IdM server. Procedure Open a terminal and connect to the IdM server. Enter the command for adding a new user: The command runs a script that prompts you to provide basic data necessary for creating a user account. In the First name: field, enter the first name of the new user and press the Enter key. In the Last name: field, enter the last name of the new user and press the Enter key. In the User login [suggested user name]: enter the user name, or just press the Enter key to accept the suggested user name. The user name must be unique for the whole IdM database. If an error occurs because that user name already exists, repeat the process with the ipa user-add command and use a different, unique user name. After you add the user name, the user account is added to the IdM database and the IPA command-line interface (CLI) prints the following output: Note By default, a user password is not set for the user account. To add a password while creating a user account, use the ipa user-add command with the following syntax: The IPA CLI then prompts you to add or confirm a user name and password. If the user has been created already, you can add the password with the ipa user-mod command. Additional resources Run the ipa help user-add command for more information about parameters. 1.7. Using an IPA command to modify a user account in IdM You can change many parameters for each user account. For example, you can add a new password to the user. Basic command syntax is different from the user-add syntax because you need to define the existing user account for which you want to perform changes, for example, add a password. Prerequisites You need to have administrator privileges to modify user accounts. Procedure Open a terminal and connect to the IdM server. Enter the ipa user-mod command, specify the user to modify, and any options, such as --password for adding a password: The command runs a script where you can add the new password. Enter the new password and press the Enter key. The IPA CLI prints the following output: The user password is now set for the account and the user can log into IdM. 
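Following the add/mod/show/find pattern described above, you might confirm the change afterwards; euser is the account from the earlier example:
# Display the modified account and confirm that Password is now True.
ipa user-show euser
# Or search for it by last name.
ipa user-find --last=User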
Additional resources Run the ipa help user-mod command for more information about parameters. 1.8. How to supply a list of values to the IdM utilities Identity Management (IdM) stores values for multi-valued attributes in lists. IdM supports the following methods of supplying multi-valued lists: Using the same command-line argument multiple times within the same command invocation: Alternatively, you can enclose the list in curly braces, in which case the shell performs the expansion: The examples above show a command permission-add which adds permissions to an object. The object is not mentioned in the example. Instead of ... you need to add the object for which you want to add permissions. When you update such multi-valued attributes from the command line, IdM completely overwrites the list of values with a new list. Therefore, when updating a multi-valued attribute, you must specify the whole new list, not just a single value you want to add. For example, in the command above, the list of permissions includes reading, writing and deleting. When you decide to update the list with the permission-mod command, you must add all values, otherwise those not mentioned will be deleted. Example 1: - The ipa permission-mod command updates all previously added permissions. or Example 2 - The ipa permission-mod command deletes the --right=delete argument because it is not included in the command: or 1.9. How to use special characters with the IdM utilities When passing command-line arguments that include special characters to the ipa commands, escape these characters with a backslash (\). For example, common special characters include angle brackets (< and >), ampersand (&), asterisk (*), or vertical bar (|). For example, to escape an asterisk (*): Commands containing unescaped special characters do not work as expected because the shell cannot properly parse such characters.
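If you are unsure what the shell will actually pass to ipa, you can preview the brace expansion with echo; this sketch just shows that the two forms are equivalent:
# Brace expansion happens in the shell, before ipa ever runs:
echo ipa permission-mod --right={read,write,delete}
# prints: ipa permission-mod --right=read --right=write --right=delete
# Quoting the braces would suppress the expansion, so leave them unquoted.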
[ "ipa help [TOPIC | COMMAND | topics | commands]", "ipa help topics", "ipa help user", "ipa help user | less", "ipa help commands", "ipa help user-add", "ipa user-add", "---------------------- Added user \"euser\" ---------------------- User login: euser First name: Example Last name: User Full name: Example User Display name: Example User Initials: EU Home directory: /home/euser GECOS: Example User Login shell: /bin/sh Principal name: [email protected] Principal alias: [email protected] Email address: [email protected] UID: 427200006 GID: 427200006 Password: False Member of groups: ipausers Kerberos keys available: False", "ipa user-add --first=Example --last=User --password", "ipa user-mod euser --password", "---------------------- Modified user \"euser\" ---------------------- User login: euser First name: Example Last name: User Home directory: /home/euser Principal name: [email protected] Principal alias: [email protected] Email address: [email protected] UID: 427200006 GID: 427200006 Password: True Member of groups: ipausers Kerberos keys available: True", "ipa permission-add --right=read --permissions=write --permissions=delete", "ipa permission-add --right={read,write,delete}", "ipa permission-mod --right=read --right=write --right=delete", "ipa permission-mod --right={read,write,delete}", "ipa permission-mod --right=read --right=write", "ipa permission-mod --right={read,write}", "ipa certprofile-show certificate_profile --out= exported\\*profile.cfg" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/8/html/managing_idm_users_groups_hosts_and_access_control_rules/introduction-to-the-ipa-command-line-utilities_managing-users-groups-hosts
Chapter 5. Creating continuous queries
Chapter 5. Creating continuous queries Applications can register listeners to receive continual updates about cache entries that match query filters. 5.1. Continuous queries Continuous queries provide applications with real-time notifications about data in Data Grid caches that are filtered by queries. When entries match the query, Data Grid sends the updated data to any listeners, which provides a stream of events instead of applications having to execute the query. Continuous queries can notify applications about incoming matches, for values that have joined the set; updated matches, for matching values that were modified and continue to match; and outgoing matches, for values that have left the set. For example, continuous queries can notify applications about all: Persons with an age between 18 and 25, assuming the Person entity has an age property and is updated by the user application. Transactions for dollar amounts larger than $2000. Times when the lap speed of F1 racers was less than 1:45.00 seconds, assuming the cache contains Lap entries and that laps are entered during the race. Note Continuous queries can use all query capabilities except for grouping, aggregation, and sorting operations. How continuous queries work Continuous queries notify client listeners with the following events: Join A cache entry matches the query. Update A cache entry that matches the query is updated and still matches the query. Leave A cache entry no longer matches the query. When a client registers a continuous query listener, it immediately receives Join events for any entries that match the query. Client listeners receive subsequent events each time a cache operation modifies entries that match the query. Data Grid determines when to send Join, Update, or Leave events to client listeners as follows: If the query does not match either the old or the new value, Data Grid does not send an event. If the query on the old value does not match but the new value does, Data Grid sends a Join event. If the query on both the old and new values match, Data Grid sends an Update event. If the query on the old value matches but the new value does not, Data Grid sends a Leave event. If the query on the old value matches and the entry is then deleted or it expires, Data Grid sends a Leave event. 5.1.1. Continuous queries and Data Grid performance Continuous queries provide a constant stream of updates to applications, which can generate a significant number of events. Data Grid temporarily allocates memory for each event it generates, which can result in memory pressure and potentially lead to OutOfMemoryError exceptions, especially for remote caches. For this reason, you should carefully design your continuous queries to avoid any performance impact. Data Grid strongly recommends that you limit the scope of your continuous queries to the smallest amount of information that you need. To achieve this, you can use projections and predicates. For example, the following statement provides results about only a subset of fields that match the criteria rather than the entire entry: SELECT field1, field2 FROM Entity WHERE x AND y It is also important to ensure that each ContinuousQueryListener you create can quickly process all received events without blocking threads. To achieve this, you should avoid any cache operations that generate events unnecessarily. 5.2. Creating continuous queries You can create continuous queries for remote and embedded caches. Procedure Create a Query object.
Obtain the ContinuousQuery object of your cache by calling the appropriate method: Remote caches: org.infinispan.client.hotrod.Search.getContinuousQuery(RemoteCache<K, V> cache) Embedded caches: org.infinispan.query.Search.getContinuousQuery(Cache<K, V> cache) Register the query and a ContinuousQueryListener object as follows: continuousQuery.addContinuousQueryListener(query, listener); When you no longer need the continuous query, remove the listener as follows: continuousQuery.removeContinuousQueryListener(listener); Continuous query example The following code example demonstrates a simple continuous query with an embedded cache. In this example, the listener receives notifications when any Person instances under the age of 21 are added to the cache. Those Person instances are also added to the "matches" map. When the entries are removed from the cache or their age becomes greater than or equal to 21, they are removed from "matches" map. Registering a Continuous Query import org.infinispan.query.api.continuous.ContinuousQuery; import org.infinispan.query.api.continuous.ContinuousQueryListener; import org.infinispan.query.Search; import org.infinispan.query.dsl.QueryFactory; import org.infinispan.query.dsl.Query; import java.util.Map; import java.util.concurrent.ConcurrentHashMap; [...] // We have a cache of Person objects. Cache<Integer, Person> cache = ... // Create a ContinuousQuery instance on the cache. ContinuousQuery<Integer, Person> continuousQuery = Search.getContinuousQuery(cache); // Define a query. // In this example, we search for Person instances under 21 years of age. QueryFactory queryFactory = Search.getQueryFactory(cache); Query query = queryFactory.create("FROM Person p WHERE p.age < 21"); final Map<Integer, Person> matches = new ConcurrentHashMap<Integer, Person>(); // Define the ContinuousQueryListener. ContinuousQueryListener<Integer, Person> listener = new ContinuousQueryListener<Integer, Person>() { @Override public void resultJoining(Integer key, Person value) { matches.put(key, value); } @Override public void resultUpdated(Integer key, Person value) { // We do not process this event. } @Override public void resultLeaving(Integer key) { matches.remove(key); } }; // Add the listener and the query. continuousQuery.addContinuousQueryListener(query, listener); [...] // Remove the listener to stop receiving notifications. continuousQuery.removeContinuousQueryListener(listener);
[ "SELECT field1, field2 FROM Entity WHERE x AND y", "continuousQuery.addContinuousQueryListener(query, listener);", "continuousQuery.removeContinuousQueryListener(listener);", "import org.infinispan.query.api.continuous.ContinuousQuery; import org.infinispan.query.api.continuous.ContinuousQueryListener; import org.infinispan.query.Search; import org.infinispan.query.dsl.QueryFactory; import org.infinispan.query.dsl.Query; import java.util.Map; import java.util.concurrent.ConcurrentHashMap; [...] // We have a cache of Person objects. Cache<Integer, Person> cache = // Create a ContinuousQuery instance on the cache. ContinuousQuery<Integer, Person> continuousQuery = Search.getContinuousQuery(cache); // Define a query. // In this example, we search for Person instances under 21 years of age. QueryFactory queryFactory = Search.getQueryFactory(cache); Query query = queryFactory.create(\"FROM Person p WHERE p.age < 21\"); final Map<Integer, Person> matches = new ConcurrentHashMap<Integer, Person>(); // Define the ContinuousQueryListener. ContinuousQueryListener<Integer, Person> listener = new ContinuousQueryListener<Integer, Person>() { @Override public void resultJoining(Integer key, Person value) { matches.put(key, value); } @Override public void resultUpdated(Integer key, Person value) { // We do not process this event. } @Override public void resultLeaving(Integer key) { matches.remove(key); } }; // Add the listener and the query. continuousQuery.addContinuousQueryListener(query, listener); [...] // Remove the listener to stop receiving notifications. continuousQuery.removeContinuousQueryListener(listener);" ]
https://docs.redhat.com/en/documentation/red_hat_data_grid/8.4/html/querying_data_grid_caches/query-continuous
Chapter 5. Migrating virtual machines from the command line
Chapter 5. Migrating virtual machines from the command line You can migrate virtual machines to OpenShift Virtualization from the command line. Important You must ensure that all prerequisites are met. 5.1. Permissions needed by non-administrators to work with migration plan components If you are an administrator, you can work with all components of migration plans (for example, providers, network mappings, and migration plans). By default, non-administrators have limited ability to work with migration plans and their components. As an administrator, you can modify their roles to allow them full access to all components, or you can give them limited permissions. For example, administrators can assign non-administrators one or more of the following cluster roles for migration plans:
Table 5.1. Example migration plan roles and their privileges
Role | Description
plans.forklift.konveyor.io-v1beta1-view | Can view migration plans but not create, delete, or modify them
plans.forklift.konveyor.io-v1beta1-edit | Can create, delete, or modify (all parts of edit permissions) individual migration plans
plans.forklift.konveyor.io-v1beta1-admin | All edit privileges and the ability to delete the entire collection of migration plans
Note that pre-defined cluster roles include a resource (for example, plans ), an API group (for example, forklift.konveyor.io-v1beta1 ) and an action (for example, view , edit ). As a more comprehensive example, you can grant non-administrators the following set of permissions per namespace: Create and modify storage maps, network maps, and migration plans for the namespaces they have access to Attach providers created by administrators to storage maps, network maps, and migration plans Not be able to create providers or to change system settings
Table 5.2. Example permissions required for non-administrators to work with migration plan components but not create providers
Actions | API group | Resource
get, list, watch, create, update, patch, delete | forklift.konveyor.io | plans
get, list, watch, create, update, patch, delete | forklift.konveyor.io | migrations
get, list, watch, create, update, patch, delete | forklift.konveyor.io | hooks
get, list, watch | forklift.konveyor.io | providers
get, list, watch, create, update, patch, delete | forklift.konveyor.io | networkmaps
get, list, watch, create, update, patch, delete | forklift.konveyor.io | storagemaps
get, list, watch | forklift.konveyor.io | forkliftcontrollers
create, patch, delete | Empty string | secrets
Note Non-administrators need to have the create permissions that are part of edit roles for network maps and for storage maps to create migration plans, even when using a template for a network map or a storage map. 5.2. Retrieving a VMware vSphere moRef When you migrate VMs with a VMware vSphere source provider using Migration Toolkit for Virtualization (MTV) from the CLI, you need to know the managed object reference (moRef) of certain entities in vSphere, such as datastores, networks, and VMs. You can retrieve the moRef of one or more vSphere entities from the Inventory service. You can then use each moRef as a reference for retrieving the moRef of another entity.
Procedure Retrieve the routes for the project: oc get route -n openshift-mtv Retrieve the Inventory service route: USD oc get route <inventory_service> -n openshift-mtv Retrieve the access token: USD TOKEN=USD(oc whoami -t) Retrieve the moRef of a VMware vSphere provider: USD curl -H "Authorization: Bearer USDTOKEN" https://<inventory_service_route>/providers/vsphere -k Retrieve the datastores of a VMware vSphere source provider: USD curl -H "Authorization: Bearer USDTOKEN" https://<inventory_service_route>/providers/vsphere/<provider id>/datastores/ -k Example output [ { "id": "datastore-11", "parent": { "kind": "Folder", "id": "group-s5" }, "path": "/Datacenter/datastore/v2v_general_porpuse_ISCSI_DC", "revision": 46, "name": "v2v_general_porpuse_ISCSI_DC", "selfLink": "providers/vsphere/01278af6-e1e4-4799-b01b-d5ccc8dd0201/datastores/datastore-11" }, { "id": "datastore-730", "parent": { "kind": "Folder", "id": "group-s5" }, "path": "/Datacenter/datastore/f01-h27-640-SSD_2", "revision": 46, "name": "f01-h27-640-SSD_2", "selfLink": "providers/vsphere/01278af6-e1e4-4799-b01b-d5ccc8dd0201/datastores/datastore-730" }, ... In this example, the moRef of the datastore v2v_general_porpuse_ISCSI_DC is datastore-11 and the moRef of the datastore f01-h27-640-SSD_2 is datastore-730 . 5.3. Migrating virtual machines You migrate virtual machines (VMs) from the command line (CLI) by creating MTV custom resources (CRs). The CRs and the migration procedure vary by source provider. Important You must specify a name for cluster-scoped CRs. You must specify both a name and a namespace for namespace-scoped CRs. To migrate to or from an OpenShift cluster that is different from the one the migration plan is defined on, you must have an OpenShift Virtualization service account token with cluster-admin privileges. 5.3.1. Migrating from a VMware vSphere source provider You can migrate from a VMware vSphere source provider by using the CLI. Procedure Create a Secret manifest for the source provider credentials: USD cat << EOF | oc apply -f - apiVersion: v1 kind: Secret metadata: name: <secret> namespace: <namespace> ownerReferences: 1 - apiVersion: forklift.konveyor.io/v1beta1 kind: Provider name: <provider_name> uid: <provider_uid> labels: createdForProviderType: vsphere createdForResourceType: providers type: Opaque stringData: user: <user> 2 password: <password> 3 insecureSkipVerify: <"true"/"false"> 4 cacert: | 5 <ca_certificate> url: <api_end_point> 6 EOF 1 The ownerReferences section is optional. 2 Specify the vCenter user or the ESX/ESXi user. 3 Specify the password of the vCenter user or the ESX/ESXi user. 4 Specify "true" to skip certificate verification, specify "false" to verify the certificate. Defaults to "false" if not specified. Skipping certificate verification proceeds with an insecure migration and then the certificate is not required. Insecure migration means that the transferred data is sent over an insecure connection and potentially sensitive data could be exposed. 5 When this field is not set and skip certificate verification is disabled, MTV attempts to use the system CA. 6 Specify the API endpoint URL of the vCenter or the ESX/ESXi, for example, https://<vCenter_host>/sdk . 
Create a Provider manifest for the source provider: USD cat << EOF | oc apply -f - apiVersion: forklift.konveyor.io/v1beta1 kind: Provider metadata: name: <source_provider> namespace: <namespace> spec: type: vsphere url: <api_end_point> 1 settings: vddkInitImage: <VDDK_image> 2 sdkEndpoint: vcenter 3 secret: name: <secret> 4 namespace: <namespace> EOF 1 Specify the URL of the API endpoint, for example, https://<vCenter_host>/sdk . 2 Optional, but it is strongly recommended to create a VDDK image to accelerate migrations. Follow OpenShift documentation to specify the VDDK image you created. 3 Options: vcenter or esxi . 4 Specify the name of the provider Secret CR. Create a Host manifest: USD cat << EOF | oc apply -f - apiVersion: forklift.konveyor.io/v1beta1 kind: Host metadata: name: <vmware_host> namespace: <namespace> spec: provider: namespace: <namespace> name: <source_provider> 1 id: <source_host_mor> 2 ipAddress: <source_network_ip> 3 EOF 1 Specify the name of the VMware vSphere Provider CR. 2 Specify the Managed Object Reference (moRef) of the VMware vSphere host. To retrieve the moRef, see Retrieving a VMware vSphere moRef . 3 Specify the IP address of the VMware vSphere migration network. Create a NetworkMap manifest to map the source and destination networks: USD cat << EOF | oc apply -f - apiVersion: forklift.konveyor.io/v1beta1 kind: NetworkMap metadata: name: <network_map> namespace: <namespace> spec: map: - destination: name: <network_name> type: pod 1 source: 2 id: <source_network_id> name: <source_network_name> - destination: name: <network_attachment_definition> 3 namespace: <network_attachment_definition_namespace> 4 type: multus source: id: <source_network_id> name: <source_network_name> provider: source: name: <source_provider> namespace: <namespace> destination: name: <destination_provider> namespace: <namespace> EOF 1 Allowed values are pod and multus . 2 You can use either the id or the name parameter to specify the source network. For id , specify the VMware vSphere network Managed Object Reference (moRef). To retrieve the moRef, see Retrieving a VMware vSphere moRef . 3 Specify a network attachment definition for each additional OpenShift Virtualization network. 4 Required only when type is multus . Specify the namespace of the OpenShift Virtualization network attachment definition. Create a StorageMap manifest to map source and destination storage: USD cat << EOF | oc apply -f - apiVersion: forklift.konveyor.io/v1beta1 kind: StorageMap metadata: name: <storage_map> namespace: <namespace> spec: map: - destination: storageClass: <storage_class> accessMode: <access_mode> 1 source: id: <source_datastore> 2 provider: source: name: <source_provider> namespace: <namespace> destination: name: <destination_provider> namespace: <namespace> EOF 1 Allowed values are ReadWriteOnce and ReadWriteMany . 2 Specify the VMware vSphere datastore moRef. For example, f2737930-b567-451a-9ceb-2887f6207009 . To retrieve the moRef, see Retrieving a VMware vSphere moRef . 
Optional: Create a Hook manifest to run custom code on a VM during the phase specified in the Plan CR: USD cat << EOF | oc apply -f - apiVersion: forklift.konveyor.io/v1beta1 kind: Hook metadata: name: <hook> namespace: <namespace> spec: image: quay.io/konveyor/hook-runner playbook: | LS0tCi0gbmFtZTogTWFpbgogIGhvc3RzOiBsb2NhbGhvc3QKICB0YXNrczoKICAtIG5hbWU6IExv YWQgUGxhbgogICAgaW5jbHVkZV92YXJzOgogICAgICBmaWxlOiAiL3RtcC9ob29rL3BsYW4ueW1s IgogICAgICBuYW1lOiBwbGFuCiAgLSBuYW1lOiBMb2FkIFdvcmtsb2FkCiAgICBpbmNsdWRlX3Zh cnM6CiAgICAgIGZpbGU6ICIvdG1wL2hvb2svd29ya2xvYWQueW1sIgogICAgICBuYW1lOiB3b3Jr bG9hZAoK EOF where: playbook refers to an optional Base64-encoded Ansible Playbook. If you specify a playbook, the image must be hook-runner . Note You can use the default hook-runner image or specify a custom image. If you specify a custom image, you do not have to specify a playbook. Create a Plan manifest for the migration: USD cat << EOF | oc apply -f - apiVersion: forklift.konveyor.io/v1beta1 kind: Plan metadata: name: <plan> 1 namespace: <namespace> spec: warm: false 2 provider: source: name: <source_provider> namespace: <namespace> destination: name: <destination_provider> namespace: <namespace> map: 3 network: 4 name: <network_map> 5 namespace: <namespace> storage: 6 name: <storage_map> 7 namespace: <namespace> targetNamespace: <target_namespace> vms: 8 - id: <source_vm> 9 - name: <source_vm> hooks: 10 - hook: namespace: <namespace> name: <hook> 11 step: <step> 12 EOF 1 Specify the name of the Plan CR. 2 Specify whether the migration is warm - true - or cold - false . If you specify a warm migration without specifying a value for the cutover parameter in the Migration manifest, only the precopy stage will run. 3 Specify only one network map and one storage map per plan. 4 Specify a network mapping even if the VMs to be migrated are not assigned to a network. The mapping can be empty in this case. 5 Specify the name of the NetworkMap CR. 6 Specify a storage mapping even if the VMs to be migrated are not assigned with disk images. The mapping can be empty in this case. 7 Specify the name of the StorageMap CR. 8 You can use either the id or the name parameter to specify the source VMs. 9 Specify the VMware vSphere VM moRef. To retrieve the moRef, see Retrieving a VMware vSphere moRef . 10 Optional: You can specify up to two hooks for a VM. Each hook must run during a separate migration step. 11 Specify the name of the Hook CR. 12 Allowed values are PreHook , before the migration plan starts, or PostHook , after the migration is complete. Create a Migration manifest to run the Plan CR: USD cat << EOF | oc apply -f - apiVersion: forklift.konveyor.io/v1beta1 kind: Migration metadata: name: <name_of_migration_cr> namespace: <namespace> spec: plan: name: <name_of_plan_cr> namespace: <namespace> cutover: <optional_cutover_time> EOF Note If you specify a cutover time, use the ISO 8601 format with the UTC time offset, for example, 2024-04-04T01:23:45.678+09:00 . 5.3.2. Migrating from a Red Hat Virtualization source provider You can migrate from a Red Hat Virtualization (RHV) source provider by using the CLI. Prerequisites If you are migrating a virtual machine with a direct LUN disk, ensure that the nodes in the OpenShift Virtualization destination cluster that the VM is expected to run on can access the backend storage. 
Note Unlike disk images that are copied from a source provider to a target provider, LUNs are detached , but not removed , from virtual machines in the source provider and then attached to the virtual machines (VMs) that are created in the target provider. LUNs are not removed from the source provider during the migration in case fallback to the source provider is required. However, before re-attaching the LUNs to VMs in the source provider, ensure that the LUNs are not used by VMs on the target environment at the same time, which might lead to data corruption. Procedure Create a Secret manifest for the source provider credentials: USD cat << EOF | oc apply -f - apiVersion: v1 kind: Secret metadata: name: <secret> namespace: <namespace> ownerReferences: 1 - apiVersion: forklift.konveyor.io/v1beta1 kind: Provider name: <provider_name> uid: <provider_uid> labels: createdForProviderType: ovirt createdForResourceType: providers type: Opaque stringData: user: <user> 2 password: <password> 3 insecureSkipVerify: <"true"/"false"> 4 cacert: | 5 <ca_certificate> url: <api_end_point> 6 EOF 1 The ownerReferences section is optional. 2 Specify the RHV Manager user. 3 Specify the user password. 4 Specify "true" to skip certificate verification, specify "false" to verify the certificate. Defaults to "false" if not specified. Skipping certificate verification proceeds with an insecure migration and then the certificate is not required. Insecure migration means that the transferred data is sent over an insecure connection and potentially sensitive data could be exposed. 5 Enter the Manager CA certificate, unless it was replaced by a third-party certificate, in which case, enter the Manager Apache CA certificate. You can retrieve the Manager CA certificate at https://<engine_host>/ovirt-engine/services/pki-resource?resource=ca-certificate&format=X509-PEM-CA . 6 Specify the API endpoint URL, for example, https://<engine_host>/ovirt-engine/api . Create a Provider manifest for the source provider: USD cat << EOF | oc apply -f - apiVersion: forklift.konveyor.io/v1beta1 kind: Provider metadata: name: <source_provider> namespace: <namespace> spec: type: ovirt url: <api_end_point> 1 secret: name: <secret> 2 namespace: <namespace> EOF 1 Specify the URL of the API endpoint, for example, https://<engine_host>/ovirt-engine/api . 2 Specify the name of provider Secret CR. Create a NetworkMap manifest to map the source and destination networks: USD cat << EOF | oc apply -f - apiVersion: forklift.konveyor.io/v1beta1 kind: NetworkMap metadata: name: <network_map> namespace: <namespace> spec: map: - destination: name: <network_name> type: pod 1 source: 2 id: <source_network_id> name: <source_network_name> - destination: name: <network_attachment_definition> 3 namespace: <network_attachment_definition_namespace> 4 type: multus source: id: <source_network_id> name: <source_network_name> provider: source: name: <source_provider> namespace: <namespace> destination: name: <destination_provider> namespace: <namespace> EOF 1 Allowed values are pod and multus . 2 You can use either the id or the name parameter to specify the source network. For id , specify the RHV network Universal Unique ID (UUID). 3 Specify a network attachment definition for each additional OpenShift Virtualization network. 4 Required only when type is multus . Specify the namespace of the OpenShift Virtualization network attachment definition. 
Create a StorageMap manifest to map source and destination storage: USD cat << EOF | oc apply -f - apiVersion: forklift.konveyor.io/v1beta1 kind: StorageMap metadata: name: <storage_map> namespace: <namespace> spec: map: - destination: storageClass: <storage_class> accessMode: <access_mode> 1 source: id: <source_storage_domain> 2 provider: source: name: <source_provider> namespace: <namespace> destination: name: <destination_provider> namespace: <namespace> EOF 1 Allowed values are ReadWriteOnce and ReadWriteMany . 2 Specify the RHV storage domain UUID. For example, f2737930-b567-451a-9ceb-2887f6207009 . Optional: Create a Hook manifest to run custom code on a VM during the phase specified in the Plan CR: USD cat << EOF | oc apply -f - apiVersion: forklift.konveyor.io/v1beta1 kind: Hook metadata: name: <hook> namespace: <namespace> spec: image: quay.io/konveyor/hook-runner playbook: | LS0tCi0gbmFtZTogTWFpbgogIGhvc3RzOiBsb2NhbGhvc3QKICB0YXNrczoKICAtIG5hbWU6IExv YWQgUGxhbgogICAgaW5jbHVkZV92YXJzOgogICAgICBmaWxlOiAiL3RtcC9ob29rL3BsYW4ueW1s IgogICAgICBuYW1lOiBwbGFuCiAgLSBuYW1lOiBMb2FkIFdvcmtsb2FkCiAgICBpbmNsdWRlX3Zh cnM6CiAgICAgIGZpbGU6ICIvdG1wL2hvb2svd29ya2xvYWQueW1sIgogICAgICBuYW1lOiB3b3Jr bG9hZAoK EOF where: playbook refers to an optional Base64-encoded Ansible Playbook. If you specify a playbook, the image must be hook-runner . Note You can use the default hook-runner image or specify a custom image. If you specify a custom image, you do not have to specify a playbook. Create a Plan manifest for the migration: USD cat << EOF | oc apply -f - apiVersion: forklift.konveyor.io/v1beta1 kind: Plan metadata: name: <plan> 1 namespace: <namespace> preserveClusterCpuModel: true 2 spec: warm: false 3 provider: source: name: <source_provider> namespace: <namespace> destination: name: <destination_provider> namespace: <namespace> map: 4 network: 5 name: <network_map> 6 namespace: <namespace> storage: 7 name: <storage_map> 8 namespace: <namespace> targetNamespace: <target_namespace> vms: 9 - id: <source_vm> 10 - name: <source_vm> hooks: 11 - hook: namespace: <namespace> name: <hook> 12 step: <step> 13 EOF 1 Specify the name of the Plan CR. 2 See note below. 3 Specify whether the migration is warm or cold. If you specify a warm migration without specifying a value for the cutover parameter in the Migration manifest, only the precopy stage will run. 4 Specify only one network map and one storage map per plan. 5 Specify a network mapping even if the VMs to be migrated are not assigned to a network. The mapping can be empty in this case. 6 Specify the name of the NetworkMap CR. 7 Specify a storage mapping even if the VMs to be migrated are not assigned with disk images. The mapping can be empty in this case. 8 Specify the name of the StorageMap CR. 9 You can use either the id or the name parameter to specify the source VMs. 10 Specify the RHV VM UUID. 11 Optional: You can specify up to two hooks for a VM. Each hook must run during a separate migration step. 12 Specify the name of the Hook CR. 13 Allowed values are PreHook , before the migration plan starts, or PostHook , after the migration is complete. Note If the migrated machines is set with a custom CPU model, it will be set with that CPU model in the destination cluster, regardless of the setting of preserveClusterCpuModel . 
If the migrated machine is not set with a custom CPU model: If preserveClusterCpuModel is set to 'true`, MTV checks the CPU model of the VM when it runs in RHV, based on the cluster's configuration, and then sets the migrated VM with that CPU model. If preserveClusterCpuModel is set to 'false`, MTV does not set a CPU type and the VM is set with the default CPU model of the destination cluster. Create a Migration manifest to run the Plan CR: USD cat << EOF | oc apply -f - apiVersion: forklift.konveyor.io/v1beta1 kind: Migration metadata: name: <name_of_migration_cr> namespace: <namespace> spec: plan: name: <name_of_plan_cr> namespace: <namespace> cutover: <optional_cutover_time> EOF Note If you specify a cutover time, use the ISO 8601 format with the UTC time offset, for example, 2024-04-04T01:23:45.678+09:00 . 5.3.3. Migrating from an OpenStack source provider You can migrate from an OpenStack source provider by using the CLI. Procedure Create a Secret manifest for the source provider credentials: USD cat << EOF | oc apply -f - apiVersion: v1 kind: Secret metadata: name: <secret> namespace: <namespace> ownerReferences: 1 - apiVersion: forklift.konveyor.io/v1beta1 kind: Provider name: <provider_name> uid: <provider_uid> labels: createdForProviderType: openstack createdForResourceType: providers type: Opaque stringData: user: <user> 2 password: <password> 3 insecureSkipVerify: <"true"/"false"> 4 domainName: <domain_name> projectName: <project_name> regionName: <region_name> cacert: | 5 <ca_certificate> url: <api_end_point> 6 EOF 1 The ownerReferences section is optional. 2 Specify the OpenStack user. 3 Specify the user OpenStack password. 4 Specify "true" to skip certificate verification, specify "false" to verify the certificate. Defaults to "false" if not specified. Skipping certificate verification proceeds with an insecure migration and then the certificate is not required. Insecure migration means that the transferred data is sent over an insecure connection and potentially sensitive data could be exposed. 5 When this field is not set and skip certificate verification is disabled, MTV attempts to use the system CA. 6 Specify the API endpoint URL, for example, https://<identity_service>/v3 . Create a Provider manifest for the source provider: USD cat << EOF | oc apply -f - apiVersion: forklift.konveyor.io/v1beta1 kind: Provider metadata: name: <source_provider> namespace: <namespace> spec: type: openstack url: <api_end_point> 1 secret: name: <secret> 2 namespace: <namespace> EOF 1 Specify the URL of the API endpoint. 2 Specify the name of provider Secret CR. Create a NetworkMap manifest to map the source and destination networks: USD cat << EOF | oc apply -f - apiVersion: forklift.konveyor.io/v1beta1 kind: NetworkMap metadata: name: <network_map> namespace: <namespace> spec: map: - destination: name: <network_name> type: pod 1 source: 2 id: <source_network_id> name: <source_network_name> - destination: name: <network_attachment_definition> 3 namespace: <network_attachment_definition_namespace> 4 type: multus source: id: <source_network_id> name: <source_network_name> provider: source: name: <source_provider> namespace: <namespace> destination: name: <destination_provider> namespace: <namespace> EOF 1 Allowed values are pod and multus . 2 You can use either the id or the name parameter to specify the source network. For id , specify the OpenStack network UUID. 3 Specify a network attachment definition for each additional OpenShift Virtualization network. 
4 Required only when type is multus . Specify the namespace of the OpenShift Virtualization network attachment definition. Create a StorageMap manifest to map source and destination storage: USD cat << EOF | oc apply -f - apiVersion: forklift.konveyor.io/v1beta1 kind: StorageMap metadata: name: <storage_map> namespace: <namespace> spec: map: - destination: storageClass: <storage_class> accessMode: <access_mode> 1 source: id: <source_volume_type> 2 provider: source: name: <source_provider> namespace: <namespace> destination: name: <destination_provider> namespace: <namespace> EOF 1 Allowed values are ReadWriteOnce and ReadWriteMany . 2 Specify the OpenStack volume_type UUID. For example, f2737930-b567-451a-9ceb-2887f6207009 . Optional: Create a Hook manifest to run custom code on a VM during the phase specified in the Plan CR: USD cat << EOF | oc apply -f - apiVersion: forklift.konveyor.io/v1beta1 kind: Hook metadata: name: <hook> namespace: <namespace> spec: image: quay.io/konveyor/hook-runner playbook: | LS0tCi0gbmFtZTogTWFpbgogIGhvc3RzOiBsb2NhbGhvc3QKICB0YXNrczoKICAtIG5hbWU6IExv YWQgUGxhbgogICAgaW5jbHVkZV92YXJzOgogICAgICBmaWxlOiAiL3RtcC9ob29rL3BsYW4ueW1s IgogICAgICBuYW1lOiBwbGFuCiAgLSBuYW1lOiBMb2FkIFdvcmtsb2FkCiAgICBpbmNsdWRlX3Zh cnM6CiAgICAgIGZpbGU6ICIvdG1wL2hvb2svd29ya2xvYWQueW1sIgogICAgICBuYW1lOiB3b3Jr bG9hZAoK EOF where: playbook refers to an optional Base64-encoded Ansible Playbook. If you specify a playbook, the image must be hook-runner . Note You can use the default hook-runner image or specify a custom image. If you specify a custom image, you do not have to specify a playbook. Create a Plan manifest for the migration: USD cat << EOF | oc apply -f - apiVersion: forklift.konveyor.io/v1beta1 kind: Plan metadata: name: <plan> 1 namespace: <namespace> spec: provider: source: name: <source_provider> namespace: <namespace> destination: name: <destination_provider> namespace: <namespace> map: 2 network: 3 name: <network_map> 4 namespace: <namespace> storage: 5 name: <storage_map> 6 namespace: <namespace> targetNamespace: <target_namespace> vms: 7 - id: <source_vm> 8 - name: <source_vm> hooks: 9 - hook: namespace: <namespace> name: <hook> 10 step: <step> 11 EOF 1 Specify the name of the Plan CR. 2 Specify only one network map and one storage map per plan. 3 Specify a network mapping, even if the VMs to be migrated are not assigned to a network. The mapping can be empty in this case. 4 Specify the name of the NetworkMap CR. 5 Specify a storage mapping, even if the VMs to be migrated are not assigned with disk images. The mapping can be empty in this case. 6 Specify the name of the StorageMap CR. 7 You can use either the id or the name parameter to specify the source VMs. 8 Specify the OpenStack VM UUID. 9 Optional: You can specify up to two hooks for a VM. Each hook must run during a separate migration step. 10 Specify the name of the Hook CR. 11 Allowed values are PreHook , before the migration plan starts, or PostHook , after the migration is complete. Create a Migration manifest to run the Plan CR: USD cat << EOF | oc apply -f - apiVersion: forklift.konveyor.io/v1beta1 kind: Migration metadata: name: <name_of_migration_cr> namespace: <namespace> spec: plan: name: <name_of_plan_cr> namespace: <namespace> cutover: <optional_cutover_time> EOF Note If you specify a cutover time, use the ISO 8601 format with the UTC time offset, for example, 2024-04-04T01:23:45.678+09:00 . 5.3.4. 
Migrating from an Open Virtual Appliance (OVA) source provider You can migrate from Open Virtual Appliance (OVA) files that were created by VMware vSphere as a source provider by using the CLI. Procedure Create a Secret manifest for the source provider credentials: USD cat << EOF | oc apply -f - apiVersion: v1 kind: Secret metadata: name: <secret> namespace: <namespace> ownerReferences: 1 - apiVersion: forklift.konveyor.io/v1beta1 kind: Provider name: <provider_name> uid: <provider_uid> labels: createdForProviderType: ova createdForResourceType: providers type: Opaque stringData: url: <nfs_server:/nfs_path> 2 EOF 1 The ownerReferences section is optional. 2 where: nfs_server is an IP or hostname of the server where the share was created and nfs_path is the path on the server where the OVA files are stored. Create a Provider manifest for the source provider: USD cat << EOF | oc apply -f - apiVersion: forklift.konveyor.io/v1beta1 kind: Provider metadata: name: <source_provider> namespace: <namespace> spec: type: ova url: <nfs_server:/nfs_path> 1 secret: name: <secret> 2 namespace: <namespace> EOF 1 where: nfs_server is an IP or hostname of the server where the share was created and nfs_path is the path on the server where the OVA files are stored. 2 Specify the name of provider Secret CR. Create a NetworkMap manifest to map the source and destination networks: USD cat << EOF | oc apply -f - apiVersion: forklift.konveyor.io/v1beta1 kind: NetworkMap metadata: name: <network_map> namespace: <namespace> spec: map: - destination: name: <network_name> type: pod 1 source: id: <source_network_id> 2 - destination: name: <network_attachment_definition> 3 namespace: <network_attachment_definition_namespace> 4 type: multus source: id: <source_network_id> provider: source: name: <source_provider> namespace: <namespace> destination: name: <destination_provider> namespace: <namespace> EOF 1 Allowed values are pod and multus . 2 Specify the OVA network Universal Unique ID (UUID). 3 Specify a network attachment definition for each additional OpenShift Virtualization network. 4 Required only when type is multus . Specify the namespace of the OpenShift Virtualization network attachment definition. Create a StorageMap manifest to map source and destination storage: USD cat << EOF | oc apply -f - apiVersion: forklift.konveyor.io/v1beta1 kind: StorageMap metadata: name: <storage_map> namespace: <namespace> spec: map: - destination: storageClass: <storage_class> accessMode: <access_mode> 1 source: name: Dummy storage for source provider <provider_name> 2 provider: source: name: <source_provider> namespace: <namespace> destination: name: <destination_provider> namespace: <namespace> EOF 1 Allowed values are ReadWriteOnce and ReadWriteMany . 2 For OVA, the StorageMap can map only a single storage, which all the disks from the OVA are associated with, to a storage class at the destination. For this reason, the storage is referred to in the UI as "Dummy storage for source provider <provider_name>". In the YAML, write the phrase as it appears above, without the quotation marks and replacing <provider_name> with the actual name of the provider. 
Optional: Create a Hook manifest to run custom code on a VM during the phase specified in the Plan CR: USD cat << EOF | oc apply -f - apiVersion: forklift.konveyor.io/v1beta1 kind: Hook metadata: name: <hook> namespace: <namespace> spec: image: quay.io/konveyor/hook-runner playbook: | LS0tCi0gbmFtZTogTWFpbgogIGhvc3RzOiBsb2NhbGhvc3QKICB0YXNrczoKICAtIG5hbWU6IExv YWQgUGxhbgogICAgaW5jbHVkZV92YXJzOgogICAgICBmaWxlOiAiL3RtcC9ob29rL3BsYW4ueW1s IgogICAgICBuYW1lOiBwbGFuCiAgLSBuYW1lOiBMb2FkIFdvcmtsb2FkCiAgICBpbmNsdWRlX3Zh cnM6CiAgICAgIGZpbGU6ICIvdG1wL2hvb2svd29ya2xvYWQueW1sIgogICAgICBuYW1lOiB3b3Jr bG9hZAoK EOF where: playbook refers to an optional Base64-encoded Ansible Playbook. If you specify a playbook, the image must be hook-runner . Note You can use the default hook-runner image or specify a custom image. If you specify a custom image, you do not have to specify a playbook. Create a Plan manifest for the migration: USD cat << EOF | oc apply -f - apiVersion: forklift.konveyor.io/v1beta1 kind: Plan metadata: name: <plan> 1 namespace: <namespace> spec: provider: source: name: <source_provider> namespace: <namespace> destination: name: <destination_provider> namespace: <namespace> map: 2 network: 3 name: <network_map> 4 namespace: <namespace> storage: 5 name: <storage_map> 6 namespace: <namespace> targetNamespace: <target_namespace> vms: 7 - id: <source_vm> 8 - name: <source_vm> hooks: 9 - hook: namespace: <namespace> name: <hook> 10 step: <step> 11 EOF 1 Specify the name of the Plan CR. 2 Specify only one network map and one storage map per plan. 3 Specify a network mapping, even if the VMs to be migrated are not assigned to a network. The mapping can be empty in this case. 4 Specify the name of the NetworkMap CR. 5 Specify a storage mapping even if the VMs to be migrated are not assigned with disk images. The mapping can be empty in this case. 6 Specify the name of the StorageMap CR. 7 You can use either the id or the name parameter to specify the source VMs. 8 Specify the OVA VM UUID. 9 Optional: You can specify up to two hooks for a VM. Each hook must run during a separate migration step. 10 Specify the name of the Hook CR. 11 Allowed values are PreHook , before the migration plan starts, or PostHook , after the migration is complete. Create a Migration manifest to run the Plan CR: USD cat << EOF | oc apply -f - apiVersion: forklift.konveyor.io/v1beta1 kind: Migration metadata: name: <name_of_migration_cr> namespace: <namespace> spec: plan: name: <name_of_plan_cr> namespace: <namespace> cutover: <optional_cutover_time> EOF Note If you specify a cutover time, use the ISO 8601 format with the UTC time offset, for example, 2024-04-04T01:23:45.678+09:00 . 5.3.5. Migrating from a Red Hat OpenShift Virtualization source provider You can use a Red Hat OpenShift Virtualization provider as either a source provider or as a destination provider. Procedure Create a Secret manifest for the source provider credentials: USD cat << EOF | oc apply -f - apiVersion: v1 kind: Secret metadata: name: <secret> namespace: <namespace> ownerReferences: 1 - apiVersion: forklift.konveyor.io/v1beta1 kind: Provider name: <provider_name> uid: <provider_uid> labels: createdForProviderType: openshift createdForResourceType: providers type: Opaque stringData: token: <token> 2 password: <password> 3 insecureSkipVerify: <"true"/"false"> 4 cacert: | 5 <ca_certificate> url: <api_end_point> 6 EOF 1 The ownerReferences section is optional. 2 Specify a token for a service account with cluster-admin privileges. 
If both token and url are left blank, the local OpenShift cluster is used. 3 Specify the user password. 4 Specify "true" to skip certificate verification, specify "false" to verify the certificate. Defaults to "false" if not specified. Skipping certificate verification proceeds with an insecure migration and then the certificate is not required. Insecure migration means that the transferred data is sent over an insecure connection and potentially sensitive data could be exposed. 5 When this field is not set and skip certificate verification is disabled, MTV attempts to use the system CA. 6 Specify the URL of the endpoint of the API server. Create a Provider manifest for the source provider: USD cat << EOF | oc apply -f - apiVersion: forklift.konveyor.io/v1beta1 kind: Provider metadata: name: <source_provider> namespace: <namespace> spec: type: openshift url: <api_end_point> 1 secret: name: <secret> 2 namespace: <namespace> EOF 1 Specify the URL of the endpoint of the API server. 2 Specify the name of provider Secret CR. Create a NetworkMap manifest to map the source and destination networks: USD cat << EOF | oc apply -f - apiVersion: forklift.konveyor.io/v1beta1 kind: NetworkMap metadata: name: <network_map> namespace: <namespace> spec: map: - destination: name: <network_name> type: pod 1 source: name: <network_name> type: pod - destination: name: <network_attachment_definition> 2 namespace: <network_attachment_definition_namespace> 3 type: multus source: name: <network_attachment_definition> namespace: <network_attachment_definition_namespace> type: multus provider: source: name: <source_provider> namespace: <namespace> destination: name: <destination_provider> namespace: <namespace> EOF 1 Allowed values are pod and multus . 2 Specify a network attachment definition for each additional OpenShift Virtualization network. Specify the namespace either by using the namespace property or with a name built as follows: <network_namespace>/<network_name> . 3 Required only when type is multus . Specify the namespace of the OpenShift Virtualization network attachment definition. Create a StorageMap manifest to map source and destination storage: USD cat << EOF | oc apply -f - apiVersion: forklift.konveyor.io/v1beta1 kind: StorageMap metadata: name: <storage_map> namespace: <namespace> spec: map: - destination: storageClass: <storage_class> accessMode: <access_mode> 1 source: name: <storage_class> provider: source: name: <source_provider> namespace: <namespace> destination: name: <destination_provider> namespace: <namespace> EOF 1 Allowed values are ReadWriteOnce and ReadWriteMany . Optional: Create a Hook manifest to run custom code on a VM during the phase specified in the Plan CR: USD cat << EOF | oc apply -f - apiVersion: forklift.konveyor.io/v1beta1 kind: Hook metadata: name: <hook> namespace: <namespace> spec: image: quay.io/konveyor/hook-runner playbook: | LS0tCi0gbmFtZTogTWFpbgogIGhvc3RzOiBsb2NhbGhvc3QKICB0YXNrczoKICAtIG5hbWU6IExv YWQgUGxhbgogICAgaW5jbHVkZV92YXJzOgogICAgICBmaWxlOiAiL3RtcC9ob29rL3BsYW4ueW1s IgogICAgICBuYW1lOiBwbGFuCiAgLSBuYW1lOiBMb2FkIFdvcmtsb2FkCiAgICBpbmNsdWRlX3Zh cnM6CiAgICAgIGZpbGU6ICIvdG1wL2hvb2svd29ya2xvYWQueW1sIgogICAgICBuYW1lOiB3b3Jr bG9hZAoK EOF where: playbook refers to an optional Base64-encoded Ansible Playbook. If you specify a playbook, the image must be hook-runner . Note You can use the default hook-runner image or specify a custom image. If you specify a custom image, you do not have to specify a playbook. 
Create a Plan manifest for the migration: USD cat << EOF | oc apply -f - apiVersion: forklift.konveyor.io/v1beta1 kind: Plan metadata: name: <plan> 1 namespace: <namespace> spec: provider: source: name: <source_provider> namespace: <namespace> destination: name: <destination_provider> namespace: <namespace> map: 2 network: 3 name: <network_map> 4 namespace: <namespace> storage: 5 name: <storage_map> 6 namespace: <namespace> targetNamespace: <target_namespace> vms: - name: <source_vm> namespace: <namespace> hooks: 7 - hook: namespace: <namespace> name: <hook> 8 step: <step> 9 EOF 1 Specify the name of the Plan CR. 2 Specify only one network map and one storage map per plan. 3 Specify a network mapping, even if the VMs to be migrated are not assigned to a network. The mapping can be empty in this case. 4 Specify the name of the NetworkMap CR. 5 Specify a storage mapping, even if the VMs to be migrated are not assigned with disk images. The mapping can be empty in this case. 6 Specify the name of the StorageMap CR. 7 Optional: You can specify up to two hooks for a VM. Each hook must run during a separate migration step. 8 Specify the name of the Hook CR. 9 Allowed values are PreHook , before the migration plan starts, or PostHook , after the migration is complete. Create a Migration manifest to run the Plan CR: USD cat << EOF | oc apply -f - apiVersion: forklift.konveyor.io/v1beta1 kind: Migration metadata: name: <name_of_migration_cr> namespace: <namespace> spec: plan: name: <name_of_plan_cr> namespace: <namespace> cutover: <optional_cutover_time> EOF Note If you specify a cutover time, use the ISO 8601 format with the UTC time offset, for example, 2024-04-04T01:23:45.678+09:00 . 5.4. Canceling a migration You can cancel an entire migration or individual virtual machines (VMs) while a migration is in progress from the command line interface (CLI). Canceling an entire migration Delete the Migration CR: USD oc delete migration <migration> -n <namespace> 1 1 Specify the name of the Migration CR. Canceling the migration of individual VMs Add the individual VMs to the spec.cancel block of the Migration manifest: USD cat << EOF | oc apply -f - apiVersion: forklift.konveyor.io/v1beta1 kind: Migration metadata: name: <migration> namespace: <namespace> ... spec: cancel: - id: vm-102 1 - id: vm-203 - name: rhel8-vm EOF 1 You can specify a VM by using the id key or the name key. The value of the id key is the managed object reference , for a VMware VM, or the VM UUID , for a RHV VM. Retrieve the Migration CR to monitor the progress of the remaining VMs: USD oc get migration/<migration> -n <namespace> -o yaml
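After the Migration CR is created, you can also follow its progress from the CLI. The commands below are a minimal sketch that reuses the placeholder names from the examples above; the exact fields under status vary by MTV version, so treat the jsonpath expression as illustrative rather than definitive.
# Inspect the Plan CR, whose status conditions reflect overall progress.
oc get plan <name_of_plan_cr> -n <namespace> -o yaml
# List only the condition types reported on the Plan CR (illustrative jsonpath; verify the field names against your MTV version).
oc get plan <name_of_plan_cr> -n <namespace> -o jsonpath='{.status.conditions[*].type}'
# Watch the Migration CR for updates until the migration completes (Ctrl+C to stop).
oc get migration <name_of_migration_cr> -n <namespace> -w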
[ "get route -n openshift-mtv", "oc get route <inventory_service> -n openshift-mtv", "TOKEN=USD(oc whoami -t)", "curl -H \"Authorization: Bearer USDTOKEN\" https://<inventory_service_route>/providers/vsphere -k", "curl -H \"Authorization: Bearer USDTOKEN\" https://<inventory_service_route>/providers/vsphere/<provider id>/datastores/ -k", "[ { \"id\": \"datastore-11\", \"parent\": { \"kind\": \"Folder\", \"id\": \"group-s5\" }, \"path\": \"/Datacenter/datastore/v2v_general_porpuse_ISCSI_DC\", \"revision\": 46, \"name\": \"v2v_general_porpuse_ISCSI_DC\", \"selfLink\": \"providers/vsphere/01278af6-e1e4-4799-b01b-d5ccc8dd0201/datastores/datastore-11\" }, { \"id\": \"datastore-730\", \"parent\": { \"kind\": \"Folder\", \"id\": \"group-s5\" }, \"path\": \"/Datacenter/datastore/f01-h27-640-SSD_2\", \"revision\": 46, \"name\": \"f01-h27-640-SSD_2\", \"selfLink\": \"providers/vsphere/01278af6-e1e4-4799-b01b-d5ccc8dd0201/datastores/datastore-730\" },", "cat << EOF | oc apply -f - apiVersion: v1 kind: Secret metadata: name: <secret> namespace: <namespace> ownerReferences: 1 - apiVersion: forklift.konveyor.io/v1beta1 kind: Provider name: <provider_name> uid: <provider_uid> labels: createdForProviderType: vsphere createdForResourceType: providers type: Opaque stringData: user: <user> 2 password: <password> 3 insecureSkipVerify: <\"true\"/\"false\"> 4 cacert: | 5 <ca_certificate> url: <api_end_point> 6 EOF", "cat << EOF | oc apply -f - apiVersion: forklift.konveyor.io/v1beta1 kind: Provider metadata: name: <source_provider> namespace: <namespace> spec: type: vsphere url: <api_end_point> 1 settings: vddkInitImage: <VDDK_image> 2 sdkEndpoint: vcenter 3 secret: name: <secret> 4 namespace: <namespace> EOF", "cat << EOF | oc apply -f - apiVersion: forklift.konveyor.io/v1beta1 kind: Host metadata: name: <vmware_host> namespace: <namespace> spec: provider: namespace: <namespace> name: <source_provider> 1 id: <source_host_mor> 2 ipAddress: <source_network_ip> 3 EOF", "cat << EOF | oc apply -f - apiVersion: forklift.konveyor.io/v1beta1 kind: NetworkMap metadata: name: <network_map> namespace: <namespace> spec: map: - destination: name: <network_name> type: pod 1 source: 2 id: <source_network_id> name: <source_network_name> - destination: name: <network_attachment_definition> 3 namespace: <network_attachment_definition_namespace> 4 type: multus source: id: <source_network_id> name: <source_network_name> provider: source: name: <source_provider> namespace: <namespace> destination: name: <destination_provider> namespace: <namespace> EOF", "cat << EOF | oc apply -f - apiVersion: forklift.konveyor.io/v1beta1 kind: StorageMap metadata: name: <storage_map> namespace: <namespace> spec: map: - destination: storageClass: <storage_class> accessMode: <access_mode> 1 source: id: <source_datastore> 2 provider: source: name: <source_provider> namespace: <namespace> destination: name: <destination_provider> namespace: <namespace> EOF", "cat << EOF | oc apply -f - apiVersion: forklift.konveyor.io/v1beta1 kind: Hook metadata: name: <hook> namespace: <namespace> spec: image: quay.io/konveyor/hook-runner playbook: | LS0tCi0gbmFtZTogTWFpbgogIGhvc3RzOiBsb2NhbGhvc3QKICB0YXNrczoKICAtIG5hbWU6IExv YWQgUGxhbgogICAgaW5jbHVkZV92YXJzOgogICAgICBmaWxlOiAiL3RtcC9ob29rL3BsYW4ueW1s IgogICAgICBuYW1lOiBwbGFuCiAgLSBuYW1lOiBMb2FkIFdvcmtsb2FkCiAgICBpbmNsdWRlX3Zh cnM6CiAgICAgIGZpbGU6ICIvdG1wL2hvb2svd29ya2xvYWQueW1sIgogICAgICBuYW1lOiB3b3Jr bG9hZAoK EOF", "cat << EOF | oc apply -f - apiVersion: forklift.konveyor.io/v1beta1 kind: Plan metadata: name: 
<plan> 1 namespace: <namespace> spec: warm: false 2 provider: source: name: <source_provider> namespace: <namespace> destination: name: <destination_provider> namespace: <namespace> map: 3 network: 4 name: <network_map> 5 namespace: <namespace> storage: 6 name: <storage_map> 7 namespace: <namespace> targetNamespace: <target_namespace> vms: 8 - id: <source_vm> 9 - name: <source_vm> hooks: 10 - hook: namespace: <namespace> name: <hook> 11 step: <step> 12 EOF", "cat << EOF | oc apply -f - apiVersion: forklift.konveyor.io/v1beta1 kind: Migration metadata: name: <name_of_migration_cr> namespace: <namespace> spec: plan: name: <name_of_plan_cr> namespace: <namespace> cutover: <optional_cutover_time> EOF", "cat << EOF | oc apply -f - apiVersion: v1 kind: Secret metadata: name: <secret> namespace: <namespace> ownerReferences: 1 - apiVersion: forklift.konveyor.io/v1beta1 kind: Provider name: <provider_name> uid: <provider_uid> labels: createdForProviderType: ovirt createdForResourceType: providers type: Opaque stringData: user: <user> 2 password: <password> 3 insecureSkipVerify: <\"true\"/\"false\"> 4 cacert: | 5 <ca_certificate> url: <api_end_point> 6 EOF", "cat << EOF | oc apply -f - apiVersion: forklift.konveyor.io/v1beta1 kind: Provider metadata: name: <source_provider> namespace: <namespace> spec: type: ovirt url: <api_end_point> 1 secret: name: <secret> 2 namespace: <namespace> EOF", "cat << EOF | oc apply -f - apiVersion: forklift.konveyor.io/v1beta1 kind: NetworkMap metadata: name: <network_map> namespace: <namespace> spec: map: - destination: name: <network_name> type: pod 1 source: 2 id: <source_network_id> name: <source_network_name> - destination: name: <network_attachment_definition> 3 namespace: <network_attachment_definition_namespace> 4 type: multus source: id: <source_network_id> name: <source_network_name> provider: source: name: <source_provider> namespace: <namespace> destination: name: <destination_provider> namespace: <namespace> EOF", "cat << EOF | oc apply -f - apiVersion: forklift.konveyor.io/v1beta1 kind: StorageMap metadata: name: <storage_map> namespace: <namespace> spec: map: - destination: storageClass: <storage_class> accessMode: <access_mode> 1 source: id: <source_storage_domain> 2 provider: source: name: <source_provider> namespace: <namespace> destination: name: <destination_provider> namespace: <namespace> EOF", "cat << EOF | oc apply -f - apiVersion: forklift.konveyor.io/v1beta1 kind: Hook metadata: name: <hook> namespace: <namespace> spec: image: quay.io/konveyor/hook-runner playbook: | LS0tCi0gbmFtZTogTWFpbgogIGhvc3RzOiBsb2NhbGhvc3QKICB0YXNrczoKICAtIG5hbWU6IExv YWQgUGxhbgogICAgaW5jbHVkZV92YXJzOgogICAgICBmaWxlOiAiL3RtcC9ob29rL3BsYW4ueW1s IgogICAgICBuYW1lOiBwbGFuCiAgLSBuYW1lOiBMb2FkIFdvcmtsb2FkCiAgICBpbmNsdWRlX3Zh cnM6CiAgICAgIGZpbGU6ICIvdG1wL2hvb2svd29ya2xvYWQueW1sIgogICAgICBuYW1lOiB3b3Jr bG9hZAoK EOF", "cat << EOF | oc apply -f - apiVersion: forklift.konveyor.io/v1beta1 kind: Plan metadata: name: <plan> 1 namespace: <namespace> preserveClusterCpuModel: true 2 spec: warm: false 3 provider: source: name: <source_provider> namespace: <namespace> destination: name: <destination_provider> namespace: <namespace> map: 4 network: 5 name: <network_map> 6 namespace: <namespace> storage: 7 name: <storage_map> 8 namespace: <namespace> targetNamespace: <target_namespace> vms: 9 - id: <source_vm> 10 - name: <source_vm> hooks: 11 - hook: namespace: <namespace> name: <hook> 12 step: <step> 13 EOF", "cat << EOF | oc apply -f - apiVersion: forklift.konveyor.io/v1beta1 kind: 
Migration metadata: name: <name_of_migration_cr> namespace: <namespace> spec: plan: name: <name_of_plan_cr> namespace: <namespace> cutover: <optional_cutover_time> EOF", "cat << EOF | oc apply -f - apiVersion: v1 kind: Secret metadata: name: <secret> namespace: <namespace> ownerReferences: 1 - apiVersion: forklift.konveyor.io/v1beta1 kind: Provider name: <provider_name> uid: <provider_uid> labels: createdForProviderType: openstack createdForResourceType: providers type: Opaque stringData: user: <user> 2 password: <password> 3 insecureSkipVerify: <\"true\"/\"false\"> 4 domainName: <domain_name> projectName: <project_name> regionName: <region_name> cacert: | 5 <ca_certificate> url: <api_end_point> 6 EOF", "cat << EOF | oc apply -f - apiVersion: forklift.konveyor.io/v1beta1 kind: Provider metadata: name: <source_provider> namespace: <namespace> spec: type: openstack url: <api_end_point> 1 secret: name: <secret> 2 namespace: <namespace> EOF", "cat << EOF | oc apply -f - apiVersion: forklift.konveyor.io/v1beta1 kind: NetworkMap metadata: name: <network_map> namespace: <namespace> spec: map: - destination: name: <network_name> type: pod 1 source: 2 id: <source_network_id> name: <source_network_name> - destination: name: <network_attachment_definition> 3 namespace: <network_attachment_definition_namespace> 4 type: multus source: id: <source_network_id> name: <source_network_name> provider: source: name: <source_provider> namespace: <namespace> destination: name: <destination_provider> namespace: <namespace> EOF", "cat << EOF | oc apply -f - apiVersion: forklift.konveyor.io/v1beta1 kind: StorageMap metadata: name: <storage_map> namespace: <namespace> spec: map: - destination: storageClass: <storage_class> accessMode: <access_mode> 1 source: id: <source_volume_type> 2 provider: source: name: <source_provider> namespace: <namespace> destination: name: <destination_provider> namespace: <namespace> EOF", "cat << EOF | oc apply -f - apiVersion: forklift.konveyor.io/v1beta1 kind: Hook metadata: name: <hook> namespace: <namespace> spec: image: quay.io/konveyor/hook-runner playbook: | LS0tCi0gbmFtZTogTWFpbgogIGhvc3RzOiBsb2NhbGhvc3QKICB0YXNrczoKICAtIG5hbWU6IExv YWQgUGxhbgogICAgaW5jbHVkZV92YXJzOgogICAgICBmaWxlOiAiL3RtcC9ob29rL3BsYW4ueW1s IgogICAgICBuYW1lOiBwbGFuCiAgLSBuYW1lOiBMb2FkIFdvcmtsb2FkCiAgICBpbmNsdWRlX3Zh cnM6CiAgICAgIGZpbGU6ICIvdG1wL2hvb2svd29ya2xvYWQueW1sIgogICAgICBuYW1lOiB3b3Jr bG9hZAoK EOF", "cat << EOF | oc apply -f - apiVersion: forklift.konveyor.io/v1beta1 kind: Plan metadata: name: <plan> 1 namespace: <namespace> spec: provider: source: name: <source_provider> namespace: <namespace> destination: name: <destination_provider> namespace: <namespace> map: 2 network: 3 name: <network_map> 4 namespace: <namespace> storage: 5 name: <storage_map> 6 namespace: <namespace> targetNamespace: <target_namespace> vms: 7 - id: <source_vm> 8 - name: <source_vm> hooks: 9 - hook: namespace: <namespace> name: <hook> 10 step: <step> 11 EOF", "cat << EOF | oc apply -f - apiVersion: forklift.konveyor.io/v1beta1 kind: Migration metadata: name: <name_of_migration_cr> namespace: <namespace> spec: plan: name: <name_of_plan_cr> namespace: <namespace> cutover: <optional_cutover_time> EOF", "cat << EOF | oc apply -f - apiVersion: v1 kind: Secret metadata: name: <secret> namespace: <namespace> ownerReferences: 1 - apiVersion: forklift.konveyor.io/v1beta1 kind: Provider name: <provider_name> uid: <provider_uid> labels: createdForProviderType: ova createdForResourceType: providers type: Opaque stringData: url: 
<nfs_server:/nfs_path> 2 EOF", "cat << EOF | oc apply -f - apiVersion: forklift.konveyor.io/v1beta1 kind: Provider metadata: name: <source_provider> namespace: <namespace> spec: type: ova url: <nfs_server:/nfs_path> 1 secret: name: <secret> 2 namespace: <namespace> EOF", "cat << EOF | oc apply -f - apiVersion: forklift.konveyor.io/v1beta1 kind: NetworkMap metadata: name: <network_map> namespace: <namespace> spec: map: - destination: name: <network_name> type: pod 1 source: id: <source_network_id> 2 - destination: name: <network_attachment_definition> 3 namespace: <network_attachment_definition_namespace> 4 type: multus source: id: <source_network_id> provider: source: name: <source_provider> namespace: <namespace> destination: name: <destination_provider> namespace: <namespace> EOF", "cat << EOF | oc apply -f - apiVersion: forklift.konveyor.io/v1beta1 kind: StorageMap metadata: name: <storage_map> namespace: <namespace> spec: map: - destination: storageClass: <storage_class> accessMode: <access_mode> 1 source: name: Dummy storage for source provider <provider_name> 2 provider: source: name: <source_provider> namespace: <namespace> destination: name: <destination_provider> namespace: <namespace> EOF", "cat << EOF | oc apply -f - apiVersion: forklift.konveyor.io/v1beta1 kind: Hook metadata: name: <hook> namespace: <namespace> spec: image: quay.io/konveyor/hook-runner playbook: | LS0tCi0gbmFtZTogTWFpbgogIGhvc3RzOiBsb2NhbGhvc3QKICB0YXNrczoKICAtIG5hbWU6IExv YWQgUGxhbgogICAgaW5jbHVkZV92YXJzOgogICAgICBmaWxlOiAiL3RtcC9ob29rL3BsYW4ueW1s IgogICAgICBuYW1lOiBwbGFuCiAgLSBuYW1lOiBMb2FkIFdvcmtsb2FkCiAgICBpbmNsdWRlX3Zh cnM6CiAgICAgIGZpbGU6ICIvdG1wL2hvb2svd29ya2xvYWQueW1sIgogICAgICBuYW1lOiB3b3Jr bG9hZAoK EOF", "cat << EOF | oc apply -f - apiVersion: forklift.konveyor.io/v1beta1 kind: Plan metadata: name: <plan> 1 namespace: <namespace> spec: provider: source: name: <source_provider> namespace: <namespace> destination: name: <destination_provider> namespace: <namespace> map: 2 network: 3 name: <network_map> 4 namespace: <namespace> storage: 5 name: <storage_map> 6 namespace: <namespace> targetNamespace: <target_namespace> vms: 7 - id: <source_vm> 8 - name: <source_vm> hooks: 9 - hook: namespace: <namespace> name: <hook> 10 step: <step> 11 EOF", "cat << EOF | oc apply -f - apiVersion: forklift.konveyor.io/v1beta1 kind: Migration metadata: name: <name_of_migration_cr> namespace: <namespace> spec: plan: name: <name_of_plan_cr> namespace: <namespace> cutover: <optional_cutover_time> EOF", "cat << EOF | oc apply -f - apiVersion: v1 kind: Secret metadata: name: <secret> namespace: <namespace> ownerReferences: 1 - apiVersion: forklift.konveyor.io/v1beta1 kind: Provider name: <provider_name> uid: <provider_uid> labels: createdForProviderType: openshift createdForResourceType: providers type: Opaque stringData: token: <token> 2 password: <password> 3 insecureSkipVerify: <\"true\"/\"false\"> 4 cacert: | 5 <ca_certificate> url: <api_end_point> 6 EOF", "cat << EOF | oc apply -f - apiVersion: forklift.konveyor.io/v1beta1 kind: Provider metadata: name: <source_provider> namespace: <namespace> spec: type: openshift url: <api_end_point> 1 secret: name: <secret> 2 namespace: <namespace> EOF", "cat << EOF | oc apply -f - apiVersion: forklift.konveyor.io/v1beta1 kind: NetworkMap metadata: name: <network_map> namespace: <namespace> spec: map: - destination: name: <network_name> type: pod 1 source: name: <network_name> type: pod - destination: name: <network_attachment_definition> 2 namespace: 
<network_attachment_definition_namespace> 3 type: multus source: name: <network_attachment_definition> namespace: <network_attachment_definition_namespace> type: multus provider: source: name: <source_provider> namespace: <namespace> destination: name: <destination_provider> namespace: <namespace> EOF", "cat << EOF | oc apply -f - apiVersion: forklift.konveyor.io/v1beta1 kind: StorageMap metadata: name: <storage_map> namespace: <namespace> spec: map: - destination: storageClass: <storage_class> accessMode: <access_mode> 1 source: name: <storage_class> provider: source: name: <source_provider> namespace: <namespace> destination: name: <destination_provider> namespace: <namespace> EOF", "cat << EOF | oc apply -f - apiVersion: forklift.konveyor.io/v1beta1 kind: Hook metadata: name: <hook> namespace: <namespace> spec: image: quay.io/konveyor/hook-runner playbook: | LS0tCi0gbmFtZTogTWFpbgogIGhvc3RzOiBsb2NhbGhvc3QKICB0YXNrczoKICAtIG5hbWU6IExv YWQgUGxhbgogICAgaW5jbHVkZV92YXJzOgogICAgICBmaWxlOiAiL3RtcC9ob29rL3BsYW4ueW1s IgogICAgICBuYW1lOiBwbGFuCiAgLSBuYW1lOiBMb2FkIFdvcmtsb2FkCiAgICBpbmNsdWRlX3Zh cnM6CiAgICAgIGZpbGU6ICIvdG1wL2hvb2svd29ya2xvYWQueW1sIgogICAgICBuYW1lOiB3b3Jr bG9hZAoK EOF", "cat << EOF | oc apply -f - apiVersion: forklift.konveyor.io/v1beta1 kind: Plan metadata: name: <plan> 1 namespace: <namespace> spec: provider: source: name: <source_provider> namespace: <namespace> destination: name: <destination_provider> namespace: <namespace> map: 2 network: 3 name: <network_map> 4 namespace: <namespace> storage: 5 name: <storage_map> 6 namespace: <namespace> targetNamespace: <target_namespace> vms: - name: <source_vm> namespace: <namespace> hooks: 7 - hook: namespace: <namespace> name: <hook> 8 step: <step> 9 EOF", "cat << EOF | oc apply -f - apiVersion: forklift.konveyor.io/v1beta1 kind: Migration metadata: name: <name_of_migration_cr> namespace: <namespace> spec: plan: name: <name_of_plan_cr> namespace: <namespace> cutover: <optional_cutover_time> EOF", "oc delete migration <migration> -n <namespace> 1", "cat << EOF | oc apply -f - apiVersion: forklift.konveyor.io/v1beta1 kind: Migration metadata: name: <migration> namespace: <namespace> spec: cancel: - id: vm-102 1 - id: vm-203 - name: rhel8-vm EOF", "oc get migration/<migration> -n <namespace> -o yaml" ]
https://docs.redhat.com/en/documentation/migration_toolkit_for_virtualization/2.6/html/installing_and_using_the_migration_toolkit_for_virtualization/migrating-virtual-machines-from-the-command-line_mtv
Providing feedback on Red Hat documentation
Providing feedback on Red Hat documentation We appreciate your feedback on our documentation. Let us know how we can improve it. Submitting feedback through Jira (account required) Log in to the Jira website. Click Create in the top navigation bar. Enter a descriptive title in the Summary field. Enter your suggestion for improvement in the Description field. Include links to the relevant parts of the documentation. Click Create at the bottom of the dialogue.
null
https://docs.redhat.com/en/documentation/net/8.0/html/getting_started_with_.net_on_rhel_9/proc_providing-feedback-on-red-hat-documentation_getting-started-with-dotnet-on-rhel-9
Chapter 72. XSLT
Chapter 72. XSLT Only producer is supported The XSLT component allows you to process a message using an XSLT template. This can be ideal when using Templating to generate a response for requests. 72.1. URI format The URI format contains templateName , which can be one of the following: the classpath-local URI of the template to invoke, or the complete URL of the remote template. You can append query options to the URI in the following format: ?option=value&option=value&... Table 72.1. Example URIs URI Description xslt:com/acme/mytransform.xsl Refers to the file com/acme/mytransform.xsl on the classpath xslt:file:///foo/bar.xsl Refers to the file /foo/bar.xsl xslt:http://acme.com/cheese/foo.xsl Refers to the remote http resource 72.2. Configuring Options Camel components are configured on two separate levels: component level endpoint level 72.2.1. Configuring Component Options The component level is the highest level, and it holds general and common configurations that are inherited by the endpoints. For example, a component may have security settings, credentials for authentication, URLs for network connections, and so forth. Some components only have a few options, and others may have many. Because components typically have preconfigured defaults that are commonly used, you may often only need to configure a few options on a component, or none at all. Configuring components can be done with the Component DSL , in a configuration file (application.properties|yaml), or directly with Java code. 72.2.2. Configuring Endpoint Options You will find yourself configuring endpoints most often, as endpoints have many options that let you configure exactly what you need the endpoint to do. The options are also categorized by whether the endpoint is used as a consumer (from) or as a producer (to), or used for both. Configuring endpoints is most often done directly in the endpoint URI as path and query parameters. You can also use the Endpoint DSL as a type-safe way of configuring endpoints. A good practice when configuring options is to use Property Placeholders , which allow you to avoid hardcoding URLs, port numbers, sensitive information, and other settings. In other words, placeholders let you externalize the configuration from your code, giving more flexibility and reuse. The following two sections list all the options, first for the component and then for the endpoint. 72.3. Component Options The XSLT component supports 7 options, which are listed below. Name Description Default Type contentCache (producer) Cache for the resource content (the stylesheet file) when it is loaded. If set to false Camel will reload the stylesheet file on each message processing. This is good for development. A cached stylesheet can be forced to reload at runtime via JMX using the clearCachedStylesheet operation. true boolean lazyStartProducer (producer) Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. false boolean autowiredEnabled (advanced) Whether autowiring is enabled. 
This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc. true boolean transformerFactoryClass (advanced) To use a custom XSLT transformer factory, specified as a FQN class name. String transformerFactoryConfigurationStrategy (advanced) A configuration strategy to apply on freshly created instances of TransformerFactory. TransformerFactoryConfigurationStrategy uriResolver (advanced) To use a custom UriResolver. Should not be used together with the option 'uriResolverFactory'. URIResolver uriResolverFactory (advanced) To use a custom UriResolver which depends on a dynamic endpoint resource URI. Should not be used together with the option 'uriResolver'. XsltUriResolverFactory 72.4. Endpoint Options The XSLT endpoint is configured using URI syntax: with the following path and query parameters: 72.4.1. Path Parameters (1 parameters) Name Description Default Type resourceUri (producer) Required Path to the template. The following is supported by the default URIResolver. You can prefix with: classpath, file, http, ref, or bean. classpath, file and http loads the resource using these protocols (classpath is default). ref will lookup the resource in the registry. bean will call a method on a bean to be used as the resource. For bean you can specify the method name after dot, eg bean:myBean.myMethod. String 72.4.2. Query Parameters (13 parameters) Name Description Default Type contentCache (producer) Cache for the resource content (the stylesheet file) when it is loaded. If set to false Camel will reload the stylesheet file on each message processing. This is good for development. A cached stylesheet can be forced to reload at runtime via JMX using the clearCachedStylesheet operation. true boolean deleteOutputFile (producer) If you have output=file then this option dictates whether or not the output file should be deleted when the Exchange is done processing. For example suppose the output file is a temporary file, then it can be a good idea to delete it after use. false boolean failOnNullBody (producer) Whether or not to throw an exception if the input body is null. true boolean lazyStartProducer (producer) Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. false boolean output (producer) Option to specify which output type to use. Possible values are: string, bytes, DOM, file. The first three options are all in memory based, where as file is streamed directly to a java.io.File. For file you must specify the filename in the IN header with the key Exchange.XSLT_FILE_NAME which is also CamelXsltFileName. Also any paths leading to the filename must be created beforehand, otherwise an exception is thrown at runtime. 
Enum values: string bytes DOM file string XsltOutput transformerCacheSize (producer) The number of javax.xml.transform.Transformer object that are cached for reuse to avoid calls to Template.newTransformer(). 0 int entityResolver (advanced) To use a custom org.xml.sax.EntityResolver with javax.xml.transform.sax.SAXSource. EntityResolver errorListener (advanced) Allows to configure to use a custom javax.xml.transform.ErrorListener. Beware when doing this then the default error listener which captures any errors or fatal errors and store information on the Exchange as properties is not in use. So only use this option for special use-cases. ErrorListener resultHandlerFactory (advanced) Allows you to use a custom org.apache.camel.builder.xml.ResultHandlerFactory which is capable of using custom org.apache.camel.builder.xml.ResultHandler types. ResultHandlerFactory transformerFactory (advanced) To use a custom XSLT transformer factory. TransformerFactory transformerFactoryClass (advanced) To use a custom XSLT transformer factory, specified as a FQN class name. String transformerFactoryConfigurationStrategy (advanced) A configuration strategy to apply on freshly created instances of TransformerFactory. TransformerFactoryConfigurationStrategy uriResolver (advanced) To use a custom javax.xml.transform.URIResolver. URIResolver 72.5. Using XSLT endpoints The following format is an example of using an XSLT template to formulate a response for a message for InOut message exchanges (where there is a JMSReplyTo header) from("activemq:My.Queue"). to("xslt:com/acme/mytransform.xsl"); If you want to use InOnly and consume the message and send it to another destination you could use the following route: from("activemq:My.Queue"). to("xslt:com/acme/mytransform.xsl"). to("activemq:Another.Queue"); 72.6. Getting Useable Parameters into the XSLT By default, all headers are added as parameters which are then available in the XSLT. To make the parameters useable, you will need to declare them. <setHeader name="myParam"><constant>42</constant></setHeader> <to uri="xslt:MyTransform.xsl"/> The parameter also needs to be declared in the top level of the XSLT for it to be available: <xsl: ...... > <xsl:param name="myParam"/> <xsl:template ...> 72.7. Spring XML versions To use the above examples in Spring XML you would use something like the following code: <camelContext xmlns="http://activemq.apache.org/camel/schema/spring"> <route> <from uri="activemq:My.Queue"/> <to uri="xslt:org/apache/camel/spring/processor/example.xsl"/> <to uri="activemq:Another.Queue"/> </route> </camelContext> 72.8. Using xsl:include Camel provides its own implementation of URIResolver . This allows Camel to load included files from the classpath. For example the include file in the following code will be located relative to the starting endpoint. <xsl:include href="staff_template.xsl"/> This means that Camel will locate the file in the classpath as org/apache/camel/component/xslt/staff_template.xsl You can use classpath: or file: to instruct Camel to look either in the classpath or file system. If you omit the prefix then Camel uses the prefix from the endpoint configuration. If no prefix is specified in the endpoint configuration, the default is classpath: . You can also refer backwards in the include paths. In the following example, the xsl file will be resolved under org/apache/camel/component . <xsl:include href="../staff_other_template.xsl"/> 72.9. 
Using xsl:include and default prefix Camel will use the prefix from the endpoint configuration as the default prefix. You can explicitly specify file: or classpath: loading. The two loading types can be mixed in a XSLT script, if necessary. 72.10. Dynamic stylesheets To provide a dynamic stylesheet at runtime you can define a dynamic URI. See How to use a dynamic URI in to() for more information. 72.11. Accessing warnings, errors and fatalErrors from XSLT ErrorListener Any warning/error or fatalError is stored on the current Exchange as a property with the keys Exchange.XSLT_ERROR , Exchange.XSLT_FATAL_ERROR , or Exchange.XSLT_WARNING which allows end users to get hold of any errors happening during transformation. For example in the stylesheet below, we want to terminate if a staff has an empty dob field. And to include a custom error message using xsl:message. <xsl:template match="/"> <html> <body> <xsl:for-each select="staff/programmer"> <p>Name: <xsl:value-of select="name"/><br /> <xsl:if test="dob=''"> <xsl:message terminate="yes">Error: DOB is an empty string!</xsl:message> </xsl:if> </p> </xsl:for-each> </body> </html> </xsl:template> The exception is stored on the Exchange as a warning with the key Exchange.XSLT_WARNING. 72.12. Spring Boot Auto-Configuration When using xslt with Spring Boot make sure to use the following Maven dependency to have support for auto configuration: <dependency> <groupId>org.apache.camel.springboot</groupId> <artifactId>camel-xslt-starter</artifactId> </dependency> The component supports 8 options, which are listed below. Name Description Default Type camel.component.xslt.autowired-enabled Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc. true Boolean camel.component.xslt.content-cache Cache for the resource content (the stylesheet file) when it is loaded. If set to false Camel will reload the stylesheet file on each message processing. This is good for development. A cached stylesheet can be forced to reload at runtime via JMX using the clearCachedStylesheet operation. true Boolean camel.component.xslt.enabled Whether to enable auto configuration of the xslt component. This is enabled by default. Boolean camel.component.xslt.lazy-start-producer Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. false Boolean camel.component.xslt.transformer-factory-class To use a custom XSLT transformer factory, specified as a FQN class name. String camel.component.xslt.transformer-factory-configuration-strategy A configuration strategy to apply on freshly created instances of TransformerFactory. The option is a org.apache.camel.component.xslt.TransformerFactoryConfigurationStrategy type. 
TransformerFactoryConfigurationStrategy camel.component.xslt.uri-resolver To use a custom UriResolver. Should not be used together with the option 'uriResolverFactory'. The option is a javax.xml.transform.URIResolver type. URIResolver camel.component.xslt.uri-resolver-factory To use a custom UriResolver which depends on a dynamic endpoint resource URI. Should not be used together with the option 'uriResolver'. The option is a org.apache.camel.component.xslt.XsltUriResolverFactory type. XsltUriResolverFactory
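As a short illustration of the options above, the following Java route is a minimal sketch and is not taken from the component documentation: it passes a message header as an XSLT parameter (section 72.6) and chooses the stylesheet dynamically (section 72.10). The endpoint name direct:transform and the stylesheet header are illustrative assumptions.

import org.apache.camel.builder.RouteBuilder;

public class XsltExampleRoute extends RouteBuilder {
    @Override
    public void configure() {
        from("direct:transform")
            // Headers become XSLT parameters, provided the stylesheet
            // declares them with <xsl:param name="myParam"/>.
            .setHeader("myParam", constant(42))
            // toD() resolves the endpoint URI per message, so the stylesheet
            // to apply can be supplied in the "stylesheet" header at runtime.
            .toD("xslt:${header.stylesheet}");
    }
}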
[ "xslt:templateName[?options]", "xslt:resourceUri", "from(\"activemq:My.Queue\"). to(\"xslt:com/acme/mytransform.xsl\");", "from(\"activemq:My.Queue\"). to(\"xslt:com/acme/mytransform.xsl\"). to(\"activemq:Another.Queue\");", "<setHeader name=\"myParam\"><constant>42</constant></setHeader> <to uri=\"xslt:MyTransform.xsl\"/>", "<xsl: ...... > <xsl:param name=\"myParam\"/> <xsl:template ...>", "<camelContext xmlns=\"http://activemq.apache.org/camel/schema/spring\"> <route> <from uri=\"activemq:My.Queue\"/> <to uri=\"xslt:org/apache/camel/spring/processor/example.xsl\"/> <to uri=\"activemq:Another.Queue\"/> </route> </camelContext>", "<xsl:include href=\"staff_template.xsl\"/>", "<xsl:include href=\"../staff_other_template.xsl\"/>", "<xsl:template match=\"/\"> <html> <body> <xsl:for-each select=\"staff/programmer\"> <p>Name: <xsl:value-of select=\"name\"/><br /> <xsl:if test=\"dob=''\"> <xsl:message terminate=\"yes\">Error: DOB is an empty string!</xsl:message> </xsl:if> </p> </xsl:for-each> </body> </html> </xsl:template>", "<dependency> <groupId>org.apache.camel.springboot</groupId> <artifactId>camel-xslt-starter</artifactId> </dependency>" ]
https://docs.redhat.com/en/documentation/red_hat_build_of_apache_camel_for_spring_boot/3.20/html/camel_spring_boot_reference/csb-camel-xslt-component-starter
14.8.11. Restore a Guest Virtual Machine
14.8.11. Restore a Guest Virtual Machine Restore a guest virtual machine previously saved with the virsh save command ( Section 14.8.7, "Save a Guest Virtual Machine" ) using the virsh restore command: This restarts the saved guest virtual machine, which may take some time. The guest virtual machine's name and UUID are preserved, but the domain is allocated a new ID. The virsh restore state-file command can take the following options: --bypass-cache - causes the restore to avoid the file system cache; note that using this option may slow down the restore operation. --xml - this option must be used with an XML file name. Although this option is usually omitted, it can be used to supply an alternative XML file for use on a restored guest virtual machine with changes only in the host-specific portions of the domain XML. For example, it can be used to account for the file naming differences in underlying storage due to disk snapshots taken after the guest was saved. --running - overrides the state recorded in the save image to start the domain as running. --paused - overrides the state recorded in the save image to start the domain as paused.
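The following is a minimal sketch of a save and restore round trip using the options described above; the guest name and state-file path are illustrative assumptions, not values from this guide.

#!/usr/bin/env bash
# Save a running guest to disk (this stops the guest), then restore it later.
GUEST=rhel6-guest                               # placeholder guest name
STATE_FILE=/var/lib/libvirt/save/${GUEST}.save  # placeholder state-file path

virsh save "${GUEST}" "${STATE_FILE}"

# ... host maintenance happens here ...

# Restore the guest, avoiding the file system cache and forcing it to resume
# in the running state regardless of what was recorded in the save image.
virsh restore "${STATE_FILE}" --bypass-cache --running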
[ "virsh restore state-file" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/virtualization_administration_guide/sub-sect-starting_suspending_resuming_saving_and_restoring_a_guest_virtual_machine-restore_a_guest_virtual_machine
Chapter 13. Troubleshooting builds
Chapter 13. Troubleshooting builds Use the following to troubleshoot build issues. 13.1. Resolving denial for access to resources If your request for access to resources is denied: Issue A build fails with: requested access to the resource is denied Resolution You have exceeded one of the image quotas set on your project. Check your current quota and verify the limits applied and storage in use: USD oc describe quota 13.2. Service certificate generation failure If service certificate generation fails: Issue A service certificate generation fails and the service's service.beta.openshift.io/serving-cert-generation-error annotation contains an error similar to the following: Example output secret/ssl-key references serviceUID 62ad25ca-d703-11e6-9d6f-0e9c0057b608, which does not match 77b6dd80-d716-11e6-9d6f-0e9c0057b60 Resolution The service that generated the certificate no longer exists, or has a different serviceUID . You must force certificate regeneration by removing the old secret, and clearing the following annotations on the service: service.beta.openshift.io/serving-cert-generation-error and service.beta.openshift.io/serving-cert-generation-error-num . To clear the annotations, enter the following commands: USD oc delete secret <secret_name> USD oc annotate service <service_name> service.beta.openshift.io/serving-cert-generation-error- USD oc annotate service <service_name> service.beta.openshift.io/serving-cert-generation-error-num- Note The command that removes an annotation has a - after the annotation name to be removed.
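The resolution steps can be combined into one short sequence. The following is a minimal sketch using placeholder service, secret, and namespace names; it only uses the commands shown above, with both annotations cleared in a single oc annotate call.

#!/usr/bin/env bash
SERVICE=my-service        # placeholder service name
SECRET=my-service-tls     # placeholder secret name
NAMESPACE=my-namespace    # placeholder namespace

# Remove the stale secret so the serving-cert controller recreates it.
oc delete secret "${SECRET}" -n "${NAMESPACE}"

# Clear both error annotations; the trailing '-' removes an annotation.
oc annotate service "${SERVICE}" -n "${NAMESPACE}" \
  service.beta.openshift.io/serving-cert-generation-error- \
  service.beta.openshift.io/serving-cert-generation-error-num-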
[ "requested access to the resource is denied", "oc describe quota", "secret/ssl-key references serviceUID 62ad25ca-d703-11e6-9d6f-0e9c0057b608, which does not match 77b6dd80-d716-11e6-9d6f-0e9c0057b60", "oc delete secret <secret_name>", "oc annotate service <service_name> service.beta.openshift.io/serving-cert-generation-error-", "oc annotate service <service_name> service.beta.openshift.io/serving-cert-generation-error-num-" ]
https://docs.redhat.com/en/documentation/openshift_container_platform/4.18/html/builds_using_buildconfig/troubleshooting-builds_build-configuration
Chapter 9. Red Hat Virtualization 4.3 Batch Update 7 (ovirt-4.3.10)
Chapter 9. Red Hat Virtualization 4.3 Batch Update 7 (ovirt-4.3.10) 9.1. Red Hat Virtualization Host 4.3.10 Image The following table outlines the packages included in the redhat-virtualization-host-4.3.10 image. Table 9.1. Red Hat Virtualization Host 4.3.10 Image Name Version Advisory GeoIP 1.5.0-14.el7 RHBA-2019:2224 NetworkManager 1.18.4-3.el7 RHBA-2020:1162 OpenIPMI 2.0.27-1.el7 RHBA-2019:2090 PyYAML 3.10-11.el7 RHBA-2015:1183 Red_Hat_Enterprise_Linux-Release_Notes-7-en-US 7-2.el7 RHEA-2015:2461 abrt 2.1.11-57.el7 RHBA-2020:1040 acl 2.2.51-15.el7 RHBA-2020:1023 aide 0.15.1-13.el7 RHBA-2017:2050 alsa-firmware 1.0.28-2.el7 RHBA-2015:0461 alsa-lib 1.1.8-1.el7 RHBA-2019:2204 alsa-tools 1.1.0-1.el7 RHEA-2016:2437 ansible 2.9.9-1.el7ae RHBA-2020:2139 attr 2.4.46-13.el7 RHBA-2018:0768 audit 2.8.5-4.el7 RHBA-2019:2191 augeas 1.4.0-9.el7_8.1 RHBA-2020:2087 authconfig 6.2.8-30.el7 RHSA-2017:2285 autofs 5.0.7-109.el7 RHBA-2020:1106 autogen 5.18-5.el7 RHBA-2014:17018 avahi 0.6.31-20.el7 RHSA-2020:1176 bash 4.2.46-34.el7 RHSA-2020:1113 bind 9.11.4-16.P2.el7_8.3 RHBA-2020:2086 binutils 2.27-43.base.el7_8.1 RHBA-2020:2075 biosdevname 0.7.3-2.el7 RHBA-2019:2291 boost 1.53.0-28.el7 RHBA-2020:1072 btrfs-progs 4.9.1-1.el7 RHBA-2017:2268 bzip2 1.0.6-13.el7 RHBA-2015:2156 ca-certificates 2019.2.32-76.el7_7 RHBA-2019:3970 cdrkit 1.1.11-25.el7 RHBA-2018:3109 ceph-common 10.2.5-4.el7 RHBA-2018:3189 certmonger 0.78.4-12.el7 RHBA-2020:1052 checkpolicy 2.5-8.el7 RHBA-2018:3099 chkconfig 1.7.4-1.el7 RHBA-2017:2164 chrony 3.4-1.el7 RHBA-2019:2076 clevis 7-8.el7 RHBA-2018:3298 cockpit 195.5-2.el7 RHBA-2020:1219 cockpit-ovirt 0.13.10-1.el7ev RHBA-2020:1310 collectd 5.8.1-3.el7ost RHBA-2019:3183 coolkey 1.1.0-40.el7 RHBA-2018:3263 coreutils 8.22-24.el7 RHBA-2019:2217 cpio 2.11-27.el7 RHBA-2018:0693 cracklib 2.9.0-11.el7 RHBA-2013:16149 cronie 1.4.11-23.el7 RHBA-2019:2041 cryptsetup 2.0.3-6.el7 RHBA-2020:1130 cups 1.6.3-43.el7 RHSA-2020:1050 curl 7.29.0-57.el7 RHSA-2020:1020 cyrus-sasl 2.1.26-23.el7 RHBA-2018:0777 dbus 1.10.24-13.el7_6 RHBA-2019:0509 dbus-python 1.1.1-9.el7 RHBA-2014:17319 desktop-file-utils 0.23-2.el7 RHBA-2019:2044 device-mapper-multipath 0.4.9-131.el7 RHBA-2020:1066 device-mapper-persistent-data 0.8.5-2.el7 RHBA-2020:1188 dhcp 4.2.5-79.el7 RHBA-2020:1087 diffutils 3.3-5.el7 RHBA-2019:2032 ding-libs 0.6.1-32.el7 RHBA-2018:3160 dmidecode 3.2-3.el7 RHBA-2019:2025 dmraid 1.0.0.rc16-28.el7 RHBA-2016:2552 dnsmasq 2.76-10.el7_7.1 RHBA-2019:3056 dosfstools 3.0.20-10.el7 RHBA-2018:3069 dracut 033-568.el7 RHBA-2020:1139 e2fsprogs 1.42.9-17.el7 RHBA-2020:1198 ebtables 2.0.10-16.el7 RHBA-2018:0941 efibootmgr 17-2.el7 RHEA-2018:3171 efivar 36-12.el7 RHBA-2019:2023 elfutils 0.176-4.el7 RHBA-2020:0987 emacs 24.3-23.el7 RHSA-2020:1180 ethtool 4.8-10.el7 RHEA-2019:2214 expat 2.1.0-11.el7 RHSA-2020:1011 fcoe-utils 1.0.32-2.el7 RHBA-2019:2119 fence-agents 4.2.1-30.el7 RHBA-2020:1111 fence-virt 0.3.2-14.el7 RHBA-2019:2105 file 5.11-36.el7 RHSA-2020:1022 filesystem 3.2-25.el7 RHEA-2018:0838 findutils 4.5.11-6.el7 RHBA-2018:3076 fipscheck 1.4.1-6.el7 RHBA-2017:1971 firewalld 0.6.3-8.el7_8.1 RHBA-2020:1203 freetype 2.8-14.el7 RHBA-2019:2021 fuse 2.9.2-11.el7 RHSA-2018:3324 gawk 4.0.2-4.el7_3.1 RHBA-2017:1618 gcc 4.8.5-39.el7 RHBA-2019:2167 gdb 7.6.1-119.el7 RHBA-2020:0993 gdisk 0.8.10-3.el7 RHBA-2019:2042 geoipupdate 2.5.0-1.el7 RHEA-2019:2222 gettext 0.19.8.1-17.el7 RHSA-2020:1138 glib-networking 2.56.1-1.el7 RHSA-2018:3140 glib2 2.56.1-5.el7 RHBA-2019:2044 glibc 2.17-307.el7.1 RHBA-2020:0989 gluster-ansible-cluster 
1.0-1.el7rhgs RHBA-2018:3428 gluster-ansible-features 1.0.5-5.el7rhgs RHBA-2020:0289 gluster-ansible-infra 1.0.4-5.el7rhgs RHBA-2020:0289 gluster-ansible-maintenance 1.0.1-1.el7rhgs RHBA-2018:3428 gluster-ansible-repositories 1.0.1-1.el7rhgs RHBA-2019:2557 gluster-ansible-roles 1.0.5-7.el7rhgs RHBA-2020:0289 glusterfs 6.0-30.1.el7rhgs RHBA-2020:0778 gmp 6.0.0-15.el7 RHBA-2017:2069 gnupg2 2.0.22-5.el7_5 RHSA-2018:2181 gnutls 3.3.29-9.el7_6 RHBA-2019:0518 gobject-introspection 1.56.1-1.el7 RHSA-2018:3140 gofer 2.12.5-7.el7sat RHBA-2020:1455 gperftools 2.6.1-1.el7 RHBA-2018:0870 grep 2.20-3.el7 RHBA-2017:2200 grub2 2.02-0.81.el7 RHBA-2020:1161 grubby 8.28-26.el7 RHBA-2019:2227 gsettings-desktop-schemas 3.28.0-3.el7 RHSA-2020:1021 gssproxy 0.7.0-28.el7 RHBA-2020:1149 gzip 1.5-10.el7 RHBA-2018:0719 hivex 1.3.10-6.9.el7 RHBA-2018:0787 hmaccalc 0.9.13-4.el7 RHBA-2013:16026 hostname 3.13-3.el7_7.1 RHBA-2019:3054 http-parser 2.7.1-8.el7_7.2 RHSA-2020:0703 hwdata 0.252-9.5.el7 RHBA-2020:1009 imgbased 1.1.15-0.1.el7ev RHBA-2020:2399 initscripts 9.49.49-1.el7 RHBA-2020:1042 insights-client 3.0.13-1.el7 RHBA-2020:1201 ioprocess 1.3.1-1.el7ev RHBA-2020:0499 iotop 0.6-4.el7 RHBA-2018:3301 ipa 4.6.6-11.el7 RHBA-2020:1083 iperf3 3.1.7-2.el7 RHEA-2017:2065 ipmitool 1.8.18-9.el7_7 RHSA-2020:0984 iproute 4.11.0-25.el7_7.2 RHBA-2019:3985 iprutils 2.4.17.1-3.el7 RHBA-2020:1031 ipset 7.1-1.el7 RHBA-2019:2158 iptables 1.4.21-34.el7 RHBA-2020:1174 iputils 20160308-10.el7 RHBA-2017:1987 ipxe 20180825-2.git133f4c.el7 RHBA-2019:2059 irqbalance 1.0.7-12.el7 RHBA-2019:2179 iscsi-initiator-utils 6.2.0.874-17.el7 RHBA-2020:1124 jansson 2.10-1.el7 RHBA-2017:2195 jose 10-1.el7 RHBA-2018:0819 json-c 0.11-4.el7_0 RHSA-2014:0703 json-glib 1.4.2-2.el7 RHSA-2018:3140 katello-host-tools 3.5.1-2.el7sat RHBA-2019:3175 kbd 1.15.5-15.el7 RHBA-2018:3219 kernel 3.10.0-1127.8.2.el7 RHSA-2020:2082 kexec-tools 2.0.15-43.el7 RHBA-2020:1077 keyutils 1.5.8-3.el7 RHEA-2013:16405 kmod 20-28.el7 RHBA-2020:1142 kmod-kvdo 6.1.3.7-5.el7 RHBA-2020:1105 krb5 1.15.1-46.el7 RHBA-2020:1029 less 458-9.el7 RHBA-2015:1521 libX11 1.6.7-2.el7 RHSA-2019:2079 libXext 1.3.3-3.el7 RHBA-2015:2082 libXfixes 5.0.3-1.el7 RHSA-2017:1865 libXxf86vm 1.1.4-1.el7 RHSA-2017:1865 libaio 0.3.109-13.el7 RHBA-2015:2162 libarchive 3.1.2-14.el7_7 RHSA-2020:0203 libblockdev 2.18-5.el7 RHBA-2020:1098 libbytesize 1.2-1.el7 RHBA-2018:0868 libcacard 2.7.0-1.el7 RHEA-2020:1159 libcap 2.22-11.el7 RHBA-2020:1171 libcap-ng 0.7.5-4.el7 RHBA-2015:2161 libcgroup 0.41-21.el7 RHSA-2019:2047 libcroco 0.6.12-4.el7 RHSA-2018:3140 libdb 5.3.21-25.el7 RHBA-2019:2121 libdrm 2.4.97-2.el7 RHEA-2019:2120 libepoxy 1.5.2-1.el7 RHSA-2018:3059 libestr 0.1.9-2.el7 RHEA-2014:16758 libfastjson 0.99.4-3.el7 RHEA-2018:3135 libffi 3.0.13-19.el7 RHBA-2020:1090 libgcrypt 1.5.3-14.el7 RHBA-2017:2006 libglvnd 1.0.1-0.8.git5baa1e5.el7 RHSA-2018:3059 libguestfs 1.40.2-9.el7 RHBA-2020:1082 libguestfs-winsupport 7.2-3.el7 RHSA-2019:2308 libidn 1.28-4.el7 RHBA-2015:2100 libiscsi 1.9.0-7.el7 RHBA-2016:2416 libjpeg-turbo 1.2.90-8.el7 RHSA-2019:2052 libldb 1.5.4-1.el7 RHBA-2020:1059 liblognorm 2.0.2-3.el7 RHEA-2018:3135 libndp 1.2-9.el7 RHEA-2019:2065 libnetfilter_conntrack 1.0.6-1.el7_3 RHBA-2017:1301 libnfsidmap 0.25-19.el7 RHBA-2018:1016 libnl3 3.2.28-4.el7 RHBA-2017:27637 libosinfo 1.1.0-5.el7 RHSA-2020:1051 libpcap 1.5.3-12.el7 RHBA-2020:1043 libpciaccess 0.14-1.el7 RHBA-2018:0736 libpng 1.5.13-7.el7_2 RHSA-2015:2596 libproxy 0.4.11-11.el7 RHBA-2018:0746 libpwquality 1.2.3-5.el7 RHBA-2018:1014 libreport 
2.1.11-53.el7 RHBA-2020:1040 libseccomp 2.3.1-4.el7 RHBA-2020:1194 libselinux 2.5-15.el7 RHEA-2020:1165 libsemanage 2.5-14.el7 RHBA-2018:3088 libsepol 2.5-10.el7 RHBA-2018:3077 libssh 0.7.1-7.el7 RHBA-2018:3682 libssh2 1.8.0-3.el7 RHSA-2019:2136 libtalloc 2.1.16-1.el7 RHBA-2020:0991 libtar 1.2.11-29.el7 RHBA-2015:1014 libtasn1 4.10-1.el7 RHSA-2017:1860 libtdb 1.3.18-1.el7 RHBA-2020:1001 libteam 1.29-1.el7 RHBA-2020:1133 libtevent 0.9.39-1.el7 RHBA-2020:1056 libtirpc 0.2.4-0.16.el7 RHBA-2019:2061 libusbx 1.0.21-1.el7 RHBA-2018:0762 libuser 0.60-9.el7 RHBA-2018:1029 libvirt 4.5.0-33.el7_8.1 RHBA-2020:2089 libvirt-python 4.5.0-1.el7 RHEA-2018:3204 libxcb 1.13-1.el7 RHSA-2018:3059 libxml2 2.9.1-6.el7.4 RHSA-2020:1190 libxshmfence 1.2-1.el7 RHBA-2015:2082 libxslt 1.1.28-5.el7 RHEA-2015:0144 libyaml 0.1.4-11.el7_0 RHSA-2015:0100 linux-firmware 20191203-76.gite8a0f4c.el7 RHBA-2020:1104 lldpad 1.0.1-5.git036e314.el7 RHBA-2019:2339 llvm-private 7.0.1-1.el7 RHBA-2019:2107 lm_sensors 3.4.0-8.20160601gitf9185e5.el7 RHEA-2019:2084 logrotate 3.8.6-19.el7 RHBA-2020:1024 lshw B.02.18-14.el7 RHBA-2020:1095 lsof 4.87-6.el7 RHBA-2018:3046 lsscsi 0.27-6.el7 RHBA-2017:2001 lua 5.1.4-15.el7 RHBA-2016:2568 luksmeta 8-2.el7 RHEA-2018:3325 lvm2 2.02.186-7.el7_8.2 RHBA-2020:2100 lz4 1.7.5-3.el7 RHBA-2019:2209 lzo 2.06-8.el7 RHBA-2015:2112 m2crypto 0.21.1-17.el7 RHBA-2015:2165 mailx 12.5-19.el7 RHBA-2018:0779 make 3.82-24.el7 RHBA-2019:2056 man-db 2.6.3-11.el7 RHBA-2018:3060 mariadb 5.5.65-1.el7 RHSA-2020:1100 mdadm 4.1-4.el7 RHBA-2020:1191 memtest86+ 5.01-2.el7 RHBA-2016:2256 mesa 18.3.4-7.el7_8.1 RHBA-2020:2088 microcode_ctl 2.1-61.el7 RHEA-2020:1166 mom 0.5.12-1.el7ev RHBA-2018:1557 mozjs17 17.0.0-20.el7 RHBA-2018:0745 mpfr 3.1.1-4.el7 RHBA-2019:1014 nbdkit 1.8.0-3.el7 RHSA-2020:1167 ncurses 5.9-14.20130511.el7_4 RHBA-2017:2586 net-snmp 5.7.2-48.el7_8 RHBA-2020:1213 netcf 0.2.8-4.el7 RHBA-2017:2220 nettle 2.7.1-8.el7 RHSA-2016:2582 nfs-utils 1.3.0-0.66.el7 RHBA-2020:1154 nmap 6.40-19.el7 RHBA-2019:2072 nspr 4.21.0-1.el7 RHSA-2019:2237 nss 3.44.0-7.el7_7 RHSA-2019:4190 nss-pem 1.0.3-7.el7 RHBA-2019:2175 nss-softokn 3.44.0-8.el7_7 RHSA-2019:4190 nss-util 3.44.0-4.el7_7 RHSA-2019:4190 ntp 4.2.6p5-29.el7 RHSA-2019:2077 numactl 2.0.12-5.el7 RHBA-2020:1163 numad 0.5-18.20150602git.el7 RHBA-2018:0996 oddjob 0.31.5-4.el7 RHBA-2015:0446 openldap 2.4.44-21.el7_6 RHBA-2019:0191 opensc 0.19.0-3.el7 RHSA-2019:2154 openscap 1.2.17-9.el7 RHBA-2020:1183 openssh 7.4p1-21.el7 RHSA-2019:2143 openssl 1.0.2k-19.el7 RHSA-2019:2304 openvswitch-selinux-extra-policy 1.0-15.el7fdp RHBA-2020:0741 openvswitch2.11 2.11.0-48.el7fdp RHBA-2020:0743 openwsman 2.6.3-6.git4391e5c.el7_6 RHSA-2019:0638 opus 1.0.2-6.el7 RHBA-2013:16232 os-prober 1.58-9.el7 RHBA-2016:2351 osinfo-db 20190805-2.el7 RHSA-2020:1021 osinfo-db-tools 1.1.0-1.el7 RHBA-2017:2113 otopi 1.8.4-1.el7ev RHBA-2019:4229 ovirt-ansible-engine-setup 1.1.9-1.el7ev RHBA-2019:1247 ovirt-ansible-hosted-engine-setup 1.0.32-1.el7ev RHBA-2019:4233 ovirt-ansible-repositories 1.1.5-1.el7ev RHBA-2019:1247 ovirt-host 4.3.5-1.el7ev RHBA-2019:4230 ovirt-host-deploy 1.8.5-1.el7ev RHBA-2020:1306 ovirt-hosted-engine-ha 2.3.6-1.el7ev RHBA-2019:4230 ovirt-hosted-engine-setup 2.3.13-1.el7ev RHBA-2020:1307 ovirt-imageio-common 1.5.3-0.el7ev RHBA-2020:0499 ovirt-imageio-daemon 1.5.3-0.el7ev RHBA-2020:0499 ovirt-node-ng 4.3.7-0.20191031.0.el7ev RHBA-2019:4231 ovirt-provider-ovn 1.2.29-1.el7ev RHBA-2020:0500 ovirt-setup-lib 1.2.0-1.el7ev RHEA-2019:1043 ovirt-vmconsole 1.0.7-3.el7ev RHBA-2019:1577 ovmf 
20180508-6.gitee3198e672e2.el7 RHSA-2019:2125 ovn2.11 2.11.1-33.el7fdp RHBA-2020:0750 p11-kit 0.23.5-3.el7 RHEA-2017:1981 pam 1.1.8-23.el7 RHBA-2020:1005 pam_pkcs11 0.6.2-30.el7 RHBA-2018:3258 parted 3.1-32.el7 RHBA-2020:1186 passwd 0.79-6.el7 RHBA-2020:1058 patch 2.7.1-12.el7_7 RHSA-2019:2964 pciutils 3.5.1-3.el7 RHBA-2018:0950 pcre 8.32-17.el7 RHBA-2017:1909 pcsc-lite 1.8.8-8.el7 RHBA-2018:3257 pcsc-lite-ccid 1.4.10-15.el7 RHBA-2019:2248 perl 5.16.3-295.el7 RHBA-2020:0999 perl-Encode 2.51-7.el7 RHBA-2014:16671 perl-Getopt-Long 2.40-3.el7 RHBA-2018:0752 perl-Socket 2.010-5.el7 RHBA-2020:0997 perl-Storable 2.45-3.el7 RHBA-2013:16436 perl-Time-HiRes 1.9725-3.el7 RHBA-2013:16343 pinentry 0.8.1-17.el7 RHBA-2016:2226 pixman 0.34.0-1.el7 RHBA-2016:2293 plymouth 0.8.9-0.33.20140113.el7 RHBA-2020:1140 policycoreutils 2.5-34.el7 RHBA-2020:1157 polkit 0.112-26.el7 RHSA-2020:1135 postfix 2.10.1-9.el7 RHBA-2020:1004 procps-ng 3.3.10-27.el7 RHBA-2020:1018 psmisc 22.20-16.el7 RHBA-2019:2225 pth 2.0.7-23.el7 RHBA-2014:18505 pyOpenSSL 17.3.0-4.el7ost RHBA-2018:3633 pygobject3 3.22.0-1.el7_4.1 RHEA-2017:3254 pygpgme 0.3-9.el7 RHBA-2014:17054 pykickstart 1.99.66.21-1.el7 RHBA-2019:2074 pyliblzma 0.5.3-11.el7 RHBA-2014:16898 pyparted 3.9-15.el7 RHBA-2018:0923 python 2.7.5-88.el7 RHSA-2020:1131 python-asn1crypto 0.23.0-2.el7ost RHBA-2018:3633 python-augeas 0.5.0-2.el7 RHBA-2015:2133 python-backports 1.0-8.el7 RHBA-2015:0576 python-backports-ssl_match_hostname 3.5.0.1-1.el7 RHBA-2018:0930 python-blivet 0.61.15.75-1.el7 RHBA-2020:1060 python-cffi 1.11.2-1.el7ost RHBA-2018:3633 python-chardet 2.2.1-3.el7 RHBA-2019:2068 python-cryptography 2.1.4-3.el7ost RHBA-2018:3633 python-daemon 1.6-5.el7 RHBA-2015:0203 python-dmidecode 3.12.2-4.el7 RHBA-2020:1096 python-dns 1.12.0-4.20150617git465785f.el7 RHBA-2017:1945 python-enum34 1.0.4-1.el7 RHEA-2015:2299 python-ethtool 0.8-8.el7 RHBA-2019:2067 python-futures 3.1.1-5.el7 RHEA-2018:3162 python-gssapi 1.2.0-3.el7 RHBA-2017:2269 python-gudev 147.2-7.el7 RHBA-2013:16317 python-idna 2.5-1.el7ost RHBA-2018:3633 python-ipaddr 2.1.11-2.el7 RHBA-2019:2347 python-ipaddress 1.0.16-2.el7 RHBA-2016:2290 python-jinja2 2.7.2-4.el7 RHBA-2019:2313 python-jmespath 0.9.0-4.el7ae RHBA-2017:3119 python-jwcrypto 0.4.2-1.el7 RHEA-2018:0723 python-ldap 2.4.15-2.el7 RHBA-2015:0531 python-linux-procfs 0.4.11-4.el7 RHBA-2019:2038 python-lockfile 0.9.1-5.el7 RHBA-2015:0208 python-netaddr 0.7.19-5.el7ost RHBA-2019:1063 python-netifaces 0.10.4-3.el7 RHBA-2016:2267 python-nss 0.16.0-3.el7 RHBA-2015:2357 python-ovirt-engine-sdk4 4.3.4-1.el7ev RHBA-2020:2398 python-paramiko 2.1.1-9.el7 RHSA-2018:3347 python-passlib 1.6.5-1.1.el7 RHBA-2017:0445 python-pexpect 4.6-1.el7at RHBA-2018:2815 python-ply 3.4-11.el7 RHBA-2017:2304 python-prettytable 0.7.2-3.el7 RHBA-2017:2274 python-pthreading 0.1.3-3.el7ev RHBA-2014:0986 python-ptyprocess 0.5.2-3.el7at RHBA-2018:2815 python-pyasn1 0.1.9-7.el7 RHEA-2016:2315 python-pycparser 2.14-1.el7 RHEA-2015:2331 python-pycurl 7.19.0-19.el7 RHBA-2016:2156 python-pyudev 0.15-9.el7 RHBA-2017:2188 python-qrcode 5.0.1-1.el7 RHEA-2015:0488 python-requests 2.6.0-9.el7_8 RHBA-2020:1210 python-rpm-macros 3-32.el7 RHBA-2019:2146 python-schedutils 0.4-6.el7 RHEA-2016:2393 python-setuptools 0.9.8-7.el7 RHBA-2017:1900 python-six 1.10.0-9.el7ost RHBA-2018:3633 python-slip 0.4.0-4.el7 RHBA-2018:0728 python-subprocess32 3.2.6-14.el7 RHBA-2019:2036 python-urlgrabber 3.10-10.el7 RHBA-2020:1123 python-urllib3 1.10.2-7.el7 RHSA-2019:2272 python-webob 1.2.3-7.el7 RHBA-2017:1890 python-yubico 
1.2.3-1.el7 RHBA-2015:2304 pyusb 1.0.0-0.11.b1.el7 RHEA-2015:0500 qemu-guest-agent 2.12.0-3.el7 RHBA-2019:2124 qemu-kvm-rhev 2.12.0-44.el7_8.2 RHBA-2020:2137 qpid-proton 0.28.0-2.el7 RHBA-2019:4103 quota 4.01-19.el7 RHEA-2019:2093 radvd 2.17-3.el7 RHBA-2018:3027 rdma-core 22.4-2.el7_8 RHBA-2020:1212 readline 6.2-11.el7 RHBA-2019:2208 redhat-logos 70.7.0-1.el7 RHBA-2019:2296 redhat-release-eula 7.8-0.el7 RHBA-2020:1033 redhat-release-virtualization-host 4.3.10-1.el7ev RHBA-2020:2399 redhat-rpm-config 9.1.0-88.el7 RHBA-2019:2260 redhat-support-lib-python 0.12.1-1.el7 RHBA-2020:1134 redhat-support-tool 0.12.2-1.el7 RHBA-2020:1134 rhn-client-tools 2.0.2-24.el7 RHBA-2018:3328 rhnlib 2.5.65-8.el7 RHBA-2018:3318 rhnsd 5.0.13-10.el7 RHBA-2018:0759 rhv-openvswitch 2.11-5.el7ev RHBA-2019:3008 rng-tools 6.3.1-5.el7 RHBA-2020:1202 rpcbind 0.2.0-49.el7 RHBA-2020:1168 rpm 4.11.3-43.el7 RHBA-2020:1114 rsync 3.1.2-10.el7 RHBA-2020:1046 rsyslog 8.24.0-52.el7 RHSA-2019:48952 safelease 1.0-7.el7ev RHBA-2016:1693 samba 4.10.4-11.el7_8 RHBA-2020:2095 sanlock 3.7.3-1.el7 RHEA-2019:2305 satyr 0.13-15.el7 RHBA-2018:3285 scap-security-guide 0.1.46-11.el7 RHBA-2020:1019 screen 4.1.0-0.25.20120314git3c2946.el7 RHBA-2018:0834 scrub 2.5.2-7.el7 RHBA-2017:2216 seabios 1.11.0-2.el7 RHBA-2018:0814 sed 4.2.2-6.el7 RHBA-2020:1041 selinux-policy 3.13.1-266.el7 RHBA-2020:1007 setools 3.3.8-4.el7 RHBA-2018:3091 setup 2.8.71-11.el7 RHBA-2020:1120 sg3_utils 1.37-19.el7 RHBA-2020:1093 shadow-utils 4.6-5.el7 RHBA-2019:2102 shared-mime-info 1.8-5.el7 RHSA-2020:1021 shim-signed 15-2.el7 RHBA-2019:2247 socat 1.7.3.2-2.el7 RHBA-2017:2049 sos 3.8-8.el7_8 RHBA-2020:2077 spice 0.14.0-9.el7 RHBA-2020:1170 sqlite 3.7.17-8.el7_7.1 RHSA-2020:0227 squashfs-tools 4.3-0.21.gitaae0aff4.el7 RHBA-2015:0582 sshpass 1.06-2.el7 RHBA-2018:0489 sssd 1.16.4-37.el7_8.3 RHBA-2020:2090 subscription-manager 1.24.26-3.el7_8 RHBA-2020:2101 sudo 1.8.23-9.el7 RHBA-2020:1048 supermin 5.1.19-1.el7 RHEA-2018:0792 syslinux 4.05-15.el7 RHEA-2018:3336 sysstat 10.1.5-19.el7 RHBA-2020:1200 systemd 219-73.el7_8.6 RHBA-2020:2091 tar 1.26-35.el7 RHBA-2018:3300 tcpdump 4.9.2-4.el7_7.1 RHSA-2019:3976 telnet 0.17-65.el7_8 RHSA-2020:1334 texinfo 5.1-5.el7 RHBA-2018:0823 tpm2-abrmd 1.1.0-11.el7 RHBA-2019:2265 tpm2-tools 3.0.4-3.el7 RHBA-2019:2182 tpm2-tss 1.4.0-3.el7 RHBA-2019:2264 trousers 0.3.14-2.el7 RHBA-2017:2252 tuned 2.11.0-8.el7 RHBA-2020:1008 tzdata 2020a-1.el7 RHBA-2020:1982 udisks2 2.8.4-1.el7 RHBA-2020:1099 unbound 1.6.6-3.el7 RHBA-2020:1092 unzip 6.0-21.el7 RHSA-2020:1181 usbredir 0.7.1-3.el7 RHBA-2018:0672 usermode 1.111-6.el7 RHBA-2019:2282 userspace-rcu 0.7.9-2.el7rhgs RHBA-2016:1755 util-linux 2.23.2-63.el7 RHBA-2020:1102 v2v-conversion-host 1.16.0-3.el7ev RHBA-2020:0499 vdo 6.1.3.4-4.el7 RHBA-2020:1105 vdsm 4.30.46-1.el7ev RHBA-2020:2397 vhostmd 0.5-13.el7 RHBA-2018:3296 vim 7.4.629-6.el7 RHBA-2019:2098 virt-manager 1.5.0-7.el7 RHBA-2019:2232 virt-what 1.18-4.el7 RHBA-2018:0896 virt-who 0.26.5-1.el7 RHBA-2020:0990 volume_key 0.3.9-9.el7 RHBA-2019:2243 wayland 1.15.0-1.el7 RHSA-2018:3140 wpa_supplicant 2.6-12.el7 RHSA-2018:3107 xdg-utils 1.1.0-0.17.20120809git.el7 RHBA-2016:2246 xfsprogs 4.5.0-20.el7 RHBA-2019:2216 xz 5.2.2-1.el7 RHEA-2016:2198 yum 3.4.3-167.el7 RHBA-2020:1122 yum-rhn-plugin 2.0.1-10.el7 RHBA-2018:0759 yum-utils 1.1.31-54.el7_8 RHBA-2020:2084 zip 3.0-11.el7 RHBA-2016:2294 zlib 1.2.7-18.el7 RHBA-2018:3299 9.2. 
Red Hat Enterprise Linux 7 Desktop - RH Common (RPMs) The following table outlines the packages included in the rhel-7-desktop-rh-common-rpms repository. Table 9.2. Red Hat Enterprise Linux 7 Desktop - RH Common (RPMs) Name Version Advisory python-ovirt-engine-sdk4 4.3.4-1.el7ev RHBA-2020:2398 9.3. Red Hat Enterprise Linux 7 for Scientific Computing - RH Common (RPMs) The following table outlines the packages included in the rhel-7-for-hpc-node-rh-common-rpms repository. Table 9.3. Red Hat Enterprise Linux 7 for Scientific Computing - RH Common (RPMs) Name Version Advisory python-ovirt-engine-sdk4 4.3.4-1.el7ev RHBA-2020:2398 9.4. Red Hat Enterprise Linux 7 Server - RH Common (RPMs) The following table outlines the packages included in the rhel-7-server-rh-common-rpms repository. Table 9.4. Red Hat Enterprise Linux 7 Server - RH Common (RPMs) Name Version Advisory python-ovirt-engine-sdk4 4.3.4-1.el7ev RHBA-2020:2398 9.5. Red Hat Virtualization Manager v4.3 (RHEL 7 Server) (RPMs) The following table outlines the packages included in the rhel-7-server-rhv-4.3-manager-rpms repository. Table 9.5. Red Hat Virtualization Manager v4.3 (RHEL 7 Server) (RPMs) Name Version Advisory java-ovirt-engine-sdk4 4.3.2-1.el7ev RHBA-2020:2398 ovirt-engine 4.3.10.3-2 RHBA-2020:2396 ovirt-engine-backend 4.3.10.3-2 RHBA-2020:2396 ovirt-engine-dbscripts 4.3.10.3-2 RHBA-2020:2396 ovirt-engine-extensions-api-impl 4.3.10.3-2 RHBA-2020:2396 ovirt-engine-extensions-api-impl-javadoc 4.3.10.3-2 RHBA-2020:2396 ovirt-engine-health-check-bundler 4.3.10.3-2 RHBA-2020:2396 ovirt-engine-restapi 4.3.10.3-2 RHBA-2020:2396 ovirt-engine-setup 4.3.10.3-2 RHBA-2020:2396 ovirt-engine-setup-base 4.3.10.3-2 RHBA-2020:2396 ovirt-engine-setup-plugin-cinderlib 4.3.10.3-2 RHBA-2020:2396 ovirt-engine-setup-plugin-ovirt-engine 4.3.10.3-2 RHBA-2020:2396 ovirt-engine-setup-plugin-ovirt-engine-common 4.3.10.3-2 RHBA-2020:2396 ovirt-engine-setup-plugin-vmconsole-proxy-helper 4.3.10.3-2 RHBA-2020:2396 ovirt-engine-setup-plugin-websocket-proxy 4.3.10.3-2 RHBA-2020:2396 ovirt-engine-tools 4.3.10.3-2 RHBA-2020:2396 ovirt-engine-tools-backup 4.3.10.3-2 RHBA-2020:2396 ovirt-engine-vmconsole-proxy-helper 4.3.10.3-2 RHBA-2020:2396 ovirt-engine-webadmin-portal 4.3.10.3-2 RHBA-2020:2396 ovirt-engine-websocket-proxy 4.3.10.3-2 RHBA-2020:2396 python-ovirt-engine-sdk4 4.3.4-1.el7ev RHBA-2020:2398 python2-ovirt-engine-lib 4.3.10.3-2 RHBA-2020:2396 rh-postgresql10-postgresql 10.12-2.el7 RHBA-2020:2396 rh-postgresql10-postgresql-contrib 10.12-2.el7 RHBA-2020:2396 rh-postgresql10-postgresql-contrib-syspaths 10.12-2.el7 RHBA-2020:2396 rh-postgresql10-postgresql-devel 10.12-2.el7 RHBA-2020:2396 rh-postgresql10-postgresql-docs 10.12-2.el7 RHBA-2020:2396 rh-postgresql10-postgresql-libs 10.12-2.el7 RHBA-2020:2396 rh-postgresql10-postgresql-plperl 10.12-2.el7 RHBA-2020:2396 rh-postgresql10-postgresql-plpython 10.12-2.el7 RHBA-2020:2396 rh-postgresql10-postgresql-pltcl 10.12-2.el7 RHBA-2020:2396 rh-postgresql10-postgresql-server 10.12-2.el7 RHBA-2020:2396 rh-postgresql10-postgresql-server-syspaths 10.12-2.el7 RHBA-2020:2396 rh-postgresql10-postgresql-static 10.12-2.el7 RHBA-2020:2396 rh-postgresql10-postgresql-syspaths 10.12-2.el7 RHBA-2020:2396 rh-postgresql10-postgresql-test 10.12-2.el7 RHBA-2020:2396 rhvm 4.3.10.3-2 RHBA-2020:2396 rubygem-ovirt-engine-sdk4 4.3.1-1.el7ev RHBA-2020:2398 rubygem-ovirt-engine-sdk4-doc 4.3.1-1.el7ev RHBA-2020:2398 9.6. 
Red Hat Virtualization 4 Management Agents (for RHEL 7 Server for IBM POWER9) RPMs The following table outlines the packages included in the rhel-7-server-rhv-4-mgmt-agent-for-power-9-rpms repository. Table 9.6. Red Hat Virtualization 4 Management Agents (for RHEL 7 Server for IBM POWER9) RPMs Name Version Advisory vdsm 4.30.46-1.el7ev RHBA-2020:2397 vdsm-api 4.30.46-1.el7ev RHBA-2020:2397 vdsm-client 4.30.46-1.el7ev RHBA-2020:2397 vdsm-common 4.30.46-1.el7ev RHBA-2020:2397 vdsm-gluster 4.30.46-1.el7ev RHBA-2020:2397 vdsm-hook-checkips 4.30.46-1.el7ev RHBA-2020:2397 vdsm-hook-cpuflags 4.30.46-1.el7ev RHBA-2020:2397 vdsm-hook-ethtool-options 4.30.46-1.el7ev RHBA-2020:2397 vdsm-hook-extra-ipv4-addrs 4.30.46-1.el7ev RHBA-2020:2397 vdsm-hook-fcoe 4.30.46-1.el7ev RHBA-2020:2397 vdsm-hook-localdisk 4.30.46-1.el7ev RHBA-2020:2397 vdsm-hook-macspoof 4.30.46-1.el7ev RHBA-2020:2397 vdsm-hook-nestedvt 4.30.46-1.el7ev RHBA-2020:2397 vdsm-hook-openstacknet 4.30.46-1.el7ev RHBA-2020:2397 vdsm-hook-vhostmd 4.30.46-1.el7ev RHBA-2020:2397 vdsm-hook-vmfex-dev 4.30.46-1.el7ev RHBA-2020:2397 vdsm-http 4.30.46-1.el7ev RHBA-2020:2397 vdsm-jsonrpc 4.30.46-1.el7ev RHBA-2020:2397 vdsm-network 4.30.46-1.el7ev RHBA-2020:2397 vdsm-python 4.30.46-1.el7ev RHBA-2020:2397 vdsm-yajsonrpc 4.30.46-1.el7ev RHBA-2020:2397 9.7. Red Hat Virtualization 4 Management Agents RHEL 7 for IBM Power (RPMs) The following table outlines the packages included in the rhel-7-server-rhv-4-mgmt-agent-for-power-le-rpms repository. Table 9.7. Red Hat Virtualization 4 Management Agents RHEL 7 for IBM Power (RPMs) Name Version Advisory vdsm 4.30.46-1.el7ev RHBA-2020:2397 vdsm-api 4.30.46-1.el7ev RHBA-2020:2397 vdsm-client 4.30.46-1.el7ev RHBA-2020:2397 vdsm-common 4.30.46-1.el7ev RHBA-2020:2397 vdsm-gluster 4.30.46-1.el7ev RHBA-2020:2397 vdsm-hook-checkips 4.30.46-1.el7ev RHBA-2020:2397 vdsm-hook-cpuflags 4.30.46-1.el7ev RHBA-2020:2397 vdsm-hook-ethtool-options 4.30.46-1.el7ev RHBA-2020:2397 vdsm-hook-extra-ipv4-addrs 4.30.46-1.el7ev RHBA-2020:2397 vdsm-hook-fcoe 4.30.46-1.el7ev RHBA-2020:2397 vdsm-hook-localdisk 4.30.46-1.el7ev RHBA-2020:2397 vdsm-hook-macspoof 4.30.46-1.el7ev RHBA-2020:2397 vdsm-hook-nestedvt 4.30.46-1.el7ev RHBA-2020:2397 vdsm-hook-openstacknet 4.30.46-1.el7ev RHBA-2020:2397 vdsm-hook-vhostmd 4.30.46-1.el7ev RHBA-2020:2397 vdsm-hook-vmfex-dev 4.30.46-1.el7ev RHBA-2020:2397 vdsm-http 4.30.46-1.el7ev RHBA-2020:2397 vdsm-jsonrpc 4.30.46-1.el7ev RHBA-2020:2397 vdsm-network 4.30.46-1.el7ev RHBA-2020:2397 vdsm-python 4.30.46-1.el7ev RHBA-2020:2397 vdsm-yajsonrpc 4.30.46-1.el7ev RHBA-2020:2397 9.8. Red Hat Virtualization 4 Management Agents for RHEL 7 (RPMs) The following table outlines the packages included in the rhel-7-server-rhv-4-mgmt-agent-rpms repository. Table 9.8. 
Red Hat Virtualization 4 Management Agents for RHEL 7 (RPMs) Name Version Advisory vdsm 4.30.46-1.el7ev RHBA-2020:2397 vdsm-api 4.30.46-1.el7ev RHBA-2020:2397 vdsm-client 4.30.46-1.el7ev RHBA-2020:2397 vdsm-common 4.30.46-1.el7ev RHBA-2020:2397 vdsm-gluster 4.30.46-1.el7ev RHBA-2020:2397 vdsm-hook-checkips 4.30.46-1.el7ev RHBA-2020:2397 vdsm-hook-cpuflags 4.30.46-1.el7ev RHBA-2020:2397 vdsm-hook-ethtool-options 4.30.46-1.el7ev RHBA-2020:2397 vdsm-hook-extra-ipv4-addrs 4.30.46-1.el7ev RHBA-2020:2397 vdsm-hook-fcoe 4.30.46-1.el7ev RHBA-2020:2397 vdsm-hook-localdisk 4.30.46-1.el7ev RHBA-2020:2397 vdsm-hook-macspoof 4.30.46-1.el7ev RHBA-2020:2397 vdsm-hook-nestedvt 4.30.46-1.el7ev RHBA-2020:2397 vdsm-hook-openstacknet 4.30.46-1.el7ev RHBA-2020:2397 vdsm-hook-vhostmd 4.30.46-1.el7ev RHBA-2020:2397 vdsm-hook-vmfex-dev 4.30.46-1.el7ev RHBA-2020:2397 vdsm-http 4.30.46-1.el7ev RHBA-2020:2397 vdsm-jsonrpc 4.30.46-1.el7ev RHBA-2020:2397 vdsm-network 4.30.46-1.el7ev RHBA-2020:2397 vdsm-python 4.30.46-1.el7ev RHBA-2020:2397 vdsm-yajsonrpc 4.30.46-1.el7ev RHBA-2020:2397 9.9. Red Hat Virtualization Host 7 Build (RPMs) The following table outlines the packages included in the rhel-7-server-rhvh-4-build-rpms repository. Table 9.9. Red Hat Virtualization Host 7 Build (RPMs) Name Version Advisory imgbased 1.1.15-0.1.el7ev RHBA-2020:2399 python-imgbased 1.1.15-0.1.el7ev RHBA-2020:2399 redhat-release-virtualization-host 4.3.10-1.el7ev RHBA-2020:2399 redhat-virtualization-host-image-update 4.3.10-20200513.0.el7_8 RHBA-2020:2399 redhat-virtualization-host-image-update-placeholder 4.3.10-1.el7ev RHBA-2020:2399 9.10. Red Hat Virtualization Host 7 (RPMs) The following table outlines the packages included in the rhel-7-server-rhvh-4-rpms repository. Table 9.10. Red Hat Virtualization Host 7 (RPMs) Name Version Advisory redhat-virtualization-host-image-update 4.3.10-20200513.0.el7_8 RHBA-2020:2399 9.11. Red Hat Enterprise Linux 7 Workstation - RH Common (RPMs) The following table outlines the packages included in the rhel-7-workstation-rh-common-rpms repository. Table 9.11. Red Hat Enterprise Linux 7 Workstation - RH Common (RPMs) Name Version Advisory python-ovirt-engine-sdk4 4.3.4-1.el7ev RHBA-2020:2398 9.12. Red Hat Virtualization 4 Tools for RHEL 8 Power, little endian (RPMs) The following table outlines the packages included in the rhv-4-tools-for-rhel-8-ppc64le-rpms repository. Table 9.12. Red Hat Virtualization 4 Tools for RHEL 8 Power, little endian (RPMs) Name Version Advisory python3-ovirt-engine-sdk4 4.3.4-1.el8ev RHBA-2020:2398 9.13. Red Hat Virtualization 4 Tools for RHEL 8 x86_64 (RPMs) The following table outlines the packages included in the rhv-4-tools-for-rhel-8-x86_64-rpms repository. Table 9.13. Red Hat Virtualization 4 Tools for RHEL 8 x86_64 (RPMs) Name Version Advisory python3-ovirt-engine-sdk4 4.3.4-1.el8ev RHBA-2020:2398 rubygem-ovirt-engine-sdk4 4.3.1-1.el8ev RHBA-2020:2398 rubygem-ovirt-engine-sdk4-doc 4.3.1-1.el8ev RHBA-2020:2398
null
https://docs.redhat.com/en/documentation/red_hat_virtualization/4.3/html/package_manifest/ovirt-4.3.10-1
Chapter 6. Using CPU Manager and Topology Manager
Chapter 6. Using CPU Manager and Topology Manager CPU Manager manages groups of CPUs and constrains workloads to specific CPUs. CPU Manager is useful for workloads that have some of these attributes: Require as much CPU time as possible. Are sensitive to processor cache misses. Are low-latency network applications. Coordinate with other processes and benefit from sharing a single processor cache. Topology Manager collects hints from the CPU Manager, Device Manager, and other Hint Providers to align pod resources, such as CPU, SR-IOV VFs, and other device resources, for all Quality of Service (QoS) classes on the same non-uniform memory access (NUMA) node. Topology Manager uses topology information from the collected hints to decide if a pod can be accepted or rejected on a node, based on the configured Topology Manager policy and pod resources requested. Topology Manager is useful for workloads that use hardware accelerators to support latency-critical execution and high throughput parallel computation. To use Topology Manager you must configure CPU Manager with the static policy. 6.1. Setting up CPU Manager Procedure Optional: Label a node: # oc label node perf-node.example.com cpumanager=true Edit the MachineConfigPool of the nodes where CPU Manager should be enabled. In this example, all workers have CPU Manager enabled: # oc edit machineconfigpool worker Add a label to the worker machine config pool: metadata: creationTimestamp: 2020-xx-xxx generation: 3 labels: custom-kubelet: cpumanager-enabled Create a KubeletConfig , cpumanager-kubeletconfig.yaml , custom resource (CR). Refer to the label created in the step to have the correct nodes updated with the new kubelet config. See the machineConfigPoolSelector section: apiVersion: machineconfiguration.openshift.io/v1 kind: KubeletConfig metadata: name: cpumanager-enabled spec: machineConfigPoolSelector: matchLabels: custom-kubelet: cpumanager-enabled kubeletConfig: cpuManagerPolicy: static 1 cpuManagerReconcilePeriod: 5s 2 1 Specify a policy: none . This policy explicitly enables the existing default CPU affinity scheme, providing no affinity beyond what the scheduler does automatically. This is the default policy. static . This policy allows containers in guaranteed pods with integer CPU requests. It also limits access to exclusive CPUs on the node. If static , you must use a lowercase s . 2 Optional. Specify the CPU Manager reconcile frequency. The default is 5s . Create the dynamic kubelet config: # oc create -f cpumanager-kubeletconfig.yaml This adds the CPU Manager feature to the kubelet config and, if needed, the Machine Config Operator (MCO) reboots the node. To enable CPU Manager, a reboot is not needed. Check for the merged kubelet config: # oc get machineconfig 99-worker-XXXXXX-XXXXX-XXXX-XXXXX-kubelet -o json | grep ownerReference -A7 Example output "ownerReferences": [ { "apiVersion": "machineconfiguration.openshift.io/v1", "kind": "KubeletConfig", "name": "cpumanager-enabled", "uid": "7ed5616d-6b72-11e9-aae1-021e1ce18878" } ] Check the worker for the updated kubelet.conf : # oc debug node/perf-node.example.com sh-4.2# cat /host/etc/kubernetes/kubelet.conf | grep cpuManager Example output cpuManagerPolicy: static 1 cpuManagerReconcilePeriod: 5s 2 1 cpuManagerPolicy is defined when you create the KubeletConfig CR. 2 cpuManagerReconcilePeriod is defined when you create the KubeletConfig CR. Create a pod that requests a core or multiple cores. Both limits and requests must have their CPU value set to a whole integer. 
That is the number of cores that will be dedicated to this pod: # cat cpumanager-pod.yaml Example output apiVersion: v1 kind: Pod metadata: generateName: cpumanager- spec: containers: - name: cpumanager image: gcr.io/google_containers/pause-amd64:3.0 resources: requests: cpu: 1 memory: "1G" limits: cpu: 1 memory: "1G" nodeSelector: cpumanager: "true" Create the pod: # oc create -f cpumanager-pod.yaml Verify that the pod is scheduled to the node that you labeled: # oc describe pod cpumanager Example output Name: cpumanager-6cqz7 Namespace: default Priority: 0 PriorityClassName: <none> Node: perf-node.example.com/xxx.xx.xx.xxx ... Limits: cpu: 1 memory: 1G Requests: cpu: 1 memory: 1G ... QoS Class: Guaranteed Node-Selectors: cpumanager=true Verify that the cgroups are set up correctly. Get the process ID (PID) of the pause process: # β”œβ”€init.scope β”‚ └─1 /usr/lib/systemd/systemd --switched-root --system --deserialize 17 └─kubepods.slice β”œβ”€kubepods-pod69c01f8e_6b74_11e9_ac0f_0a2b62178a22.slice β”‚ β”œβ”€crio-b5437308f1a574c542bdf08563b865c0345c8f8c0b0a655612c.scope β”‚ └─32706 /pause Pods of quality of service (QoS) tier Guaranteed are placed within the kubepods.slice . Pods of other QoS tiers end up in child cgroups of kubepods : # cd /sys/fs/cgroup/cpuset/kubepods.slice/kubepods-pod69c01f8e_6b74_11e9_ac0f_0a2b62178a22.slice/crio-b5437308f1ad1a7db0574c542bdf08563b865c0345c86e9585f8c0b0a655612c.scope # for i in `ls cpuset.cpus tasks` ; do echo -n "USDi "; cat USDi ; done Example output cpuset.cpus 1 tasks 32706 Check the allowed CPU list for the task: # grep ^Cpus_allowed_list /proc/32706/status Example output Cpus_allowed_list: 1 Verify that another pod (in this case, the pod in the burstable QoS tier) on the system cannot run on the core allocated for the Guaranteed pod: # cat /sys/fs/cgroup/cpuset/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc494a073_6b77_11e9_98c0_06bba5c387ea.slice/crio-c56982f57b75a2420947f0afc6cafe7534c5734efc34157525fa9abbf99e3849.scope/cpuset.cpus 0 # oc describe node perf-node.example.com Example output ... Capacity: attachable-volumes-aws-ebs: 39 cpu: 2 ephemeral-storage: 124768236Ki hugepages-1Gi: 0 hugepages-2Mi: 0 memory: 8162900Ki pods: 250 Allocatable: attachable-volumes-aws-ebs: 39 cpu: 1500m ephemeral-storage: 124768236Ki hugepages-1Gi: 0 hugepages-2Mi: 0 memory: 7548500Ki pods: 250 ------- ---- ------------ ---------- --------------- ------------- --- default cpumanager-6cqz7 1 (66%) 1 (66%) 1G (12%) 1G (12%) 29m Allocated resources: (Total limits may be over 100 percent, i.e., overcommitted.) Resource Requests Limits -------- -------- ------ cpu 1440m (96%) 1 (66%) This VM has two CPU cores. The system-reserved setting reserves 500 millicores, meaning that half of one core is subtracted from the total capacity of the node to arrive at the Node Allocatable amount. You can see that Allocatable CPU is 1500 millicores. This means you can run one of the CPU Manager pods since each will take one whole core. A whole core is equivalent to 1000 millicores. If you try to schedule a second pod, the system will accept the pod, but it will never be scheduled: NAME READY STATUS RESTARTS AGE cpumanager-6cqz7 1/1 Running 0 33m cpumanager-7qc2t 0/1 Pending 0 11s 6.2. Topology Manager policies Topology Manager aligns Pod resources of all Quality of Service (QoS) classes by collecting topology hints from Hint Providers, such as CPU Manager and Device Manager, and using the collected hints to align the Pod resources. 
Topology Manager supports four allocation policies, which you assign in the cpumanager-enabled custom resource (CR): none policy This is the default policy and does not perform any topology alignment. best-effort policy For each container in a pod with the best-effort topology management policy, kubelet calls each Hint Provider to discover their resource availability. Using this information, the Topology Manager stores the preferred NUMA Node affinity for that container. If the affinity is not preferred, Topology Manager stores this and admits the pod to the node. restricted policy For each container in a pod with the restricted topology management policy, kubelet calls each Hint Provider to discover their resource availability. Using this information, the Topology Manager stores the preferred NUMA Node affinity for that container. If the affinity is not preferred, Topology Manager rejects this pod from the node, resulting in a pod in a Terminated state with a pod admission failure. single-numa-node policy For each container in a pod with the single-numa-node topology management policy, kubelet calls each Hint Provider to discover their resource availability. Using this information, the Topology Manager determines if a single NUMA Node affinity is possible. If it is, the pod is admitted to the node. If a single NUMA Node affinity is not possible, the Topology Manager rejects the pod from the node. This results in a pod in a Terminated state with a pod admission failure. 6.3. Setting up Topology Manager To use Topology Manager, you must configure an allocation policy in the cpumanager-enabled custom resource (CR). This file might exist if you have set up CPU Manager. If the file does not exist, you can create the file. Prerequisites Configure the CPU Manager policy to be static . Procedure To activate Topology Manager: Configure the Topology Manager allocation policy in the cpumanager-enabled custom resource (CR). USD oc edit KubeletConfig cpumanager-enabled apiVersion: machineconfiguration.openshift.io/v1 kind: KubeletConfig metadata: name: cpumanager-enabled spec: machineConfigPoolSelector: matchLabels: custom-kubelet: cpumanager-enabled kubeletConfig: cpuManagerPolicy: static 1 cpuManagerReconcilePeriod: 5s topologyManagerPolicy: single-numa-node 2 1 This parameter must be static with a lowercase s . 2 Specify your selected Topology Manager allocation policy. Here, the policy is single-numa-node . Acceptable values are: default , best-effort , restricted , single-numa-node . 6.4. Pod interactions with Topology Manager policies The example Pod specs below help illustrate pod interactions with Topology Manager. The following pod runs in the BestEffort QoS class because no resource requests or limits are specified. spec: containers: - name: nginx image: nginx The next pod runs in the Burstable QoS class because requests are less than limits. spec: containers: - name: nginx image: nginx resources: limits: memory: "200Mi" requests: memory: "100Mi" If the selected policy is anything other than none , Topology Manager would not consider either of these Pod specifications. The last example pod below runs in the Guaranteed QoS class because requests are equal to limits. spec: containers: - name: nginx image: nginx resources: limits: memory: "200Mi" cpu: "2" example.com/device: "1" requests: memory: "200Mi" cpu: "2" example.com/device: "1" Topology Manager would consider this pod. 
Topology Manager consults the hint providers, in this case CPU Manager and Device Manager, to get topology hints for the pod and uses this information to store the best topology for the container. CPU Manager and Device Manager then use this stored information at the resource allocation stage.
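After the kubelet restarts with the merged configuration, you can spot-check that both policies are active. The following is a minimal sketch, not part of the original procedure, that reuses the oc debug approach shown earlier for verifying the CPU Manager policy and assumes the same worker node, perf-node.example.com :
# oc debug node/perf-node.example.com
sh-4.2# cat /host/etc/kubernetes/kubelet.conf | grep -E 'cpuManagerPolicy|topologyManagerPolicy'
If the KubeletConfig was applied, the output is expected to include cpuManagerPolicy: static and topologyManagerPolicy: single-numa-node , matching the values set in the cpumanager-enabled custom resource.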
[ "oc label node perf-node.example.com cpumanager=true", "oc edit machineconfigpool worker", "metadata: creationTimestamp: 2020-xx-xxx generation: 3 labels: custom-kubelet: cpumanager-enabled", "apiVersion: machineconfiguration.openshift.io/v1 kind: KubeletConfig metadata: name: cpumanager-enabled spec: machineConfigPoolSelector: matchLabels: custom-kubelet: cpumanager-enabled kubeletConfig: cpuManagerPolicy: static 1 cpuManagerReconcilePeriod: 5s 2", "oc create -f cpumanager-kubeletconfig.yaml", "oc get machineconfig 99-worker-XXXXXX-XXXXX-XXXX-XXXXX-kubelet -o json | grep ownerReference -A7", "\"ownerReferences\": [ { \"apiVersion\": \"machineconfiguration.openshift.io/v1\", \"kind\": \"KubeletConfig\", \"name\": \"cpumanager-enabled\", \"uid\": \"7ed5616d-6b72-11e9-aae1-021e1ce18878\" } ]", "oc debug node/perf-node.example.com sh-4.2# cat /host/etc/kubernetes/kubelet.conf | grep cpuManager", "cpuManagerPolicy: static 1 cpuManagerReconcilePeriod: 5s 2", "cat cpumanager-pod.yaml", "apiVersion: v1 kind: Pod metadata: generateName: cpumanager- spec: containers: - name: cpumanager image: gcr.io/google_containers/pause-amd64:3.0 resources: requests: cpu: 1 memory: \"1G\" limits: cpu: 1 memory: \"1G\" nodeSelector: cpumanager: \"true\"", "oc create -f cpumanager-pod.yaml", "oc describe pod cpumanager", "Name: cpumanager-6cqz7 Namespace: default Priority: 0 PriorityClassName: <none> Node: perf-node.example.com/xxx.xx.xx.xxx Limits: cpu: 1 memory: 1G Requests: cpu: 1 memory: 1G QoS Class: Guaranteed Node-Selectors: cpumanager=true", "β”œβ”€init.scope β”‚ └─1 /usr/lib/systemd/systemd --switched-root --system --deserialize 17 └─kubepods.slice β”œβ”€kubepods-pod69c01f8e_6b74_11e9_ac0f_0a2b62178a22.slice β”‚ β”œβ”€crio-b5437308f1a574c542bdf08563b865c0345c8f8c0b0a655612c.scope β”‚ └─32706 /pause", "cd /sys/fs/cgroup/cpuset/kubepods.slice/kubepods-pod69c01f8e_6b74_11e9_ac0f_0a2b62178a22.slice/crio-b5437308f1ad1a7db0574c542bdf08563b865c0345c86e9585f8c0b0a655612c.scope for i in `ls cpuset.cpus tasks` ; do echo -n \"USDi \"; cat USDi ; done", "cpuset.cpus 1 tasks 32706", "grep ^Cpus_allowed_list /proc/32706/status", "Cpus_allowed_list: 1", "cat /sys/fs/cgroup/cpuset/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc494a073_6b77_11e9_98c0_06bba5c387ea.slice/crio-c56982f57b75a2420947f0afc6cafe7534c5734efc34157525fa9abbf99e3849.scope/cpuset.cpus 0 oc describe node perf-node.example.com", "Capacity: attachable-volumes-aws-ebs: 39 cpu: 2 ephemeral-storage: 124768236Ki hugepages-1Gi: 0 hugepages-2Mi: 0 memory: 8162900Ki pods: 250 Allocatable: attachable-volumes-aws-ebs: 39 cpu: 1500m ephemeral-storage: 124768236Ki hugepages-1Gi: 0 hugepages-2Mi: 0 memory: 7548500Ki pods: 250 ------- ---- ------------ ---------- --------------- ------------- --- default cpumanager-6cqz7 1 (66%) 1 (66%) 1G (12%) 1G (12%) 29m Allocated resources: (Total limits may be over 100 percent, i.e., overcommitted.) 
Resource Requests Limits -------- -------- ------ cpu 1440m (96%) 1 (66%)", "NAME READY STATUS RESTARTS AGE cpumanager-6cqz7 1/1 Running 0 33m cpumanager-7qc2t 0/1 Pending 0 11s", "oc edit KubeletConfig cpumanager-enabled", "apiVersion: machineconfiguration.openshift.io/v1 kind: KubeletConfig metadata: name: cpumanager-enabled spec: machineConfigPoolSelector: matchLabels: custom-kubelet: cpumanager-enabled kubeletConfig: cpuManagerPolicy: static 1 cpuManagerReconcilePeriod: 5s topologyManagerPolicy: single-numa-node 2", "spec: containers: - name: nginx image: nginx", "spec: containers: - name: nginx image: nginx resources: limits: memory: \"200Mi\" requests: memory: \"100Mi\"", "spec: containers: - name: nginx image: nginx resources: limits: memory: \"200Mi\" cpu: \"2\" example.com/device: \"1\" requests: memory: \"200Mi\" cpu: \"2\" example.com/device: \"1\"" ]
https://docs.redhat.com/en/documentation/openshift_container_platform/4.9/html/scalability_and_performance/using-cpu-manager
17.2. Bridged Mode
17.2. Bridged Mode When using Bridged mode , all of the guest virtual machines appear within the same subnet as the host physical machine. All other physical machines on the same physical network are aware of the virtual machines, and can access them. Bridging operates on Layer 2 of the OSI networking model. Figure 17.2. Virtual network switch in bridged mode It is possible to use multiple physical interfaces on the hypervisor by joining them together with a bond . The bond is then added to a bridge, and guest virtual machines are then added onto the bridge as well. However, the bonding driver has several modes of operation, and only a few of these modes work with a bridge where virtual guest machines are in use. Warning When using bridged mode, the only bonding modes that should be used with a guest virtual machine are Mode 1, Mode 2, and Mode 4. Using modes 0, 3, 5, or 6 is likely to cause the connection to fail. Also note that Media-Independent Interface (MII) monitoring should be used to monitor bonding modes, as Address Resolution Protocol (ARP) monitoring does not work. For more information on bonding modes, see the related Knowledgebase article or the Red Hat Enterprise Linux 7 Networking Guide . For a detailed explanation of bridge_opts parameters, used to configure bridged networking mode, see the Red Hat Virtualization Administration Guide .
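A quick way to confirm that an existing bond uses one of the supported modes and that MII monitoring is enabled is to inspect the bonding driver state on the host. This is a minimal sketch, assuming the bond device is named bond0 :
# grep -E 'Bonding Mode|MII Status|MII Polling Interval' /proc/net/bonding/bond0
For a bond in mode 1 with MII monitoring enabled, the output is expected to report the mode as fault-tolerance (active-backup) and a non-zero MII polling interval.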
null
https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/virtualization_deployment_and_administration_guide/sect-bridge-mode
Chapter 6. Enabling support for a namespace-scoped Argo Rollouts installation
Chapter 6. Enabling support for a namespace-scoped Argo Rollouts installation Red Hat OpenShift GitOps enables support for two modes of Argo Rollouts installations: Cluster-scoped installation (default): The Argo Rollouts custom resources (CRs) defined in any namespace are reconciled by the Argo Rollouts instance. As a result, you can use Argo Rollouts CR across any namespace on the cluster. Namespace-scoped installation : The Argo Rollouts instance is installed in a specific namespace and only handles an Argo Rollouts CR within the same namespace. This installation mode includes the following benefits: This mode does not require cluster-wide ClusterRole or ClusterRoleBinding permissions. You can install and use Argo Rollouts within a single namespace without requiring cluster permissions. This mode provides security benefits by limiting the cluster scope of a single Argo Rollouts instance to a specific namespace. Note To prevent unintended privilege escalation, Red Hat OpenShift GitOps allows only one mode of Argo Rollout installation at a time. To switch between cluster-scoped and namespace-scoped Argo Rollouts installations, complete the following steps. 6.1. Configuring a namespace-scoped Argo Rollouts installation To configure a namespace-scoped instance of Argo Rollouts installation, complete the following steps. Prerequisites You are logged in to the Red Hat OpenShift GitOps cluster as an administrator. You have installed Red Hat OpenShift GitOps on your Red Hat OpenShift GitOps cluster. Procedure In the Administrator perspective of the web console, go to Administration CustomResourceDefinitions . Search for Subscription and click the Subscription CRD. Click the Instances tab and then click the openshift-gitops-operator subscription. Click the YAML tab and edit the YAML file. Specify the NAMESPACE_SCOPED_ARGO_ROLLOUTS environment variable, with the value set to true in the .spec.config.env property. Example of configuring the namespace-scoped Argo Rollouts installation apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: openshift-gitops-operator spec: # (...) config: env: - name: NAMESPACE_SCOPED_ARGO_ROLLOUTS value: 'true' 1 1 The value set to 'true' enables namespace-scoped installation. If the value is set to 'false' or not specified the installation defaults to cluster-scoped mode. Click Save . The Red Hat OpenShift GitOps Operator facilitates the reconciliation of the Argo Rollouts custom resource within a namespace-scoped installation. Verify that the Red Hat OpenShift GitOps Operator has enabled the namespace-scoped Argo Rollouts installation by viewing the logs of the GitOps container: In the Administrator perspective of the web console, go to Workloads Pods . Click the openshift-gitops-operator-controller-manager pod, and then click the Logs tab. Look for the following log statement: Running in namespaced-scoped mode . This statement indicates that the Red Hat OpenShift GitOps Operator has enabled the namespace-scoped Argo Rollouts installation. Create a RolloutManager resource to complete the namespace-scoped Argo Rollouts installation: Go to Operators Installed Operators Red Hat OpenShift GitOps , and click the RolloutManager tab. Click Create RolloutManager . 
Select YAML view and enter the following snippet: Example RolloutManager CR for a namespace-scoped Argo Rollouts installation apiVersion: argoproj.io/v1alpha1 kind: RolloutManager metadata: name: rollout-manager namespace: my-application 1 spec: namespaceScoped: true 1 Specify the name of the project where you want to install the namespace-scoped Argo Rollouts instance. Click Create . After the RolloutManager CR is created, Red Hat OpenShift GitOps begins to install the namespace-scoped Argo Rollouts instance into the selected namespace. Verify that the namespace-scoped installation is successful. In the RolloutManager tab, under the RolloutManagers section, ensure that the Status field of the RolloutManager instance is Phase: Available . Examine the following output in the YAML tab under the RolloutManagers section to ensure that the installation is successful: Example of namespace-scoped Argo Rollouts installation YAML file spec: namespaceScoped: true status: conditions: lastTransitionTime: '2024-07-10T14:20:5z` message: '' reason: Success status: 'True' 1 type: 'Reconciled' phase: Available rolloutController: Available 1 This status indicates that the namespace-scoped Argo Rollouts installation is enabled successfully. If you try to install a namespace-specific Argo Rollouts instance while a cluster-scoped installation already exists on the cluster, an error message is displayed: Example of an incorrect installation with an error message spec: namespaceScoped: true status: conditions: lastTransitionTime: '2024-07-10T14:10:7z` message: 'when Subscription has environment variable NAMESPACE_SCOPED_ARGO_ROLLOUTS set to False, there may not exist any namespace-scoped RolloutManagers: only a single cluster-scoped RolloutManager is supported' reason: InvalidRolloutManagerScope status: 'False' 1 type: 'Reconciled' phase: Failure rolloutController: Failure 1 This status indicates that the namespace-scoped Argo Rollouts installation is not enabled successfully. The installation defaults to cluster-scoped mode.
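If you prefer the command line for the verification step, the same status information can be read directly from the RolloutManager resource. This is a minimal sketch, assuming the rollout-manager name and my-application namespace used in the examples above:
USD oc get rolloutmanager rollout-manager -n my-application -o jsonpath='{.status.phase}{"\n"}'
A successful namespace-scoped installation is expected to print Available , matching the Phase: Available status shown in the web console.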
[ "apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: openshift-gitops-operator spec: # (...) config: env: - name: NAMESPACE_SCOPED_ARGO_ROLLOUTS value: 'true' 1", "apiVersion: argoproj.io/v1alpha1 kind: RolloutManager metadata: name: rollout-manager namespace: my-application 1 spec: namespaceScoped: true", "spec: namespaceScoped: true status: conditions: lastTransitionTime: '2024-07-10T14:20:5z` message: '' reason: Success status: 'True' 1 type: 'Reconciled' phase: Available rolloutController: Available", "spec: namespaceScoped: true status: conditions: lastTransitionTime: '2024-07-10T14:10:7z` message: 'when Subscription has environment variable NAMESPACE_SCOPED_ARGO_ROLLOUTS set to False, there may not exist any namespace-scoped RolloutManagers: only a single cluster-scoped RolloutManager is supported' reason: InvalidRolloutManagerScope status: 'False' 1 type: 'Reconciled' phase: Failure rolloutController: Failure" ]
https://docs.redhat.com/en/documentation/red_hat_openshift_gitops/1.15/html/argo_rollouts/enable-support-for-namespace-scoped-argo-rollouts-installation
Chapter 9. Networking
Chapter 9. Networking 9.1. Networking overview OpenShift Virtualization provides advanced networking functionality by using custom resources and plugins. Virtual machines (VMs) are integrated with Red Hat OpenShift Service on AWS networking and its ecosystem. Note You cannot run OpenShift Virtualization on a single-stack IPv6 cluster. The following figure illustrates the typical network setup of OpenShift Virtualization. Other configurations are also possible. Figure 9.1. OpenShift Virtualization networking overview Pods and VMs run on the same network infrastructure which allows you to easily connect your containerized and virtualized workloads. You can connect VMs to the default pod network and to any number of secondary networks. The default pod network provides connectivity between all its members, service abstraction, IP management, micro segmentation, and other functionality. Multus is a "meta" CNI plugin that enables a pod or virtual machine to connect to additional network interfaces by using other compatible CNI plugins. The default pod network is overlay-based, tunneled through the underlying machine network. The machine network can be defined over a selected set of network interface controllers (NICs). Secondary VM networks are typically bridged directly to a physical network, with or without VLAN encapsulation. It is also possible to create virtual overlay networks for secondary networks. Important Connecting VMs directly to the underlay network is not supported on Red Hat OpenShift Service on AWS. Note Connecting VMs to user-defined networks with the layer2 topology is recommended on public clouds. Secondary VM networks can be defined on dedicated set of NICs, as shown in Figure 1, or they can use the machine network. 9.1.1. OpenShift Virtualization networking glossary The following terms are used throughout OpenShift Virtualization documentation: Container Network Interface (CNI) A Cloud Native Computing Foundation project, focused on container network connectivity. OpenShift Virtualization uses CNI plugins to build upon the basic Kubernetes networking functionality. Multus A "meta" CNI plugin that allows multiple CNIs to exist so that a pod or virtual machine can use the interfaces it needs. Custom resource definition (CRD) A Kubernetes API resource that allows you to define custom resources, or an object defined by using the CRD API resource. Network attachment definition (NAD) A CRD introduced by the Multus project that allows you to attach pods, virtual machines, and virtual machine instances to one or more networks. UserDefinedNetwork (UDN) A namespace-scoped CRD introduced by the user-defined network API that can be used to create a tenant network that isolates the tenant namespace from other namespaces. ClusterUserDefinedNetwork (CUDN) A cluster-scoped CRD introduced by the user-defined network API that cluster administrators can use to create a shared network across multiple namespaces. 9.1.2. Using the default pod network Connecting a virtual machine to the default pod network Each VM is connected by default to the default internal pod network. You can add or remove network interfaces by editing the VM specification. Exposing a virtual machine as a service You can expose a VM within the cluster or outside the cluster by creating a Service object. 9.1.3. Configuring a primary user-defined network Connecting a virtual machine to a primary user-defined network You can connect a virtual machine (VM) to a user-defined network (UDN) on the VM's primary interface. 
The primary user-defined network replaces the default pod network to connect pods and VMs in selected namespaces. Cluster administrators can configure a primary UserDefinedNetwork CRD to create a tenant network that isolates the tenant namespace from other namespaces without requiring network policies. Additionally, cluster administrators can use the ClusterUserDefinedNetwork CRD to create a shared OVN layer2 network across multiple namespaces. User-defined networks with the layer2 overlay topology are useful for VM workloads, and a good alternative to secondary networks in environments where physical network access is limited, such as the public cloud. The layer2 topology enables seamless migration of VMs without the need for Network Address Translation (NAT), and also provides persistent IP addresses that are preserved between reboots and during live migration. 9.1.4. Configuring VM secondary network interfaces You can connect a virtual machine to a secondary network by using an OVN-Kubernetes Container Network Interface (CNI) plugin. It is not required to specify the primary pod network in the VM specification when connecting to a secondary network interface. Connecting a virtual machine to an OVN-Kubernetes secondary network You can connect a VM to an Open Virtual Network (OVN)-Kubernetes secondary network. OpenShift Virtualization supports the layer2 topology for OVN-Kubernetes. A layer2 topology connects workloads by a cluster-wide logical switch. The OVN-Kubernetes CNI plugin uses the Geneve (Generic Network Virtualization Encapsulation) protocol to create an overlay network between nodes. You can use this overlay network to connect VMs on different nodes, without having to configure any additional physical networking infrastructure. To configure an OVN-Kubernetes secondary network and attach a VM to that network, perform the following steps: Configure an OVN-Kubernetes secondary network by creating a network attachment definition (NAD). Connect the VM to the OVN-Kubernetes secondary network by adding the network details to the VM specification. Hot plugging secondary network interfaces You can add or remove secondary network interfaces without stopping your VM. OpenShift Virtualization supports hot plugging and hot unplugging for secondary interfaces that use bridge binding and the OVN-Kubernetes layer2 topology. Configuring and viewing IP addresses You can configure an IP address of a secondary network interface when you create a VM. The IP address is provisioned with cloud-init. You can view the IP address of a VM by using the Red Hat OpenShift Service on AWS web console or the command line. The network information is collected by the QEMU guest agent. 9.1.5. Integrating with OpenShift Service Mesh Connecting a virtual machine to a service mesh OpenShift Virtualization is integrated with OpenShift Service Mesh. You can monitor, visualize, and control traffic between pods and virtual machines. 9.1.6. Managing MAC address pools Managing MAC address pools for network interfaces The KubeMacPool component allocates MAC addresses for VM network interfaces from a shared MAC address pool. This ensures that each network interface is assigned a unique MAC address. A virtual machine instance created from that VM retains the assigned MAC address across reboots. 9.1.7. 
Configuring SSH access Configuring SSH access to virtual machines You can configure SSH access to VMs by using the following methods: virtctl ssh command You create an SSH key pair, add the public key to a VM, and connect to the VM by running the virtctl ssh command with the private key. You can add public SSH keys to Red Hat Enterprise Linux (RHEL) 9 VMs at runtime or at first boot to VMs with guest operating systems that can be configured by using a cloud-init data source. virtctl port-forward command You add the virtctl port-forward command to your .ssh/config file and connect to the VM by using OpenSSH. Service You create a service, associate the service with the VM, and connect to the IP address and port exposed by the service. Secondary network You configure a secondary network, attach a VM to the secondary network interface, and connect to its allocated IP address. 9.2. Connecting a virtual machine to the default pod network You can connect a virtual machine to the default internal pod network by configuring its network interface to use the masquerade binding mode. Note Traffic passing through network interfaces to the default pod network is interrupted during live migration. 9.2.1. Configuring masquerade mode from the command line You can use masquerade mode to hide a virtual machine's outgoing traffic behind the pod IP address. Masquerade mode uses Network Address Translation (NAT) to connect virtual machines to the pod network backend through a Linux bridge. Enable masquerade mode and allow traffic to enter the virtual machine by editing your virtual machine configuration file. Prerequisites The virtual machine must be configured to use DHCP to acquire IPv4 addresses. Procedure Edit the interfaces spec of your virtual machine configuration file: apiVersion: kubevirt.io/v1 kind: VirtualMachine metadata: name: example-vm spec: template: spec: domain: devices: interfaces: - name: default masquerade: {} 1 ports: 2 - port: 80 # ... networks: - name: default pod: {} 1 Connect using masquerade mode. 2 Optional: List the ports that you want to expose from the virtual machine, each specified by the port field. The port value must be a number between 0 and 65536. When the ports array is not used, all ports in the valid range are open to incoming traffic. In this example, incoming traffic is allowed on port 80 . Note Ports 49152 and 49153 are reserved for use by the libvirt platform and all other incoming traffic to these ports is dropped. Create the virtual machine: USD oc create -f <vm-name>.yaml 9.2.2. Configuring masquerade mode with dual-stack (IPv4 and IPv6) You can configure a new virtual machine (VM) to use both IPv6 and IPv4 on the default pod network by using cloud-init. The Network.pod.vmIPv6NetworkCIDR field in the virtual machine instance configuration determines the static IPv6 address of the VM and the gateway IP address. These are used by the virt-launcher pod to route IPv6 traffic to the virtual machine and are not used externally. The Network.pod.vmIPv6NetworkCIDR field specifies an IPv6 address block in Classless Inter-Domain Routing (CIDR) notation. The default value is fd10:0:2::2/120 . You can edit this value based on your network requirements. When the virtual machine is running, incoming and outgoing traffic for the virtual machine is routed to both the IPv4 address and the unique IPv6 address of the virt-launcher pod. 
The virt-launcher pod then routes the IPv4 traffic to the DHCP address of the virtual machine, and the IPv6 traffic to the statically set IPv6 address of the virtual machine. Prerequisites The Red Hat OpenShift Service on AWS cluster must use the OVN-Kubernetes Container Network Interface (CNI) network plugin configured for dual-stack. Procedure In a new virtual machine configuration, include an interface with masquerade and configure the IPv6 address and default gateway by using cloud-init. apiVersion: kubevirt.io/v1 kind: VirtualMachine metadata: name: example-vm-ipv6 spec: template: spec: domain: devices: interfaces: - name: default masquerade: {} 1 ports: - port: 80 2 # ... networks: - name: default pod: {} volumes: - cloudInitNoCloud: networkData: | version: 2 ethernets: eth0: dhcp4: true addresses: [ fd10:0:2::2/120 ] 3 gateway6: fd10:0:2::1 4 1 Connect using masquerade mode. 2 Allows incoming traffic on port 80 to the virtual machine. 3 The static IPv6 address as determined by the Network.pod.vmIPv6NetworkCIDR field in the virtual machine instance configuration. The default value is fd10:0:2::2/120 . 4 The gateway IP address as determined by the Network.pod.vmIPv6NetworkCIDR field in the virtual machine instance configuration. The default value is fd10:0:2::1 . Create the virtual machine in the namespace: USD oc create -f example-vm-ipv6.yaml Verification To verify that IPv6 has been configured, start the virtual machine and view the interface status of the virtual machine instance to ensure it has an IPv6 address: USD oc get vmi <vmi-name> -o jsonpath="{.status.interfaces[*].ipAddresses}" 9.2.3. About jumbo frames support When using the OVN-Kubernetes CNI plugin, you can send unfragmented jumbo frame packets between two virtual machines (VMs) that are connected on the default pod network. Jumbo frames have a maximum transmission unit (MTU) value greater than 1500 bytes. The VM automatically gets the MTU value of the cluster network, set by the cluster administrator, in one of the following ways: libvirt : If the guest OS has the latest version of the VirtIO driver that can interpret incoming data via a Peripheral Component Interconnect (PCI) config register in the emulated device. DHCP: If the guest DHCP client can read the MTU value from the DHCP server response. Note For Windows VMs that do not have a VirtIO driver, you must set the MTU manually by using netsh or a similar tool. This is because the Windows DHCP client does not read the MTU value. 9.3. Connecting a virtual machine to a primary user-defined network You can connect a virtual machine (VM) to a user-defined network (UDN) on the VM's primary interface by using the Red Hat OpenShift Service on AWS web console or the CLI. The primary user-defined network replaces the default pod network in your specified namespace. Unlike the pod network, you can define the primary UDN per project, where each project can use its specific subnet and topology. OpenShift Virtualization supports the namespace-scoped UserDefinedNetwork and the cluster-scoped ClusterUserDefinedNetwork custom resource definitions (CRD). Cluster administrators can configure a primary UserDefinedNetwork CRD to create a tenant network that isolates the tenant namespace from other namespaces without requiring network policies. Additionally, cluster administrators can use the ClusterUserDefinedNetwork CRD to create a shared OVN network across multiple namespaces. 
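Before you create a new network, it can be useful to list the user-defined networks that already exist for a project. This is a minimal sketch, assuming a project named my-namespace ; the resource names correspond to the UserDefinedNetwork and ClusterUserDefinedNetwork CRDs described above:
USD oc get userdefinednetwork -n my-namespace
USD oc get clusteruserdefinednetwork
The output lists any existing namespace-scoped or cluster-scoped user-defined networks, which helps you decide whether a primary network already serves the namespaces you plan to use.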
Note You must add the k8s.ovn.org/primary-user-defined-network label when you create a namespace that is to be used with user-defined networks. With the layer 2 topology, OVN-Kubernetes creates an overlay network between nodes. You can use this overlay network to connect VMs on different nodes without having to configure any additional physical networking infrastructure. The layer 2 topology enables seamless migration of VMs without the need for Network Address Translation (NAT) because persistent IP addresses are preserved across cluster nodes during live migration. You must consider the following limitations before implementing a primary UDN: You cannot use the virtctl ssh command to configure SSH access to a VM. You cannot use the oc port-forward command to forward ports to a VM. You cannot use headless services to access a VM. You cannot define readiness and liveness probes to configure VM health checks. Note OpenShift Virtualization currently does not support secondary user-defined networks. 9.3.1. Creating a primary user-defined network by using the web console You can use the Red Hat OpenShift Service on AWS web console to create a primary namespace-scoped UserDefinedNetwork or a cluster-scoped ClusterUserDefinedNetwork CRD. The UDN serves as the default primary network for pods and VMs that you create in namespaces associated with the network. 9.3.1.1. Creating a namespace for user-defined networks by using the web console You can create a namespace to be used with primary user-defined networks (UDNs) by using the Red Hat OpenShift Service on AWS web console. Prerequisites Log in to the Red Hat OpenShift Service on AWS web console as a user with cluster-admin permissions. Procedure From the Administrator perspective, click Administration Namespaces . Click Create Namespace . In the Name field, specify a name for the namespace. The name must consist of lower case alphanumeric characters or '-', and must start and end with an alphanumeric character. In the Labels field, add the k8s.ovn.org/primary-user-defined-network label. Optional: If the namespace is to be used with an existing cluster-scoped UDN, add the appropriate labels as defined in the spec.namespaceSelector field in the ClusterUserDefinedNetwork custom resource. Optional: Specify a default network policy. Click Create to create the namespace. 9.3.1.2. Creating a primary namespace-scoped user-defined network by using the web console You can create an isolated primary network in your project namespace by creating a UserDefinedNetwork custom resource in the Red Hat OpenShift Service on AWS web console. Prerequisites You have access to the Red Hat OpenShift Service on AWS web console as a user with cluster-admin permissions. You have created a namespace and applied the k8s.ovn.org/primary-user-defined-network label. For more information, see "Creating a namespace for user-defined networks by using the web console". Procedure From the Administrator perspective, click Networking UserDefinedNetworks . Click Create UserDefinedNetwork . From the Project name list, select the namespace that you previously created. Specify a value in the Subnet field. Click Create . The user-defined network serves as the default primary network for pods and virtual machines that you create in this namespace. 9.3.1.3. 
Creating a primary cluster-scoped user-defined network by using the web console You can connect multiple namespaces to the same primary user-defined network (UDN) by creating a ClusterUserDefinedNetwork custom resource in the Red Hat OpenShift Service on AWS web console. Prerequisites You have access to the Red Hat OpenShift Service on AWS web console as a user with cluster-admin permissions. Procedure From the Administrator perspective, click Networking UserDefinedNetworks . From the Create list, select ClusterUserDefinedNetwork . In the Name field, specify a name for the cluster-scoped UDN. Specify a value in the Subnet field. In the Project(s) Match Labels field, add the appropriate labels to select namespaces that the cluster UDN applies to. Click Create . The cluster-scoped UDN serves as the default primary network for pods and virtual machines located in namespaces that contain the labels that you specified in step 5. steps Create namespaces that are associated with the cluster-scoped UDN 9.3.2. Creating a primary user-defined network by using the CLI You can create a primary UserDefinedNetwork or ClusterUserDefinedNetwork CRD by using the CLI. 9.3.2.1. Creating a namespace for user-defined networks by using the CLI You can create a namespace to be used with primary user-defined networks (UDNs) by using the CLI. Prerequisites You have access to the cluster as a user with cluster-admin permissions. You have installed the OpenShift CLI ( oc ). Procedure Create a Namespace object as a YAML file similar to the following example: apiVersion: v1 kind: Namespace metadata: name: udn_namespace labels: k8s.ovn.org/primary-user-defined-network: "" 1 # ... 1 This label is required for the namespace to be associated with a UDN. If the namespace is to be used with an existing cluster UDN, you must also add the appropriate labels that are defined in the spec.namespaceSelector field of the ClusterUserDefinedNetwork custom resource. Apply the Namespace manifest by running the following command: oc apply -f <filename>.yaml 9.3.2.2. Creating a primary namespace-scoped user-defined network by using the CLI You can create an isolated primary network in your project namespace by using the CLI. You must use the OVN-Kubernetes layer 2 topology and enable persistent IP address allocation in the user-defined network (UDN) configuration to ensure VM live migration support. Prerequisites You have installed the OpenShift CLI ( oc ). You have created a namespace and applied the k8s.ovn.org/primary-user-defined-network label. Procedure Create a UserDefinedNetwork object to specify the custom network configuration: Example UserDefinedNetwork manifest apiVersion: k8s.ovn.org/v1 kind: UserDefinedNetwork metadata: name: udn-l2-net 1 namespace: my-namespace 2 spec: topology: Layer2 3 layer2: role: Primary 4 subnets: - "10.0.0.0/24" - "2001:db8::/60" ipam: lifecycle: Persistent 5 1 Specifies the name of the UserDefinedNetwork custom resource. 2 Specifies the namespace in which the VM is located. The namespace must have the k8s.ovn.org/primary-user-defined-network label. The namespace must not be default , an openshift-* namespace, or match any global namespaces that are defined by the Cluster Network Operator (CNO). 3 Specifies the topological configuration of the network. The required value is Layer2 . A Layer2 topology creates a logical switch that is shared by all nodes. 4 Specifies if the UDN is primary or secondary. OpenShift Virtualization only supports the Primary role. 
This means that the UDN acts as the primary network for the VM and all default traffic passes through this network. 5 Specifies that virtual workloads have consistent IP addresses across reboots and migration. The spec.layer2.subnets field is required when ipam.lifecycle: Persistent is specified. Apply the UserDefinedNetwork manifest by running the following command: USD oc apply -f --validate=true <filename>.yaml 9.3.2.3. Creating a primary cluster-scoped user-defined network by using the CLI You can connect multiple namespaces to the same primary user-defined network (UDN) to achieve native tenant isolation by using the CLI. Prerequisites You have access to the cluster as a user with cluster-admin privileges. You have installed the OpenShift CLI ( oc ). Procedure Create a ClusterUserDefinedNetwork object to specify the custom network configuration: Example ClusterUserDefinedNetwork manifest kind: ClusterUserDefinedNetwork metadata: name: cudn-l2-net 1 spec: namespaceSelector: 2 matchExpressions: 3 - key: kubernetes.io/metadata.name operator: In 4 values: ["red-namespace", "blue-namespace"] network: topology: Layer2 5 layer2: role: Primary 6 ipam: lifecycle: Persistent subnets: - 203.203.0.0/16 1 Specifies the name of the ClusterUserDefinedNetwork custom resource. 2 Specifies the set of namespaces that the cluster UDN applies to. The namespace selector must not point to default , an openshift-* namespace, or any global namespaces that are defined by the Cluster Network Operator (CNO). 3 Specifies the type of selector. In this example, the matchExpressions selector selects objects that have the label kubernetes.io/metadata.name with the value red-namespace or blue-namespace . 4 Specifies the type of operator. Possible values are In , NotIn , and Exists . 5 Specifies the topological configuration of the network. The required value is Layer2 . A Layer2 topology creates a logical switch that is shared by all nodes. 6 Specifies if the UDN is primary or secondary. OpenShift Virtualization only supports the Primary role. This means that the UDN acts as the primary network for the VM and all default traffic passes through this network. Apply the ClusterUserDefinedNetwork manifest by running the following command: USD oc apply -f --validate=true <filename>.yaml steps Create namespaces that are associated with the cluster-scoped UDN 9.3.3. Attaching a virtual machine to the primary user-defined network by using the CLI You can connect a virtual machine (VM) to the primary user-defined network (UDN) by requesting the pod network attachment, and configuring the interface binding. Prerequisites You have installed the OpenShift CLI ( oc ). Procedure Edit the VirtualMachine manifest to add the UDN interface details, as in the following example: Example VirtualMachine manifest apiVersion: kubevirt.io/v1 kind: VirtualMachine metadata: name: example-vm namespace: my-namespace 1 spec: template: spec: domain: devices: interfaces: - name: udn-l2-net 2 binding: name: l2bridge 3 # ... networks: - name: udn-l2-net 4 pod: {} # ... 1 The namespace in which the VM is located. This value must match the namespace in which the UDN is defined. 2 The name of the user-defined network interface. 3 The name of the binding plugin that is used to connect the interface to the VM. The required value is l2bridge . 4 The name of the network. This must match the value of the spec.template.spec.domain.devices.interfaces.name field. 
Apply the VirtualMachine manifest by running the following command: USD oc apply -f <filename>.yaml 9.4. Exposing a virtual machine by using a service You can expose a virtual machine within the cluster or outside the cluster by creating a Service object. 9.4.1. About services A Kubernetes service exposes network access for clients to an application running on a set of pods. Services offer abstraction, load balancing, and, in the case of the NodePort and LoadBalancer types, exposure to the outside world. ClusterIP Exposes the service on an internal IP address and as a DNS name to other applications within the cluster. A single service can map to multiple virtual machines. When a client tries to connect to the service, the client's request is load balanced among available backends. ClusterIP is the default service type. NodePort Exposes the service on the same port of each selected node in the cluster. NodePort makes a port accessible from outside the cluster, as long as the node itself is externally accessible to the client. LoadBalancer Creates an external load balancer in the current cloud (if supported) and assigns a fixed, external IP address to the service. Note For Red Hat OpenShift Service on AWS, you must use externalTrafficPolicy: Cluster when configuring a load-balancing service, to minimize the network downtime during live migration. 9.4.2. Dual-stack support If IPv4 and IPv6 dual-stack networking is enabled for your cluster, you can create a service that uses IPv4, IPv6, or both, by defining the spec.ipFamilyPolicy and the spec.ipFamilies fields in the Service object. The spec.ipFamilyPolicy field can be set to one of the following values: SingleStack The control plane assigns a cluster IP address for the service based on the first configured service cluster IP range. PreferDualStack The control plane assigns both IPv4 and IPv6 cluster IP addresses for the service on clusters that have dual-stack configured. RequireDualStack This option fails for clusters that do not have dual-stack networking enabled. For clusters that have dual-stack configured, the behavior is the same as when the value is set to PreferDualStack . The control plane allocates cluster IP addresses from both IPv4 and IPv6 address ranges. You can define which IP family to use for single-stack or define the order of IP families for dual-stack by setting the spec.ipFamilies field to one of the following array values: [IPv4] [IPv6] [IPv4, IPv6] [IPv6, IPv4] 9.4.3. Creating a service by using the command line You can create a service and associate it with a virtual machine (VM) by using the command line. Prerequisites You configured the cluster network to support the service. Procedure Edit the VirtualMachine manifest to add the label for service creation: apiVersion: kubevirt.io/v1 kind: VirtualMachine metadata: name: example-vm namespace: example-namespace spec: runStrategy: Halted template: metadata: labels: special: key 1 # ... 1 Add special: key to the spec.template.metadata.labels stanza. Note Labels on a virtual machine are passed through to the pod. The special: key label must match the label in the spec.selector attribute of the Service manifest. Save the VirtualMachine manifest file to apply your changes. Create a Service manifest to expose the VM: apiVersion: v1 kind: Service metadata: name: example-service namespace: example-namespace spec: # ... 
selector: special: key 1 type: NodePort 2 ports: 3 protocol: TCP port: 80 targetPort: 9376 nodePort: 30000 1 Specify the label that you added to the spec.template.metadata.labels stanza of the VirtualMachine manifest. 2 Specify ClusterIP , NodePort , or LoadBalancer . 3 Specifies a collection of network ports and protocols that you want to expose from the virtual machine. Save the Service manifest file. Create the service by running the following command: USD oc create -f example-service.yaml Restart the VM to apply the changes. Verification Query the Service object to verify that it is available: USD oc get service -n example-namespace 9.5. Connecting a virtual machine to an OVN-Kubernetes secondary network You can connect a VM to an Open Virtual Network (OVN)-Kubernetes secondary network. OpenShift Virtualization supports the layer2 topology for OVN-Kubernetes. A layer2 topology connects workloads by a cluster-wide logical switch. The OVN-Kubernetes Container Network Interface (CNI) plugin uses the Geneve (Generic Network Virtualization Encapsulation) protocol to create an overlay network between nodes. You can use this overlay network to connect VMs on different nodes, without having to configure any additional physical networking infrastructure. To configure an OVN-Kubernetes secondary network and attach a VM to that network, perform the following steps: Configure an OVN-Kubernetes secondary network by creating a network attachment definition (NAD). Connect the VM to the OVN-Kubernetes secondary network by adding the network details to the VM specification. 9.5.1. Creating an OVN-Kubernetes NAD You can create an OVN-Kubernetes network attachment definition (NAD) by using the Red Hat OpenShift Service on AWS web console or the CLI. Note Configuring IP address management (IPAM) by specifying the spec.config.ipam.subnet attribute in a network attachment definition for virtual machines is not supported. 9.5.1.1. Creating a NAD for layer 2 topology using the CLI You can create a network attachment definition (NAD) which describes how to attach a pod to the layer 2 overlay network. Prerequisites You have access to the cluster as a user with cluster-admin privileges. You have installed the OpenShift CLI ( oc ). Procedure Create a NetworkAttachmentDefinition object: apiVersion: k8s.cni.cncf.io/v1 kind: NetworkAttachmentDefinition metadata: name: l2-network namespace: my-namespace spec: config: |- { "cniVersion": "0.3.1", 1 "name": "my-namespace-l2-network", 2 "type": "ovn-k8s-cni-overlay", 3 "topology":"layer2", 4 "mtu": 1300, 5 "netAttachDefName": "my-namespace/l2-network" 6 } 1 The CNI specification version. The required value is 0.3.1 . 2 The name of the network. This attribute is not namespaced. For example, you can have a network named l2-network referenced from two different NetworkAttachmentDefinition objects that exist in two different namespaces. This feature is useful to connect VMs in different namespaces. 3 The name of the CNI plug-in to be configured. The required value is ovn-k8s-cni-overlay . 4 The topological configuration for the network. The required value is layer2 . 5 Optional: The maximum transmission unit (MTU) value. The default value is automatically set by the kernel. 6 The value of the namespace and name fields in the metadata stanza of the NetworkAttachmentDefinition object. Note The above example configures a cluster-wide overlay without a subnet defined. This means that the logical switch implementing the network only provides layer 2 communication. 
You must configure an IP address when you create the virtual machine by either setting a static IP address or by deploying a DHCP server on the network for a dynamic IP address. Apply the manifest: USD oc apply -f <filename>.yaml 9.5.1.2. Creating a NAD for layer 2 topology by using the web console You can create a network attachment definition (NAD) that describes how to attach a pod to the layer 2 overlay network. Prerequisites You have access to the cluster as a user with cluster-admin privileges. Procedure Go to Networking NetworkAttachmentDefinitions in the web console. Click Create Network Attachment Definition . The network attachment definition must be in the same namespace as the pod or virtual machine using it. Enter a unique Name and optional Description . Select OVN Kubernetes L2 overlay network from the Network Type list. Click Create . 9.5.2. Attaching a virtual machine to the OVN-Kubernetes secondary network You can attach a virtual machine (VM) to the OVN-Kubernetes secondary network interface by using the Red Hat OpenShift Service on AWS web console or the CLI. 9.5.2.1. Attaching a virtual machine to an OVN-Kubernetes secondary network using the CLI You can connect a virtual machine (VM) to the OVN-Kubernetes secondary network by including the network details in the VM configuration. Prerequisites You have access to the cluster as a user with cluster-admin privileges. You have installed the OpenShift CLI ( oc ). Procedure Edit the VirtualMachine manifest to add the OVN-Kubernetes secondary network interface details, as in the following example: apiVersion: kubevirt.io/v1 kind: VirtualMachine metadata: name: vm-server spec: runStrategy: Always template: spec: domain: devices: interfaces: - name: secondary 1 bridge: {} resources: requests: memory: 1024Mi networks: - name: secondary 2 multus: networkName: <nad_name> 3 nodeSelector: node-role.kubernetes.io/worker: '' 4 # ... 1 The name of the OVN-Kubernetes secondary interface. 2 The name of the network. This must match the value of the spec.template.spec.domain.devices.interfaces.name field. 3 The name of the NetworkAttachmentDefinition object. 4 Specifies the nodes on which the VM can be scheduled. The recommended node selector value is node-role.kubernetes.io/worker: '' . Apply the VirtualMachine manifest: USD oc apply -f <filename>.yaml Optional: If you edited a running virtual machine, you must restart it for the changes to take effect. 9.6. Hot plugging secondary network interfaces You can add or remove secondary network interfaces without stopping your virtual machine (VM). OpenShift Virtualization supports hot plugging and hot unplugging for secondary interfaces that use bridge binding and the VirtIO device driver. 9.6.1. VirtIO limitations Each VirtIO interface uses one of the limited Peripheral Component Interconnect (PCI) slots in the VM. There are a total of 32 slots available. The PCI slots are also used by other devices and must be reserved in advance; therefore, slots might not be available on demand. OpenShift Virtualization reserves up to four slots for hot plugging interfaces. This includes any existing plugged network interfaces. For example, if your VM has two existing plugged interfaces, you can hot plug two more network interfaces. Note The actual number of slots available for hot plugging also depends on the machine type. For example, the default PCI topology for the q35 machine type supports hot plugging one additional PCIe device. 
For more information on PCI topology and hot plug support, see the libvirt documentation . If you restart the VM after hot plugging an interface, that interface becomes part of the standard network interfaces. 9.6.2. Hot plugging a secondary network interface by using the CLI Hot plug a secondary network interface to a virtual machine (VM) while the VM is running. Prerequisites A network attachment definition is configured in the same namespace as your VM. You have installed the virtctl tool. You have installed the OpenShift CLI ( oc ). Procedure If the VM to which you want to hot plug the network interface is not running, start it by using the following command: USD virtctl start <vm_name> -n <namespace> Use the following command to add the new network interface to the running VM. Editing the VM specification adds the new network interface to the VM and virtual machine instance (VMI) configuration but does not attach it to the running VM. USD oc edit vm <vm_name> Example VM configuration apiVersion: kubevirt.io/v1 kind: VirtualMachine metadata: name: vm-fedora template: spec: domain: devices: interfaces: - name: defaultnetwork masquerade: {} # new interface - name: <secondary_nic> 1 bridge: {} networks: - name: defaultnetwork pod: {} # new network - name: <secondary_nic> 2 multus: networkName: <nad_name> 3 # ... 1 Specifies the name of the new network interface. 2 Specifies the name of the network. This must be the same as the name of the new network interface that you defined in the template.spec.domain.devices.interfaces list. 3 Specifies the name of the NetworkAttachmentDefinition object. To attach the network interface to the running VM, live migrate the VM by running the following command: USD virtctl migrate <vm_name> Verification Verify that the VM live migration is successful by using the following command: USD oc get VirtualMachineInstanceMigration -w Example output NAME PHASE VMI kubevirt-migrate-vm-lj62q Scheduling vm-fedora kubevirt-migrate-vm-lj62q Scheduled vm-fedora kubevirt-migrate-vm-lj62q PreparingTarget vm-fedora kubevirt-migrate-vm-lj62q TargetReady vm-fedora kubevirt-migrate-vm-lj62q Running vm-fedora kubevirt-migrate-vm-lj62q Succeeded vm-fedora Verify that the new interface is added to the VM by checking the VMI status: USD oc get vmi vm-fedora -ojsonpath="{ @.status.interfaces }" Example output [ { "infoSource": "domain, guest-agent", "interfaceName": "eth0", "ipAddress": "10.130.0.195", "ipAddresses": [ "10.130.0.195", "fd02:0:0:3::43c" ], "mac": "52:54:00:0e:ab:25", "name": "default", "queueCount": 1 }, { "infoSource": "domain, guest-agent, multus-status", "interfaceName": "eth1", "mac": "02:d8:b8:00:00:2a", "name": "bridge-interface", 1 "queueCount": 1 } ] 1 The hot plugged interface appears in the VMI status. 9.6.3. Hot unplugging a secondary network interface by using the CLI You can remove a secondary network interface from a running virtual machine (VM). Note Hot unplugging is not supported for Single Root I/O Virtualization (SR-IOV) interfaces. Prerequisites Your VM must be running. The VM must be created on a cluster running OpenShift Virtualization 4.14 or later. The VM must have a bridge network interface attached. Procedure Edit the VM specification to hot unplug a secondary network interface. Setting the interface state to absent detaches the network interface from the guest, but the interface still exists in the pod. 
USD oc edit vm <vm_name> Example VM configuration apiVersion: kubevirt.io/v1 kind: VirtualMachine metadata: name: vm-fedora template: spec: domain: devices: interfaces: - name: defaultnetwork masquerade: {} # set the interface state to absent - name: <secondary_nic> state: absent 1 bridge: {} networks: - name: defaultnetwork pod: {} - name: <secondary_nic> multus: networkName: <nad_name> # ... 1 Set the interface state to absent to detach it from the running VM. Removing the interface details from the VM specification does not hot unplug the secondary network interface. Remove the interface from the pod by migrating the VM: USD virtctl migrate <vm_name> 9.6.4. Additional resources Installing virtctl 9.7. Connecting a virtual machine to a service mesh OpenShift Virtualization is now integrated with OpenShift Service Mesh. You can monitor, visualize, and control traffic between pods that run virtual machine workloads on the default pod network with IPv4. 9.7.1. Adding a virtual machine to a service mesh To add a virtual machine (VM) workload to a service mesh, enable automatic sidecar injection in the VM configuration file by setting the sidecar.istio.io/inject annotation to true . Then expose your VM as a service to view your application in the mesh. Important To avoid port conflicts, do not use ports used by the Istio sidecar proxy. These include ports 15000, 15001, 15006, 15008, 15020, 15021, and 15090. Prerequisites You installed the Service Mesh Operators. You created the Service Mesh control plane. You added the VM project to the Service Mesh member roll. Procedure Edit the VM configuration file to add the sidecar.istio.io/inject: "true" annotation: Example configuration file apiVersion: kubevirt.io/v1 kind: VirtualMachine metadata: labels: kubevirt.io/vm: vm-istio name: vm-istio spec: runStrategy: Always template: metadata: labels: kubevirt.io/vm: vm-istio app: vm-istio 1 annotations: sidecar.istio.io/inject: "true" 2 spec: domain: devices: interfaces: - name: default masquerade: {} 3 disks: - disk: bus: virtio name: containerdisk - disk: bus: virtio name: cloudinitdisk resources: requests: memory: 1024M networks: - name: default pod: {} terminationGracePeriodSeconds: 180 volumes: - containerDisk: image: registry:5000/kubevirt/fedora-cloud-container-disk-demo:devel name: containerdisk 1 The key/value pair (label) that must be matched to the service selector attribute. 2 The annotation to enable automatic sidecar injection. 3 The binding method (masquerade mode) for use with the default pod network. Apply the VM configuration: USD oc apply -f <vm_name>.yaml 1 1 The name of the virtual machine YAML file. Create a Service object to expose your VM to the service mesh. apiVersion: v1 kind: Service metadata: name: vm-istio spec: selector: app: vm-istio 1 ports: - port: 8080 name: http protocol: TCP 1 The service selector that determines the set of pods targeted by a service. This attribute corresponds to the spec.metadata.labels field in the VM configuration file. In the above example, the Service object named vm-istio targets TCP port 8080 on any pod with the label app=vm-istio . Create the service: USD oc create -f <service_name>.yaml 1 1 The name of the service YAML file. 9.7.2. Additional resources Installing the Service Mesh Operators Creating the Service Mesh control plane Adding projects to the Service Mesh member roll 9.8. Configuring a dedicated network for live migration You can configure a dedicated secondary network for live migration. 
A dedicated network minimizes the effects of network saturation on tenant workloads during live migration. 9.8.1. Configuring a dedicated secondary network for live migration To configure a dedicated secondary network for live migration, you must first create a bridge network attachment definition (NAD) by using the CLI. Then, you add the name of the NetworkAttachmentDefinition object to the HyperConverged custom resource (CR). Prerequisites You installed the OpenShift CLI ( oc ). You logged in to the cluster as a user with the cluster-admin role. Each node has at least two Network Interface Cards (NICs). The NICs for live migration are connected to the same VLAN. Procedure Create a NetworkAttachmentDefinition manifest according to the following example: Example configuration file apiVersion: "k8s.cni.cncf.io/v1" kind: NetworkAttachmentDefinition metadata: name: my-secondary-network 1 namespace: openshift-cnv spec: config: '{ "cniVersion": "0.3.1", "name": "migration-bridge", "type": "macvlan", "master": "eth1", 2 "mode": "bridge", "ipam": { "type": "whereabouts", 3 "range": "10.200.5.0/24" 4 } }' 1 Specify the name of the NetworkAttachmentDefinition object. 2 Specify the name of the NIC to be used for live migration. 3 Specify the name of the CNI plugin that provides the network for the NAD. 4 Specify an IP address range for the secondary network. This range must not overlap the IP addresses of the main network. Open the HyperConverged CR in your default editor by running the following command: USD oc edit hyperconverged kubevirt-hyperconverged -n openshift-cnv Add the name of the NetworkAttachmentDefinition object to the spec.liveMigrationConfig stanza of the HyperConverged CR: Example HyperConverged manifest apiVersion: hco.kubevirt.io/v1beta1 kind: HyperConverged metadata: name: kubevirt-hyperconverged namespace: openshift-cnv spec: liveMigrationConfig: completionTimeoutPerGiB: 800 network: <network> 1 parallelMigrationsPerCluster: 5 parallelOutboundMigrationsPerNode: 2 progressTimeout: 150 # ... 1 Specify the name of the Multus NetworkAttachmentDefinition object to be used for live migrations. Save your changes and exit the editor. The virt-handler pods restart and connect to the secondary network. Verification When the node that the virtual machine runs on is placed into maintenance mode, the VM automatically migrates to another node in the cluster. You can verify that the migration occurred over the secondary network and not the default pod network by checking the target IP address in the virtual machine instance (VMI) metadata. USD oc get vmi <vmi_name> -o jsonpath='{.status.migrationState.targetNodeAddress}' 9.8.2. Selecting a dedicated network by using the web console You can select a dedicated network for live migration by using the Red Hat OpenShift Service on AWS web console. Prerequisites You configured a Multus network for live migration. You created a network attachment definition for the network. Procedure Navigate to Virtualization > Overview in the Red Hat OpenShift Service on AWS web console. Click the Settings tab and then click Live migration . Select the network from the Live migration network list. 9.8.3. Additional resources Configuring live migration limits and timeouts 9.9. Configuring and viewing IP addresses You can configure an IP address when you create a virtual machine (VM). The IP address is provisioned with cloud-init. You can view the IP address of a VM by using the Red Hat OpenShift Service on AWS web console or the command line. 
The network information is collected by the QEMU guest agent. 9.9.1. Configuring IP addresses for virtual machines You can configure a static IP address when you create a virtual machine (VM) by using the web console or the command line. You can configure a dynamic IP address when you create a VM by using the command line. The IP address is provisioned with cloud-init. 9.9.1.1. Configuring an IP address when creating a virtual machine by using the command line You can configure a static or dynamic IP address when you create a virtual machine (VM). The IP address is provisioned with cloud-init. Note If the VM is connected to the pod network, the pod network interface is the default route unless you update it. Prerequisites The virtual machine is connected to a secondary network. You have a DHCP server available on the secondary network to configure a dynamic IP for the virtual machine. Procedure Edit the spec.template.spec.volumes.cloudInitNoCloud.networkData stanza of the virtual machine configuration: To configure a dynamic IP address, specify the interface name and enable DHCP: kind: VirtualMachine spec: # ... template: # ... spec: volumes: - cloudInitNoCloud: networkData: | version: 2 ethernets: eth1: 1 dhcp4: true 1 Specify the interface name. To configure a static IP, specify the interface name and the IP address: kind: VirtualMachine spec: # ... template: # ... spec: volumes: - cloudInitNoCloud: networkData: | version: 2 ethernets: eth1: 1 addresses: - 10.10.10.14/24 2 1 Specify the interface name. 2 Specify the static IP address. 9.9.2. Viewing IP addresses of virtual machines You can view the IP address of a VM by using the Red Hat OpenShift Service on AWS web console or the command line. The network information is collected by the QEMU guest agent. 9.9.2.1. Viewing the IP address of a virtual machine by using the web console You can view the IP address of a virtual machine (VM) by using the Red Hat OpenShift Service on AWS web console. Note You must install the QEMU guest agent on a VM to view the IP address of a secondary network interface. A pod network interface does not require the QEMU guest agent. Procedure In the Red Hat OpenShift Service on AWS console, click Virtualization VirtualMachines from the side menu. Select a VM to open the VirtualMachine details page. Click the Details tab to view the IP address. 9.9.2.2. Viewing the IP address of a virtual machine by using the command line You can view the IP address of a virtual machine (VM) by using the command line. Note You must install the QEMU guest agent on a VM to view the IP address of a secondary network interface. A pod network interface does not require the QEMU guest agent. Procedure Obtain the virtual machine instance configuration by running the following command: USD oc describe vmi <vmi_name> Example output # ... Interfaces: Interface Name: eth0 Ip Address: 10.244.0.37/24 Ip Addresses: 10.244.0.37/24 fe80::858:aff:fef4:25/64 Mac: 0a:58:0a:f4:00:25 Name: default Interface Name: v2 Ip Address: 1.1.1.7/24 Ip Addresses: 1.1.1.7/24 fe80::f4d9:70ff:fe13:9089/64 Mac: f6:d9:70:13:90:89 Interface Name: v1 Ip Address: 1.1.1.1/24 Ip Addresses: 1.1.1.1/24 1.1.1.2/24 1.1.1.4/24 2001:de7:0:f101::1/64 2001:db8:0:f101::1/64 fe80::1420:84ff:fe10:17aa/64 Mac: 16:20:84:10:17:aa 9.9.3. Additional resources Installing the QEMU guest agent 9.10. Managing MAC address pools for network interfaces The KubeMacPool component allocates MAC addresses for virtual machine (VM) network interfaces from a shared MAC address pool. 
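Each allocation is recorded in the VMI status, so you can check which MAC address an interface received with a jsonpath query such as the following (a sketch; the VMI name is a placeholder):
$ oc get vmi <vmi_name> -o jsonpath='{.status.interfaces[*].mac}'
Because every address comes from the same shared pool, allocations are coordinated across the cluster.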
This ensures that each network interface is assigned a unique MAC address. A virtual machine instance created from that VM retains the assigned MAC address across reboots. Note KubeMacPool does not handle virtual machine instances created independently from a virtual machine. 9.10.1. Managing KubeMacPool by using the command line You can disable and re-enable KubeMacPool by using the command line. KubeMacPool is enabled by default. Procedure To disable KubeMacPool in two namespaces, run the following command: USD oc label namespace <namespace1> <namespace2> mutatevirtualmachines.kubemacpool.io=ignore To re-enable KubeMacPool in two namespaces, run the following command: USD oc label namespace <namespace1> <namespace2> mutatevirtualmachines.kubemacpool.io-
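To check whether KubeMacPool is currently disabled for a namespace, inspect the label that the commands above set or remove. This is a sketch; the namespace name is a placeholder:
$ oc get namespace <namespace1> --show-labels
If the output includes mutatevirtualmachines.kubemacpool.io=ignore, KubeMacPool is disabled for that namespace; if the label is absent, KubeMacPool manages MAC addresses for it.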
[ "apiVersion: kubevirt.io/v1 kind: VirtualMachine metadata: name: example-vm spec: template: spec: domain: devices: interfaces: - name: default masquerade: {} 1 ports: 2 - port: 80 networks: - name: default pod: {}", "oc create -f <vm-name>.yaml", "apiVersion: kubevirt.io/v1 kind: VirtualMachine metadata: name: example-vm-ipv6 spec: template: spec: domain: devices: interfaces: - name: default masquerade: {} 1 ports: - port: 80 2 networks: - name: default pod: {} volumes: - cloudInitNoCloud: networkData: | version: 2 ethernets: eth0: dhcp4: true addresses: [ fd10:0:2::2/120 ] 3 gateway6: fd10:0:2::1 4", "oc create -f example-vm-ipv6.yaml", "oc get vmi <vmi-name> -o jsonpath=\"{.status.interfaces[*].ipAddresses}\"", "apiVersion: v1 kind: Namespace metadata: name: udn_namespace labels: k8s.ovn.org/primary-user-defined-network: \"\" 1", "apply -f <filename>.yaml", "apiVersion: k8s.ovn.org/v1 kind: UserDefinedNetwork metadata: name: udn-l2-net 1 namespace: my-namespace 2 spec: topology: Layer2 3 layer2: role: Primary 4 subnets: - \"10.0.0.0/24\" - \"2001:db8::/60\" ipam: lifecycle: Persistent 5", "oc apply -f --validate=true <filename>.yaml", "kind: ClusterUserDefinedNetwork metadata: name: cudn-l2-net 1 spec: namespaceSelector: 2 matchExpressions: 3 - key: kubernetes.io/metadata.name operator: In 4 values: [\"red-namespace\", \"blue-namespace\"] network: topology: Layer2 5 layer2: role: Primary 6 ipam: lifecycle: Persistent subnets: - 203.203.0.0/16", "oc apply -f --validate=true <filename>.yaml", "apiVersion: kubevirt.io/v1 kind: VirtualMachine metadata: name: example-vm namespace: my-namespace 1 spec: template: spec: domain: devices: interfaces: - name: udn-l2-net 2 binding: name: l2bridge 3 networks: - name: udn-l2-net 4 pod: {}", "oc apply -f <filename>.yaml", "apiVersion: kubevirt.io/v1 kind: VirtualMachine metadata: name: example-vm namespace: example-namespace spec: runStrategy: Halted template: metadata: labels: special: key 1", "apiVersion: v1 kind: Service metadata: name: example-service namespace: example-namespace spec: selector: special: key 1 type: NodePort 2 ports: 3 protocol: TCP port: 80 targetPort: 9376 nodePort: 30000", "oc create -f example-service.yaml", "oc get service -n example-namespace", "apiVersion: k8s.cni.cncf.io/v1 kind: NetworkAttachmentDefinition metadata: name: l2-network namespace: my-namespace spec: config: |- { \"cniVersion\": \"0.3.1\", 1 \"name\": \"my-namespace-l2-network\", 2 \"type\": \"ovn-k8s-cni-overlay\", 3 \"topology\":\"layer2\", 4 \"mtu\": 1300, 5 \"netAttachDefName\": \"my-namespace/l2-network\" 6 }", "oc apply -f <filename>.yaml", "apiVersion: kubevirt.io/v1 kind: VirtualMachine metadata: name: vm-server spec: runStrategy: Always template: spec: domain: devices: interfaces: - name: secondary 1 bridge: {} resources: requests: memory: 1024Mi networks: - name: secondary 2 multus: networkName: <nad_name> 3 nodeSelector: node-role.kubernetes.io/worker: '' 4", "oc apply -f <filename>.yaml", "virtctl start <vm_name> -n <namespace>", "oc edit vm <vm_name>", "apiVersion: kubevirt.io/v1 kind: VirtualMachine metadata: name: vm-fedora template: spec: domain: devices: interfaces: - name: defaultnetwork masquerade: {} # new interface - name: <secondary_nic> 1 bridge: {} networks: - name: defaultnetwork pod: {} # new network - name: <secondary_nic> 2 multus: networkName: <nad_name> 3", "virtctl migrate <vm_name>", "oc get VirtualMachineInstanceMigration -w", "NAME PHASE VMI kubevirt-migrate-vm-lj62q Scheduling vm-fedora kubevirt-migrate-vm-lj62q Scheduled 
vm-fedora kubevirt-migrate-vm-lj62q PreparingTarget vm-fedora kubevirt-migrate-vm-lj62q TargetReady vm-fedora kubevirt-migrate-vm-lj62q Running vm-fedora kubevirt-migrate-vm-lj62q Succeeded vm-fedora", "oc get vmi vm-fedora -ojsonpath=\"{ @.status.interfaces }\"", "[ { \"infoSource\": \"domain, guest-agent\", \"interfaceName\": \"eth0\", \"ipAddress\": \"10.130.0.195\", \"ipAddresses\": [ \"10.130.0.195\", \"fd02:0:0:3::43c\" ], \"mac\": \"52:54:00:0e:ab:25\", \"name\": \"default\", \"queueCount\": 1 }, { \"infoSource\": \"domain, guest-agent, multus-status\", \"interfaceName\": \"eth1\", \"mac\": \"02:d8:b8:00:00:2a\", \"name\": \"bridge-interface\", 1 \"queueCount\": 1 } ]", "oc edit vm <vm_name>", "apiVersion: kubevirt.io/v1 kind: VirtualMachine metadata: name: vm-fedora template: spec: domain: devices: interfaces: - name: defaultnetwork masquerade: {} # set the interface state to absent - name: <secondary_nic> state: absent 1 bridge: {} networks: - name: defaultnetwork pod: {} - name: <secondary_nic> multus: networkName: <nad_name>", "virtctl migrate <vm_name>", "apiVersion: kubevirt.io/v1 kind: VirtualMachine metadata: labels: kubevirt.io/vm: vm-istio name: vm-istio spec: runStrategy: Always template: metadata: labels: kubevirt.io/vm: vm-istio app: vm-istio 1 annotations: sidecar.istio.io/inject: \"true\" 2 spec: domain: devices: interfaces: - name: default masquerade: {} 3 disks: - disk: bus: virtio name: containerdisk - disk: bus: virtio name: cloudinitdisk resources: requests: memory: 1024M networks: - name: default pod: {} terminationGracePeriodSeconds: 180 volumes: - containerDisk: image: registry:5000/kubevirt/fedora-cloud-container-disk-demo:devel name: containerdisk", "oc apply -f <vm_name>.yaml 1", "apiVersion: v1 kind: Service metadata: name: vm-istio spec: selector: app: vm-istio 1 ports: - port: 8080 name: http protocol: TCP", "oc create -f <service_name>.yaml 1", "apiVersion: \"k8s.cni.cncf.io/v1\" kind: NetworkAttachmentDefinition metadata: name: my-secondary-network 1 namespace: openshift-cnv spec: config: '{ \"cniVersion\": \"0.3.1\", \"name\": \"migration-bridge\", \"type\": \"macvlan\", \"master\": \"eth1\", 2 \"mode\": \"bridge\", \"ipam\": { \"type\": \"whereabouts\", 3 \"range\": \"10.200.5.0/24\" 4 } }'", "oc edit hyperconverged kubevirt-hyperconverged -n openshift-cnv", "apiVersion: hco.kubevirt.io/v1beta1 kind: HyperConverged metadata: name: kubevirt-hyperconverged namespace: openshift-cnv spec: liveMigrationConfig: completionTimeoutPerGiB: 800 network: <network> 1 parallelMigrationsPerCluster: 5 parallelOutboundMigrationsPerNode: 2 progressTimeout: 150", "oc get vmi <vmi_name> -o jsonpath='{.status.migrationState.targetNodeAddress}'", "kind: VirtualMachine spec: template: # spec: volumes: - cloudInitNoCloud: networkData: | version: 2 ethernets: eth1: 1 dhcp4: true", "kind: VirtualMachine spec: template: # spec: volumes: - cloudInitNoCloud: networkData: | version: 2 ethernets: eth1: 1 addresses: - 10.10.10.14/24 2", "oc describe vmi <vmi_name>", "Interfaces: Interface Name: eth0 Ip Address: 10.244.0.37/24 Ip Addresses: 10.244.0.37/24 fe80::858:aff:fef4:25/64 Mac: 0a:58:0a:f4:00:25 Name: default Interface Name: v2 Ip Address: 1.1.1.7/24 Ip Addresses: 1.1.1.7/24 fe80::f4d9:70ff:fe13:9089/64 Mac: f6:d9:70:13:90:89 Interface Name: v1 Ip Address: 1.1.1.1/24 Ip Addresses: 1.1.1.1/24 1.1.1.2/24 1.1.1.4/24 2001:de7:0:f101::1/64 2001:db8:0:f101::1/64 fe80::1420:84ff:fe10:17aa/64 Mac: 16:20:84:10:17:aa", "oc label namespace <namespace1> <namespace2> 
mutatevirtualmachines.kubemacpool.io=ignore", "oc label namespace <namespace1> <namespace2> mutatevirtualmachines.kubemacpool.io-" ]
https://docs.redhat.com/en/documentation/red_hat_openshift_service_on_aws/4/html/virtualization/networking
Chapter 109. KafkaTopicSpec schema reference
Chapter 109. KafkaTopicSpec schema reference Used in: KafkaTopic The schema defines the following properties. topicName (string): The name of the topic. When absent, this defaults to the metadata.name of the topic. It is recommended not to set this unless the topic name is not a valid OpenShift resource name. partitions (integer): The number of partitions the topic should have. This cannot be decreased after topic creation. It can be increased after topic creation, but it is important to understand the consequences that this has, especially for topics with semantic partitioning. When absent, this defaults to the broker configuration for num.partitions. replicas (integer): The number of replicas the topic should have. When absent, this defaults to the broker configuration for default.replication.factor. config (map): The topic configuration.
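A short example can help tie these properties together. The following KafkaTopic manifest is a sketch rather than part of this reference: the topic name, the strimzi.io/cluster label value my-cluster, and the config keys shown are illustrative assumptions, and topicName is left unset so that it defaults to metadata.name as described above.
apiVersion: kafka.strimzi.io/v1beta2
kind: KafkaTopic
metadata:
  name: my-topic
  labels:
    strimzi.io/cluster: my-cluster # the Kafka cluster whose Topic Operator manages this topic
spec:
  partitions: 3 # defaults to the broker num.partitions when absent
  replicas: 3 # defaults to the broker default.replication.factor when absent
  config:
    retention.ms: 604800000
    segment.bytes: 1073741824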
null
https://docs.redhat.com/en/documentation/red_hat_streams_for_apache_kafka/2.9/html/streams_for_apache_kafka_api_reference/type-kafkatopicspec-reference
Chapter 1. Getting started using the RHEL web console
Chapter 1. Getting started using the RHEL web console Learn how to install the Red Hat Enterprise Linux 9 web console, how to add and manage remote hosts through its convenient graphical interface, and how to monitor the systems managed by the web console. 1.1. What is the RHEL web console The RHEL web console is a web-based interface designed for managing and monitoring your local system, as well as Linux servers located in your network environment. The RHEL web console enables you to perform a wide range of administration tasks, including: Managing services Managing user accounts Managing and monitoring system services Configuring network interfaces and firewall Reviewing system logs Managing virtual machines Creating diagnostic reports Setting kernel dump configuration Configuring SELinux Updating software Managing system subscriptions The RHEL web console uses the same system APIs as you would use in a terminal, and actions performed in a terminal are immediately reflected in the RHEL web console. You can monitor the logs of systems in the network environment, as well as their performance, displayed as graphs. In addition, you can change the settings directly in the web console or through the terminal. 1.2. Installing and enabling the web console To access the RHEL web console, first enable the cockpit.socket service. Red Hat Enterprise Linux 9 includes the web console installed by default in many installation variants. If this is not the case on your system, install the cockpit package before enabling the cockpit.socket service. Procedure If the web console is not installed by default on your installation variant, manually install the cockpit package: Enable and start the cockpit.socket service, which runs a web server: If the web console was not installed by default on your installation variant and you are using a custom firewall profile, add the cockpit service to firewalld to open port 9090 in the firewall: Verification To verify the installation and configuration, open the web console . 1.3. Logging in to the web console When the cockpit.socket service is running and the corresponding firewall port is open, you can log in to the web console in your browser for the first time. Prerequisites Use one of the following browsers to open the web console: Mozilla Firefox 52 and later Google Chrome 57 and later Microsoft Edge 16 and later System user account credentials The RHEL web console uses a specific pluggable authentication modules (PAM) stack at /etc/pam.d/cockpit . The default configuration allows logging in with the user name and password of any local account on the system. Port 9090 is open in your firewall. Procedure In your web browser, enter the following address to access the web console: Note This provides a web-console login on your local machine. If you want to log in to the web console of a remote system, see Section 1.5, "Connecting to the web console from a remote machine" If you use a self-signed certificate, the browser displays a warning. Check the certificate, and accept the security exception to proceed with the login. The console loads a certificate from the /etc/cockpit/ws-certs.d directory and uses the last file with a .cert extension in alphabetical order. To avoid having to grant security exceptions, install a certificate signed by a certificate authority (CA). In the login screen, enter your system user name and password. Click Log In . After successful authentication, the RHEL web console interface opens. 
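If the login page does not load at all, it is usually worth confirming that the web server socket is active and that the firewall still allows the cockpit service before troubleshooting further. A minimal check might look like the following sketch (output omitted):
$ systemctl is-active cockpit.socket
$ firewall-cmd --list-services | grep cockpit
$ curl -k -I https://localhost:9090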
Important To switch between limited and administrative access, click Administrative access or Limited access in the top panel of the web console page. You must provide your user password to gain administrative access. 1.4. Disabling basic authentication in the web console You can modify the behavior of an authentication scheme by modifying the cockpit.conf file. Use the none action to disable an authentication scheme and only allow authentication through GSSAPI and forms. Prerequisites You have installed the RHEL 9 web console. For instructions, see Installing and enabling the web console . You have root privileges or permissions to enter administrative commands with sudo . Procedure Open or create the cockpit.conf file in the /etc/cockpit/ directory in a text editor of your preference, for example: Add the following text: Save the file. Restart the web console for changes to take effect. 1.5. Connecting to the web console from a remote machine You can connect to your web console interface from any client operating system and also from mobile phones or tablets. Prerequisites A device with a supported internet browser, such as: Mozilla Firefox 52 and later Google Chrome 57 and later Microsoft Edge 16 and later The RHEL 9 you want to access with an installed and accessible web console. For instructions, see Installing and enabling the web console . Procedure Open your web browser. Type the remote server's address in one of the following formats: With the server's host name: For example: With the server's IP address: For example: After the login interface opens, log in with your RHEL system credentials. 1.6. Connecting to the web console from a remote machine as a root user On new installations of RHEL 9.2 or later, the RHEL web console disallows root account logins by default for security reasons. You can allow the root login in the /etc/cockpit/disallowed-users file. Prerequisites You have installed the RHEL 9 web console. For instructions, see Installing and enabling the web console . Procedure Open the disallowed-users file in the /etc/cockpit/ directory in a text editor of your preference, for example: Edit the file and remove the line for the root user: Save the changes and quit the editor. Verification Log in to the web console as a root user. For details, see Logging in to the web console . 1.7. Logging in to the web console using a one-time password If your system is part of an Identity Management (IdM) domain with enabled one-time password (OTP) configuration, you can use an OTP to log in to the RHEL web console. Important It is possible to log in using a one-time password only if your system is part of an Identity Management (IdM) domain with enabled OTP configuration. Prerequisites You have installed the RHEL 9 web console. For instructions, see Installing and enabling the web console . An Identity Management server with enabled OTP configuration. A configured hardware or software device generating OTP tokens. Procedure Open the RHEL web console in your browser: Locally: https://localhost:PORT_NUMBER Remotely with the server hostname: https://example.com:PORT_NUMBER Remotely with the server IP address: https://EXAMPLE.SERVER.IP.ADDR:PORT_NUMBER If you use a self-signed certificate, the browser issues a warning. Check the certificate and accept the security exception to proceed with the login. The console loads a certificate from the /etc/cockpit/ws-certs.d directory and uses the last file with a .cert extension in alphabetical order. 
To avoid having to grant security exceptions, install a certificate signed by a certificate authority (CA). The Login window opens. In the Login window, enter your system user name and password. Generate a one-time password on your device. Enter the one-time password into a new field that appears in the web console interface after you confirm your password. Click Log in . Successful login takes you to the Overview page of the web console interface. 1.8. Adding a banner to the login page You can set the web console to show a content of a banner file on the login screen. Prerequisites You have installed the RHEL 9 web console. For instructions, see Installing and enabling the web console . You have root privileges or permissions to enter administrative commands with sudo . Procedure Open the /etc/issue.cockpit file in a text editor of your preference: Add the content you want to display as the banner to the file, for example: You cannot include any macros in the file, but you can use line breaks and ASCII art. Save the file. Open the cockpit.conf file in the /etc/cockpit/ directory in a text editor of your preference, for example: Add the following text to the file: Save the file. Restart the web console for changes to take effect. Verification Open the web console login screen again to verify that the banner is now visible: 1.9. Configuring automatic idle lock in the web console You can enable the automatic idle lock and set the idle timeout for your system through the web console interface. Prerequisites You have installed the RHEL 9 web console. For instructions, see Installing and enabling the web console . You have root privileges or permissions to enter administrative commands with sudo . Procedure Open the cockpit.conf file in the /etc/cockpit/ directory in a text editor of your preference, for example: Add the following text to the file: Substitute <X> with a number for a time period of your choice in minutes. Save the file. Restart the web console for changes to take effect. Verification Check if the session logs you out after a set period of time. 1.10. Changing the web console listening port By default, the RHEL web console communicates through TCP port 9090. You can change the port number by overriding the default socket settings. Prerequisites You have installed the RHEL 9 web console. For instructions, see Installing and enabling the web console . You have root privileges or permissions to enter administrative commands with sudo . The firewalld service is running. Procedure Pick an unoccupied port, for example, <4488/tcp> , and instruct SELinux to allow the cockpit service to bind to that port: Note that a port can be used only by one service at a time, and thus an attempt to use an already occupied port implies the ValueError: Port already defined error message. Open the new port and close the former one in the firewall: Create an override file for the cockpit.socket service: In the following editor screen, which opens an empty override.conf file located in the /etc/systemd/system/cockpit.socket.d/ directory, change the default port for the web console from 9090 to the previously picked number by adding the following lines: Note that the first ListenStream= directive with an empty value is intentional. You can declare multiple ListenStream directives in a single socket unit and the empty value in the drop-in file resets the list and disables the default port 9090 from the original unit. 
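For reference, the completed drop-in in the /etc/cockpit/cockpit.socket.d/ editor buffer might look like the following after you save it, assuming the example port <4488> used in this procedure:
[Socket]
ListenStream=
ListenStream=<4488>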
Important Insert the code snippet between the lines starting with # Anything between here and # Lines below this . Otherwise, the system discards your changes. Save the changes by pressing Ctrl + O and Enter . Exit the editor by pressing Ctrl + X . Reload the changed configuration: Check that your configuration is working: Restart cockpit.socket : Verification Open your web browser, and access the web console on the updated port, for example: Additional resources firewall-cmd(1) , semanage(8) , systemd.unit(5) , and systemd.socket(5) man pages on your system
[ "dnf install cockpit", "systemctl enable --now cockpit.socket", "firewall-cmd --add-service=cockpit --permanent firewall-cmd --reload", "https://localhost:9090", "vi cockpit.conf", "[basic] action = none", "systemctl try-restart cockpit", "https:// <server.hostname.example.com> : <port-number>", "https://example.com:9090", "https:// <server.IP_address> : <port-number>", "https://192.0.2.2:9090", "vi /etc/cockpit/disallowed-users", "List of users which are not allowed to login to Cockpit root", "vi /etc/issue.cockpit", "This is an example banner for the RHEL web console login page.", "vi /etc/cockpit/cockpit.conf", "[Session] Banner=/etc/issue.cockpit", "systemctl try-restart cockpit", "vi /etc/cockpit/cockpit.conf", "[Session] IdleTimeout= <X>", "systemctl try-restart cockpit", "semanage port -a -t websm_port_t -p tcp <4488>", "firewall-cmd --service cockpit --permanent --add-port= <4488> /tcp firewall-cmd --service cockpit --permanent --remove-port=9090/tcp", "systemctl edit cockpit.socket", "[Socket] ListenStream= ListenStream= <4488>", "systemctl daemon-reload", "systemctl show cockpit.socket -p Listen Listen=[::]:4488 (Stream)", "systemctl restart cockpit.socket", "https://machine1.example.com:4488" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/9/html/managing_systems_using_the_rhel_9_web_console/getting-started-with-the-rhel-9-web-console_system-management-using-the-rhel-9-web-console
Chapter 1. Overview of IdM and access control in RHEL
Chapter 1. Overview of IdM and access control in RHEL Learn how you can use Identity Management (IdM) to centralize identity management, enforce security controls, and comply with best practices and security policies. Explore common customer scenarios and solutions for IdM implementation in both Linux and Windows environments. 1.1. Introduction to IdM Identity Management (IdM) provides a centralized and unified way to manage identity stores, authentication, policies, and authorization policies in a Linux-based domain. The goal of IdM in Red Hat Enterprise Linux IdM significantly reduces the administrative overhead of managing different services individually and using different tools on different machines. IdM is one of the few centralized identity, policy, and authorization software solutions that support: Advanced features of Linux operating system environments Unifying large groups of Linux machines Native integration with Active Directory IdM creates a Linux-based and Linux-controlled domain: IdM builds on existing, native Linux tools and protocols. It has its own processes and configuration, but its underlying technologies are well-established on Linux systems and trusted by Linux administrators. IdM servers and clients are Red Hat Enterprise Linux machines. IdM clients can also be other Linux and UNIX distributions if they support standard protocols. A Windows client cannot be a member of the IdM domain but users logged into Windows systems managed by Active Directory (AD) can connect to Linux clients or access services managed by IdM. This is accomplished by establishing cross forest trust between AD and IdM domains. Managing identities and policies on multiple Linux servers Without IdM: Each server is administered separately. All passwords are saved on the local machines. The IT administrator manages users on every machine, sets authentication and authorization policies separately, and maintains local passwords. However, more often the users rely on other centralized solution, for example direct integration with AD. Systems can be directly integrated with AD using several different solutions: Legacy Linux tools (not recommended to use) Solution based on Samba winbind (recommended for specific use cases) Solution based on a third-party software (usually require a license from another vendor) Solution based on SSSD (native Linux and recommended for the majority of use cases) With IdM: The IT administrator can: Maintain the identities in one central place: the IdM server Apply policies uniformly to multiples of machines at the same time Set different access levels for users by using host-based access control, delegation, and other rules Centrally manage privilege escalation rules Define how home directories are mounted Enterprise SSO In case of IdM Enterprise, single sign-on (SSO) is implemented leveraging the Kerberos protocol. This protocol is popular in the infrastructure level and enables SSO with services such as SSH, LDAP, NFS, CUPS, or DNS. Web services using different web stacks (Apache, EAP, Django, and others) can also be enabled to use Kerberos for SSO. However, practice shows that using OpenID Connect or SAML based on SSO is more convenient for web applications. To bridge the two layers, it is recommended to deploy an Identity Provider (IdP) solution that would be able to convert Kerberos authentication into a OpenID Connect ticket or SAML assertion. Red Hat SSO technology based on the Keycloak open source project is an example of such an IdP. 
Without IdM: Users log in to the system and are prompted for a password every single time they access a service or application. These passwords might be different, and the users have to remember which credential to use for which application. With IdM: After users log in to the system, they can access multiple services and applications without being repeatedly asked for their credentials. This helps to: Improve usability Reduce the security risk of passwords being written down or stored insecurely Boost user productivity Managing a mixed Linux and Windows environment Without IdM: Windows systems are managed in an AD forest, but development, production, and other teams have many Linux systems. The Linux systems are excluded from the AD environment. With IdM: The IT administrator can: Manage the Linux systems using native Linux tools Integrate the Linux systems into the environments centrally managed by Active Directory, therefore preserving a centralized user store. Easily deploy new Linux systems at scale or as needed. Quickly react to business needs and make decisions related to management of the Linux infrastructure without dependency on other teams avoiding delays. Contrasting IdM with a standard LDAP directory A standard LDAP directory, such as Red Hat Directory Server, is a general-purpose directory: it can be customized to fit a broad range of use cases. Schema: a flexible schema that can be customized for a vast array of entries, such as users, machines, network entities, physical equipment, or buildings. Typically used as: a back-end directory to store data for other applications, such as business applications that provide services on the Internet. IdM has a specific purpose: managing internal, inside-the-enterprise identities as well as authentication and authorization policies that relate to these identities. Schema: a specific schema that defines a particular set of entries relevant to its purpose, such as entries for user or machine identities. Typically used as: the identity and authentication server to manage identities within the boundaries of an enterprise or a project. The underlying directory server technology is the same for both Red Hat Directory Server and IdM. However, IdM is optimized to manage identities inside the enterprise. This limits its general extensibility, but also brings certain benefits: simpler configuration, better automation of resource management, and increased efficiency in managing enterprise identities. Additional resources Identity Management or Red Hat Directory Server - Which One Should I Use? on the Red Hat Enterprise Linux Blog Standard protocols (Red Hat Knowledgebase) 1.2. Common IdM customer scenarios and their solutions Explore examples of common identity management and access control use cases both in Linux and Windows environments, and their solutions. Scenario 1 Situation You are a Windows administrator in your company. Apart from Windows systems, you also have several Linux systems to administer. As you cannot delegate control of any part of your environment to a Linux administrator, you must handle all security controls in Active Directory (AD). Solution Integrate your Linux hosts to AD directly . If you want sudo rules to be defined centrally in an LDAP server, you must implement a schema extension in the AD domain controller (DC). If you do not have permissions to implement this extension, consider installing Identity Management (IdM) - see Scenario 3 below. 
As IdM already contains the schema extension, you can manage sudo rules directly in IdM . Further advice if you are expecting to need more Linux skills in the future Connect with the Linux community to see how others manage identities: users, hosts, and services. Research best practices. Make yourself more familiar with Linux: Use the RHEL web console when at all possible. Use easy commands on the command-line whenever possible. Attend a Red Hat System Administration course. Scenario 2 Situation You are a Linux administrator in your company. Your Linux users require different levels of access to the company resources. You need tight, centralized access control of your Linux machines. Solution Install IdM and migrate your users to it. Further advice if you are expecting your company to scale up in the future After installing IdM, configure host-based access control and sudo rules . These are necessary to maintain security best practices of limited access and least privilege. To meet your security targets, develop a cohesive identity and access management (IAM) strategy that uses protocols to secure both infrastructure and application layers. Scenario 3 Situation You are a Linux administrator in your company and you must integrate your Linux systems with the company Windows servers. You want to remain the sole maintainer of access control to your Linux systems. Different users require different levels of access to the Linux systems but they all reside in AD. Solution As AD controls are not robust enough, you must configure access control to the Linux systems on the Linux side. Install IdM and establish an IdM-AD trust . Further advice to enhance the security of your environment After installing IdM, configure host-based access control and sudo rules . These are necessary to maintain security best practices of limited access and least privilege. To meet your security targets, develop a cohesive Identity and Access Management (IAM) strategy that uses protocols to secure both infrastructure and application layers. Scenario 4 Situation As a security administrator, you must manage identities and access across all of your environments, including all of your Red Hat products. You must manage all of your identities in one place, and maintain access controls across all of your platforms, clouds and products. Solution Integrate IdM, Red Hat Single Sign-On , Red Hat Satellite , Red Hat Ansible Automation Platform and other Red Hat products. Scenario 5 Situation As a security and system administrator in a Department of Defense (DoD) or Intelligence Community (IC) environment, you are required to use smart card or RSA authentication. You are required to use PIV certificates or RSA tokens. Solution Configure certificate mapping in IdM . Ensure that GSSAPI delegation is enabled if an IdM-AD trust is present. Configure the use of radius configuration in IdM for RSA tokens. Configure IdM servers and IdM clients for smart card authentication . Additional resources Use Ansible to automate your IdM tasks to reduce client configuration time and complexity and to reduce mistakes. 1.3. Introduction to IdM servers and clients The Identity Management (IdM) domain includes the following types of systems: IdM clients IdM clients are Red Hat Enterprise Linux systems enrolled with the servers and configured to use the IdM services on these servers. Clients interact with the IdM servers to access services provided by them. 
For example, clients use the Kerberos protocol to perform authentication and acquire tickets for enterprise single sign-on (SSO), use LDAP to get identity and policy information, and use DNS to detect where the servers and services are located and how to connect to them. IdM servers IdM servers are Red Hat Enterprise Linux systems that respond to identity, authentication, and authorization requests from IdM clients within an IdM domain. IdM servers are the central repositories for identity and policy information. They can also host any of the optional services used by domain members: Certificate authority (CA): This service is present in most IdM deployments. Key Recovery Authority (KRA) DNS Active Directory (AD) trust controller Active Directory (AD) trust agent IdM servers are also embedded IdM clients. As clients enrolled with themselves, the servers provide the same functionality as other clients. To provide services for large numbers of clients, as well as for redundancy and availability, IdM allows deployment on multiple IdM servers in a single domain. It is possible to deploy up to 60 servers. This is the maximum number of IdM servers, also called replicas, that is currently supported in the IdM domain. When creating a replica, IdM clones the configuration of the existing server. A replica shares with the initial server its core configuration, including internal information about users, systems, certificates, and configured policies. NOTE A replica and the server it was created from are functionally identical, except for the CA renewal and CRL publisher roles. Therefore, the term server and replica are used interchangeably in RHEL IdM documentation, depending on the context. However, different IdM servers can provide different services for the client, if so configured. Core components like Kerberos and LDAP are available on every server. Other services like CA, DNS, Trust Controller or Vault are optional. This means that different IdM servers can have distinct roles in the deployment. If your IdM topology contains an integrated CA, one server has the role of the Certificate revocation list (CRL) publisher server and one server has the role of the CA renewal server . By default, the first CA server installed fulfills these two roles, but you can assign these roles to separate servers. Warning The CA renewal server is critical for your IdM deployment because it is the only system in the domain responsible for tracking CA subsystem certificates and keys . For details about how to recover from a disaster affecting your IdM deployment, see Performing disaster recovery with Identity Management . NOTE All IdM servers (for clients, see Supported versions of RHEL for installing IdM clients ) must be running on the same major and minor version of RHEL. Do not spend more than several days applying z-stream updates or upgrading the IdM servers in your topology. For details about how to apply Z-stream fixes and upgrade your servers, see Updating IdM packages . For details about how to migrate to IdM on RHEL 9, see Migrating your IdM environment from RHEL 8 servers to RHEL 9 servers . 1.4. Supported versions of RHEL for installing IdM clients An Identity Management deployment in which IdM servers are running on the latest minor version of Red Hat Enterprise Linux 9 supports clients that are running on the latest minor versions of: RHEL 7 RHEL 8 RHEL 9 Note While other client systems, for example Ubuntu, can work with IdM 9 servers, Red Hat does not provide support for these clients. 1.5. 
IdM and access control in RHEL: Central vs. local In Red Hat Enterprise Linux, you can manage identities and access control policies using centralized tools for a whole domain of systems, or using local tools for a single system. Managing identities and policies on multiple Red Hat Enterprise Linux servers With Identity Management IdM, the IT administrator can: Maintain the identities and grouping mechanisms in one central place: the IdM server Centrally manage different types of credentials such as passwords, PKI certificates, OTP tokens, or SSH keys Apply policies uniformly to multiples of machines at the same time Manage POSIX and other attributes for external Active Directory users Set different access levels for users by using host-based access control, delegation, and other rules Centrally manage privilege escalation rules (sudo) and mandatory access control (SELinux user mapping) Maintain central PKI infrastructure and secrets store Define how home directories are mounted Without IdM: Each server is administered separately All passwords are saved on the local machines The IT administrator manages users on every machine, sets authentication and authorization policies separately, and maintains local passwords 1.6. IdM terminology Active Directory forest An Active Directory (AD) forest is a set of one or more domain trees which share a common global catalog, directory schema, logical structure, and directory configuration. The forest represents the security boundary within which users, computers, groups, and other objects are accessible. For more information, see the Microsoft document on Forests . Active Directory global catalog The global catalog is a feature of Active Directory (AD) that allows a domain controller to provide information about any object in the forest, regardless of whether the object is a member of the domain controller's domain. Domain controllers with the global catalog feature enabled are referred to as global catalog servers. The global catalog provides a searchable catalog of all objects in every domain in a multi-domain Active Directory Domain Services (AD DS). Active Directory security identifier A security identifier (SID) is a unique ID number assigned to an object in Active Directory, such as a user, group, or host. It is the functional equivalent of UIDs and GIDs in Linux. Ansible play Ansible plays are the building blocks of Ansible playbooks . The goal of a play is to map a group of hosts to some well-defined roles, represented by Ansible tasks. Ansible playbook An Ansible playbook is a file that contains one or more Ansible plays. For more information, see the official Ansible documentation about playbooks . Ansible task Ansible tasks are units of action in Ansible. An Ansible play can contain multiple tasks. The goal of each task is to execute a module, with very specific arguments. An Ansible task is a set of instructions to achieve a state defined, in its broad terms, by a specific Ansible role or module, and fine-tuned by the variables of that role or module. For more information, see the official Ansible tasks documentation . Apache web server The Apache HTTP Server, colloquially called Apache, is a free and open source cross-platform web server application, released under the terms of Apache License 2.0. Apache played a key role in the initial growth of the World Wide Web, and is currently the leading HTTP server. Its process name is httpd , which is short for HTTP daemon . 
Red Hat Enterprise Linux Identity Management (IdM) uses the Apache Web Server to display the IdM Web UI, and to coordinate communication between components, such as the Directory Server and the Certificate Authority. Certificate A certificate is an electronic document used to identify an individual, a server, a company, or other entity and to associate that identity with a public key. Such as a driver's license or passport, a certificate provides generally recognized proof of a person's identity. Public-key cryptography uses certificates to address the problem of impersonation. Certificate Authorities (CAs) in IdM An entity that issues digital certificates. In Red Hat Identity Management, the primary CA is ipa , the IdM CA. The ipa CA certificate is one of the following types: Self-signed. In this case, the ipa CA is the root CA. Externally signed. In this case, the ipa CA is subordinated to the external CA. In IdM, you can also create multiple sub-CAs . Sub-CAs are IdM CAs whose certificates are one of the following types: Signed by the ipa CA. Signed by any of the intermediate CAs between itself and ipa CA. The certificate of a sub-CA cannot be self-signed. See also Planning your CA services . Cross-forest trust A trust establishes an access relationship between two Kerberos realms, allowing users and services in one domain to access resources in another domain. With a cross-forest trust between an Active Directory (AD) forest root domain and an IdM domain, users from the AD forest domains can interact with Linux machines and services from the IdM domain. From the perspective of AD, Identity Management represents a separate AD forest with a single AD domain. For more information, see How the trust between IdM and AD works . Directory Server A Directory Server centralizes user identity and application information. It provides an operating system-independent, network-based registry for storing application settings, user profiles, group data, policies, and access control information. Each resource on the network is considered an object by the Directory Server. Information about a particular resource is stored as a collection of attributes associated with that resource or object. Red Hat Directory Server conforms to LDAP standards. DNS PTR records DNS pointer (PTR) records resolve an IP address of a host to a domain or host name. PTR records are the opposite of DNS A and AAAA records, which resolve host names to IP addresses. DNS PTR records enable reverse DNS lookups. PTR records are stored on the DNS server. DNS SRV records A DNS service (SRV) record defines the hostname, port number, transport protocol, priority and weight of a service available in a domain. You can use SRV records to locate IdM servers and replicas. Domain Controller (DC) A domain controller (DC) is a host that responds to security authentication requests within a domain and controls access to resources in that domain. IdM servers work as DCs for the IdM domain. A DC authenticates users, stores user account information and enforces security policy for a domain. When a user logs into a domain, the DC authenticates and validates their credentials and either allows or denies access. Fully qualified domain name A fully qualified domain name (FQDN) is a domain name that specifies the exact location of a host within the hierarchy of the Domain Name System (DNS). A device with the hostname myhost in the parent domain example.com has the FQDN myhost.example.com . 
The FQDN uniquely distinguishes the device from any other hosts called myhost in other domains. If you are installing an IdM client on host machine1 using DNS autodiscovery and your DNS records are correctly configured, the FQDN of machine1 is all you need. For more information, see Host name and DNS requirements for IdM . GSSAPI The Generic Security Service Application Program Interface (GSSAPI, or GSS-API) allows developers to abstract how their applications protect data that is sent to peer applications. Security-service vendors can provide GSSAPI implementations of common procedure calls as libraries with their security software. These libraries present a GSSAPI-compatible interface to application writers who can write their application to use only the vendor-independent GSSAPI. With this flexibility, developers do not have to tailor their security implementations to any particular platform, security mechanism, type of protection, or transport protocol. Kerberos is the dominant GSSAPI mechanism implementation, which allows Red Hat Enterprise Linux and Microsoft Windows Active Directory Kerberos implementations to be API compatible. Hidden replica A hidden replica is an IdM replica that has all services running and available, but its server roles are disabled, and clients cannot discover the replica because it has no SRV records in DNS. Hidden replicas are primarily designed for services such as backups, bulk importing and exporting, or actions that require shutting down IdM services. Since no clients use a hidden replica, administrators can temporarily shut down the services on this host without affecting any clients. For more information, see The hidden replica mode . HTTP server See Web server . ID mapping SSSD can use the SID of an AD user to algorithmically generate POSIX IDs in a process called ID mapping . ID mapping creates a map between SIDs in AD and IDs on Linux. When SSSD detects a new AD domain, it assigns a range of available IDs to the new domain. Therefore, each AD domain has the same ID range on every SSSD client machine. When an AD user logs in to an SSSD client machine for the first time, SSSD creates an entry for the user in the SSSD cache, including a UID based on the user's SID and the ID range for that domain. Because the IDs for an AD user are generated in a consistent way from the same SID, the user has the same UID and GID when logging in to any Red Hat Enterprise Linux system. ID ranges An ID range is a range of ID numbers assigned to the IdM topology or a specific replica. You can use ID ranges to specify the valid range of UIDs and GIDs for new users, hosts and groups. ID ranges are used to avoid ID number conflicts. There are two distinct types of ID ranges in IdM: IdM ID range Use this ID range to define the UIDs and GIDs for users and groups in the whole IdM topology. Installing the first IdM server creates the IdM ID range. You cannot modify the IdM ID range after creating it. However, you can create an additional IdM ID range, for example when the original one nears depletion. Distributed Numeric Assignment (DNA) ID range Use this ID range to define the UIDs and GIDs a replica uses when creating new users. Adding a new user or host entry to an IdM replica for the first time assigns a DNA ID range to that replica. An administrator can modify the DNA ID range, but the new definition must fit within an existing IdM ID range. Note that the IdM range and the DNA range match, but they are not interconnected. 
If you change one range, ensure you change the other to match. For more information, see ID ranges . ID views ID views enable you to specify new values for POSIX user or group attributes, and to define on which client host or hosts the new values will apply. For example, you can use ID views to: Define different attribute values for different environments. Replace a previously generated attribute value with a different value. In an IdM-AD trust setup, the Default Trust View is an ID view applied to AD users and groups. Using the Default Trust View , you can define custom POSIX attributes for AD users and groups, therefore overriding the values defined in AD. For more information, see Using an ID view to override a user attribute value on an IdM client . IdM CA server An IdM server on which the IdM certificate authority (CA) service is installed and running. Alternative names: CA server IdM deployment A term that refers to the entirety of your IdM installation. You can describe your IdM deployment by answering the following questions: Is your IdM deployment a testing deployment or production deployment? How many IdM servers do you have? Does your IdM deployment contain an integrated CA ? If it does, is the integrated CA self-signed or externally signed? If it does, on which servers is the CA role available? On which servers is the KRA role available? Does your IdM deployment contain an integrated DNS ? If it does, on which servers is the DNS role available? Is your IdM deployment in a trust agreement with an AD forest ? If it is, on which servers is the AD trust controller or AD trust agent role available? IdM server and replicas To install the first server in an IdM deployment, you must use the ipa-server-install command. Administrators can then use the ipa-replica-install command to install replicas in addition to the first server that was installed. By default, installing a replica creates a replication agreement with the IdM server from which it was created, enabling receiving and sending updates to the rest of IdM. There is no functional difference between the first server that was installed and a replica. Both are fully functional read/write IdM servers . Deprecated names: master server IdM CA renewal server If your IdM topology contains an integrated certificate authority (CA), one server has the unique role of the CA renewal server . This server maintains and renews IdM system certificates. By default, the first CA server you install fulfills this role, but you can configure any CA server to be the CA renewal server. In a deployment without integrated CA, there is no CA renewal server. Deprecated names: master CA IdM CRL publisher server If your IdM topology contains an integrated certificate authority (CA), one server has the unique role of the Certificate revocation list (CRL) publisher server . This server is responsible for maintaining the CRL. By default, the server that fulfills the CA renewal server role also fulfills this role, but you can configure any CA server to be the CRL publisher server. In a deployment without integrated CA, there is no CRL publisher server. IdM topology A term that refers to the structure of your IdM solution , especially the replication agreements between and within individual data centers and clusters. 
Kerberos authentication indicators Authentication indicators are attached to Kerberos tickets and represent the initial authentication method used to acquire a ticket: otp for two-factor authentication (password + One-Time Password) radius for Remote Authentication Dial-In User Service (RADIUS) authentication (commonly for 802.1x authentication) pkinit for Public Key Cryptography for Initial Authentication in Kerberos (PKINIT), smart card, or certificate authentication hardened for passwords hardened against brute-force attempts For more information, see Kerberos authentication indicators . Kerberos keytab While a password is the default authentication method for a user, keytabs are the default authentication method for hosts and services. A Kerberos keytab is a file that contains a list of Kerberos principals and their associated encryption keys, so a service can retrieve its own Kerberos key and verify a user's identity. For example, every IdM client has an /etc/krb5.keytab file that stores information about the host principal, which represents the client machine in the Kerberos realm. Kerberos principal Unique Kerberos principals identify each user, service, and host in a Kerberos realm: Entity Naming convention Example Users identifier@REALM [email protected] Services service/fully-qualified-hostname@REALM http/[email protected] Hosts host/fully-qualified-hostname@REALM host/[email protected] Kerberos protocol Kerberos is a network authentication protocol that provides strong authentication for client and server applications by using secret-key cryptography. IdM and Active Directory use Kerberos for authenticating users, hosts and services. Kerberos realm A Kerberos realm encompasses all the principals managed by a Kerberos Key Distribution Center (KDC). In an IdM deployment, the Kerberos realm includes all IdM users, hosts, and services. Kerberos ticket policies The Kerberos Key Distribution Center (KDC) enforces ticket access control through connection policies, and manages the duration of Kerberos tickets through ticket lifecycle policies. For example, the default global ticket lifetime is one day, and the default global maximum renewal age is one week. For more information, see IdM Kerberos ticket policy types . Key Distribution Center (KDC) The Kerberos Key Distribution Center (KDC) is a service that acts as the central, trusted authority that manages Kerberos credential information. The KDC issues Kerberos tickets and ensures the authenticity of data originating from entities within the IdM network. For more information, see The role of the IdM KDC . LDAP The Lightweight Directory Access Protocol (LDAP) is an open, vendor-neutral, application protocol for accessing and maintaining distributed directory information services over a network. Part of this specification is a directory information tree (DIT), which represents data in a hierarchical tree-like structure consisting of the Distinguished Names (DNs) of directory service entries. LDAP is a "lightweight" version of the Directory Access Protocol (DAP) described by the ISO X.500 standard for directory services in a network. Lightweight sub-CA In IdM, a lightweight sub-CA is a certificate authority (CA) whose certificate is signed by an IdM root CA or one of the CAs that are subordinate to it. A lightweight sub-CA issues certificates only for a specific purpose, for example to secure a VPN or HTTP connection. For more information, see Restricting an application to trust only a subset of certificates . 
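As a practical illustration of the Kerberos keytab and Kerberos principal entries above, you can list the principals stored in a host keytab with klist. This is a sketch that uses the default keytab path mentioned above; the output contains entries in the host/fully-qualified-hostname@REALM form:
$ klist -kt /etc/krb5.keytab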
Password policy A password policy is a set of conditions that the passwords of a particular IdM user group must meet. The conditions can include the following parameters: The length of the password The number of character classes used The maximum lifetime of a password. For more information, see What is a password policy . POSIX attributes POSIX attributes are user attributes for maintaining compatibility between operating systems. In a Red Hat Enterprise Linux Identity Management environment, POSIX attributes for users include: cn , the user's name uid , the account name (login) uidNumber , a user number (UID) gidNumber , the primary group number (GID) homeDirectory , the user's home directory In a Red Hat Enterprise Linux Identity Management environment, POSIX attributes for groups include: cn , the group's name gidNumber , the group number (GID) These attributes identify users and groups as separate entities. Replication agreement A replication agreement is an agreement between two IdM servers in the same IdM deployment. The replication agreement ensures that the data and configuration is continuously replicated between the two servers. IdM uses two types of replication agreements: domain replication agreements, which replicate identity information, and certificate replication agreements, which replicate certificate information. For more information, see: Replication agreements Determining the appropriate number of replicas Connecting the replicas in a topology Replica topology examples Smart card A smart card is a removable device or card used to control access to a resource. They can be plastic credit card-sized cards with an embedded integrated circuit (IC) chip, small USB devices such as a Yubikey, or other similar devices. Smart cards can provide authentication by allowing users to connect a smart card to a host computer, and software on that host computer interacts with key material stored on the smart card to authenticate the user. SSSD The System Security Services Daemon (SSSD) is a system service that manages user authentication and user authorization on a RHEL host. SSSD optionally keeps a cache of user identities and credentials retrieved from remote providers for offline authentication. For more information, see Understanding SSSD and its benefits . SSSD backend An SSSD backend, often also called a data provider, is an SSSD child process that manages and creates the SSSD cache. This process communicates with an LDAP server, performs different lookup queries and stores the results in the cache. It also performs online authentication against LDAP or Kerberos and applies access and password policy to the user that is logging in. Ticket-granting ticket (TGT) After authenticating to a Kerberos Key Distribution Center (KDC), a user receives a ticket-granting ticket (TGT), which is a temporary set of credentials that can be used to request access tickets to other services, such as websites and email. Using a TGT to request further access provides the user with a Single Sign-On experience, as the user only needs to authenticate once to access multiple services. TGTs are renewable, and Kerberos ticket policies determine ticket renewal limits and access control. For more information, see Managing Kerberos ticket policies . Web server A web server is computer software and underlying hardware that accepts requests for web content, such as pages, images, or applications. 
A user agent, such as a web browser, requests a specific resource using HTTP, the network protocol used to distribute web content, or its secure variant HTTPS. The web server responds with the content of that resource or an error message. The web server can also accept and store resources sent from the user agent. Red Hat Enterprise Linux Identity Management (IdM) uses the Apache Web Server to display the IdM Web UI, and to coordinate communication between components, such as the Directory Server and the Certificate Authority (CA). See Apache web server . Additional Glossaries If you are unable to find an Identity Management term in this glossary, see the Directory Server and Certificate System glossaries: Directory Server 11 Glossary Certificate System 9 Glossary
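As a practical complement to the POSIX attributes and SSSD entries above, the following commands show how the uidNumber, gidNumber, and homeDirectory attributes surface on an enrolled client. This is a minimal sketch; idm_user is a placeholder account resolved through SSSD:
# UID, primary GID, and group memberships
id idm_user
# passwd-style record: login:x:UID:GID:GECOS:home:shell
getent passwd idm_user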
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/9/html/planning_identity_management/overview-of-identity-management-and-access-control-planning-identity-management
Autoscale APIs
Autoscale APIs OpenShift Container Platform 4.16 Reference guide for autoscale APIs Red Hat OpenShift Documentation Team
null
https://docs.redhat.com/en/documentation/openshift_container_platform/4.16/html/autoscale_apis/index
Chapter 3. Scaling storage capacity of AWS OpenShift Data Foundation cluster
Chapter 3. Scaling storage capacity of AWS OpenShift Data Foundation cluster To scale the storage capacity of your configured Red Hat OpenShift Data Foundation worker nodes on an AWS cluster, you can increase the capacity by adding three disks at a time. Three disks are needed since OpenShift Data Foundation uses a replica count of 3 to maintain high availability. As a result, the amount of storage consumed is three times the usable space. Note Usable space might vary when encryption is enabled or replica 2 pools are being used. 3.1. Scaling up storage capacity on a cluster To increase the storage capacity in a dynamically created storage cluster on a user-provisioned infrastructure, you can add storage capacity and performance to your configured Red Hat OpenShift Data Foundation worker nodes. Prerequisites You have administrative privilege to the OpenShift Container Platform Console. You have a running OpenShift Data Foundation Storage Cluster. The disk should be of the same size and type as used during initial deployment. Procedure Log in to the OpenShift Web Console. Click Operators Installed Operators . Click OpenShift Data Foundation Operator. Click the Storage Systems tab. Click the Action Menu (...) on the far right of the storage system name to extend the options menu. Select Add Capacity from the options menu. Select the Storage Class . Choose the storage class that you wish to use to provision new storage devices. Click Add . To check the status, navigate to Storage Data Foundation and verify that the Storage System in the Status card has a green tick. Verification steps Verify the Raw Capacity card. In the OpenShift Web Console, click Storage Data Foundation . In the Status card of the Overview tab, click Storage System and then click the storage system link from the pop-up that appears. In the Block and File tab, check the Raw Capacity card. Note that the capacity increases based on your selections. Note The raw capacity does not take replication into account and shows the full capacity. Verify that the new OSDs and their corresponding new Persistent Volume Claims (PVCs) are created. To view the state of the newly created OSDs: Click Workloads Pods from the OpenShift Web Console. Select openshift-storage from the Project drop-down list. Note If the Show default projects option is disabled, use the toggle button to list all the default projects. To view the state of the PVCs: Click Storage Persistent Volume Claims from the OpenShift Web Console. Select openshift-storage from the Project drop-down list. Note If the Show default projects option is disabled, use the toggle button to list all the default projects. Optional: If cluster-wide encryption is enabled on the cluster, verify that the new OSD devices are encrypted. Identify the nodes where the new OSD pods are running. <OSD-pod-name> Is the name of the OSD pod. For example: Example output: For each of the nodes identified in the previous step, do the following: Create a debug pod and open a chroot environment for the selected hosts. <node-name> Is the name of the node. Check for the crypt keyword beside the ocs-deviceset names. Important Cluster reduction is supported only with the Red Hat Support Team's assistance. 3.2. Scaling out storage capacity on an AWS cluster OpenShift Data Foundation is highly scalable. It can be scaled out by adding new nodes with required storage and enough hardware resources in terms of CPU and RAM. 
In practice, there is no limit on the number of nodes that can be added, but from a support perspective, 2000 nodes is the limit for OpenShift Data Foundation. Scaling out storage capacity can be broken down into two steps: Adding a new node Scaling up the storage capacity Note OpenShift Data Foundation does not support heterogeneous OSD/Disk sizes. 3.2.1. Adding a node You can add nodes to increase the storage capacity when existing worker nodes are already running at their maximum supported OSDs or there are not enough resources to add new OSDs on the existing nodes. It is always recommended to add nodes in multiples of three, each of them in different failure domains. While we recommend adding nodes in multiples of three, you still get the flexibility of adding one node at a time in the flexible scaling deployment. Refer to the Knowledgebase article Verify if flexible scaling is enabled . Note OpenShift Data Foundation does not support heterogeneous disk sizes and types. The new nodes to be added should have disks of the same type and size as those used during OpenShift Data Foundation deployment. 3.2.1.1. Adding a node to an installer-provisioned infrastructure Prerequisites You have administrative privilege to the OpenShift Container Platform Console. You have a running OpenShift Data Foundation Storage Cluster. Procedure Navigate to Compute Machine Sets . On the machine set where you want to add nodes, select Edit Machine Count . Add the number of nodes, and click Save . Click Compute Nodes and confirm that the new node is in Ready state. Apply the OpenShift Data Foundation label to the new node. For the new node, click Action menu (...) Edit Labels . Add cluster.ocs.openshift.io/openshift-storage , and click Save . Note It is recommended to add 3 nodes, one each in different zones. You must add 3 nodes and perform this procedure for all of them. In the case of a bare metal installer-provisioned infrastructure deployment, you must expand the cluster first. For instructions, see Expanding the cluster . Verification steps Execute the following command in the terminal and verify that the new node is present in the output: On the OpenShift web console, click Workloads Pods , and confirm that at least the following pods on the new node are in Running state: csi-cephfsplugin-* csi-rbdplugin-* 3.2.1.2. Adding a node to a user-provisioned infrastructure Prerequisites You have administrative privilege to the OpenShift Container Platform Console. You have a running OpenShift Data Foundation Storage Cluster. Procedure Depending on the type of infrastructure, perform the following steps: Get a new machine with the required infrastructure. See Platform requirements . Create a new OpenShift Container Platform worker node using the new machine. Check for certificate signing requests (CSRs) that are in Pending state. Approve all the required CSRs for the new node. <Certificate_Name> Is the name of the CSR. Click Compute Nodes , and confirm that the new node is in Ready state. Apply the OpenShift Data Foundation label to the new node using any one of the following: From User interface For the new node, click Action Menu (...) Edit Labels . Add cluster.ocs.openshift.io/openshift-storage , and click Save . From Command line interface Apply the OpenShift Data Foundation label to the new node. <new_node_name> Is the name of the new node. 
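A note on the CSR approval step above: when several certificate signing requests are pending for a newly added node, they can be approved in one pass. This is a convenience sketch, not part of the documented procedure; review the list first with oc get csr:
# Approve every pending CSR in one pass
oc get csr -o name | xargs oc adm certificate approve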
Verification steps Execute the following command in the terminal and verify that the new node is present in the output: On the OpenShift web console, click Workloads Pods , and confirm that at least the following pods on the new node are in Running state: csi-cephfsplugin-* csi-rbdplugin-* 3.2.2. Scaling up storage capacity To scale up storage capacity, see Scaling up a cluster .
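As a quick cross-check after either scaling operation, the new OSDs and their Persistent Volume Claims can also be listed from the command line. This sketch assumes the default openshift-storage namespace and the standard rook-ceph-osd pod label and ocs-deviceset PVC naming used by OpenShift Data Foundation:
# OSD pods should show the expected count in Running state
oc get pods -n openshift-storage -l app=rook-ceph-osd
# Each new OSD is backed by an ocs-deviceset PVC
oc get pvc -n openshift-storage | grep ocs-deviceset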
[ "oc get -n openshift-storage -o=custom-columns=NODE:.spec.nodeName pod/ <OSD-pod-name>", "oc get -n openshift-storage -o=custom-columns=NODE:.spec.nodeName pod/rook-ceph-osd-0-544db49d7f-qrgqm", "NODE compute-1", "oc debug node/ <node-name>", "chroot /host", "lsblk", "oc get nodes --show-labels | grep cluster.ocs.openshift.io/openshift-storage= |cut -d' ' -f1", "oc get csr", "oc adm certificate approve <Certificate_Name>", "oc label node <new_node_name> cluster.ocs.openshift.io/openshift-storage=\"\"", "oc get nodes --show-labels | grep cluster.ocs.openshift.io/openshift-storage= |cut -d' ' -f1" ]
https://docs.redhat.com/en/documentation/red_hat_openshift_data_foundation/4.14/html/scaling_storage/scaling_storage_capacity_of_aws_openshift_data_foundation_cluster
Chapter 14. Networking
Chapter 14. Networking Support for the libnftnl and nftables packages The nftables and libnftnl packages, previously available as a Technology Preview, are now supported. The nftables packages provide a packet-filtering tool, with numerous improvements in convenience, features, and performance over previous packet-filtering tools. It is the designated successor to the iptables , ip6tables , arptables , and ebtables utilities. The libnftnl packages provide a library for low-level interaction with the nftables Netlink API over the libmnl library. (BZ#1332585) ECMP fib_multipath_hash_policy support added to the kernel for IPv4 packets This update adds support for Equal-cost multi-path routing (ECMP) hash policy choice using fib_multipath_hash_policy , a new sysctl setting that controls which hash policy to use for multipath routes. When fib_multipath_hash_policy is set to 1 , the kernel performs L4 hash , which is a multipath hash for IPv4 packets according to a 5-tuple (source IP, source port, destination IP, destination port, IP protocol type) set of values. When fib_multipath_hash_policy is set to 0 (default), only L3 hash is used (the source and destination IP addresses). Note that if you enable fib_multipath_hash_policy , the Internet Control Message Protocol (ICMP) error packets are not hashed according to the inner packet headers. This is a problem for anycast services as the ICMP packet can be delivered to the incorrect host. (BZ#1511351) Support for hardware time stamping on VLAN interfaces This update adds hardware time stamping on VLAN interfaces (driver dp83640 is excluded). This allows applications, such as linuxptp , to enable hardware time stamping. (BZ#1520356) Support for specifying speed and duplex 802-3-ethernet properties when 802-3-ethernet.auto-negotiation is enabled Previously, when 802-3-ethernet.auto-negotiation was enabled on an Ethernet connection, all the speed and duplex modes supported by the Network Interface Card (NIC) were advertised. The only option to enforce a specific speed and duplex mode was to disable 802-3-ethernet.auto-negotiation and set the 802-3-ethernet.speed and 802-3-ethernet.duplex properties. This was not correct because the 1000BASE-T and 10GBASE-T Ethernet standards require auto-negotiation to be always enabled. With this update, you can enforce a specific speed and duplex when auto-negotiation is enabled. (BZ#1487477) Support for changing the DUID for IPv6 DHCP connections With this update, users can configure the DHCP Unique Identifier (DUID) in NetworkManager to get an IPv6 address from a Dynamic Host Configuration Protocol (DHCP) server. As a result, users can now specify the DUID for DHCPv6 connections using the new property, ipv6.dhcp-duid . For more details on values set for ipv6.dhcp-duid , see the nm-settings(5) man page. (BZ#1414093) ipset rebased to Linux kernel version 4.17 The ipset kernel component has been upgraded to upstream Linux kernel version 4.17, which provides a number of enhancements and bug fixes over the previous version. Notable changes include: The following ipset types are now supported: hash:net,net hash:net,port,net hash:ip,mark hash:mac hash:ip,mac (BZ# 1557599 ) ipset (userspace) rebased to version 6.38 The ipset (userspace) package has been upgraded to upstream version 6.38, which provides a number of bug fixes and enhancements over the previous version. 
Notable changes include: The userspace ipset is now aligned with the Red Hat Enterprise Linux (RHEL) kernel ipset implementation in terms of supported ipset types A new type of set, hash:ipmac , is now supported (BZ# 1557600 ) firewalld rebased to version 0.5.3 The firewalld service daemon has been upgraded to upstream version 0.5.3, which provides a number of bug fixes and enhancements over the previous version. Notable changes include: Added the --check-config option to verify the sanity of configuration files. Generated interfaces such as docker0 are now correctly re-added to zones after firewalld restarts. A new IP set type, hash:mac , is now supported. (BZ# 1554993 ) The ipset comment extension is now supported This update adds the ipset comment extension, which enables you to add entries with a comment. For more information, see the ipset (8) man page. (BZ# 1496859 ) radvd rebased to version 2.17 The router advertisement daemon (radvd) has been upgraded to version 2.17. The most notable change is that radvd now supports the selection of the router advertisement source address. As a result, connection tracking no longer fails when the router's address is moved between hosts or firewalls. (BZ# 1475983 ) The default version for SMB is now auto-negotiated to the highest supported versions, SMB2 or SMB3 With this update, the default version of the Server Message Block (SMB) protocol has been changed from SMB1 to be auto-negotiated to the highest supported versions, SMB2 or SMB3. Users can still choose to explicitly mount with the less secure SMB1 dialect (for old servers) by adding the vers=1.0 option on the Common Internet File System (CIFS) mount. Note that SMB2 and SMB3 do not support Unix Extensions. Users who depend on Unix Extensions need to review the mount options and ensure that vers=1.0 is used. (BZ#1471950) position in an nftables add or insert rule is replaced by handle and index With this update of the nftables packages, the position parameter in an add or insert rule has been deprecated and replaced by the handle and index arguments. This syntax is more consistent with the replace and delete commands. (BZ# 1571968 ) New features in net-snmp The net-snmp package in Red Hat Enterprise Linux 7 has been extended with the following new features: net-snmp now supports monitoring disks of the ZFS file system. net-snmp now supports monitoring disks of the ASM Cluster (AC) file system. (BZ# 1533943 , BZ#1564400) firewall-cmd --check-config now checks the validity of XML configuration files This update introduces the --check-config option for the firewall-cmd and firewall-offline-cmd commands. The new option checks a user configuration of the firewalld daemon in XML files. The verification script reports syntax errors in custom rule definitions, if any. (BZ# 1477771 ) Each IP set is saved and restored from an individual file With this update, when the ipset systemd service is used, each IP set is saved in its own file in the /etc/sysconfig/ipset.d/ directory. When the ipset service loads the ipset configuration, each set is also restored from its corresponding file. This feature provides easier maintenance and configuration of single sets. Note that using one single file containing all configured sets in /etc/sysconfig/ipset is still possible. However, if the ipset service is configured to save files on the stop action, or when the save operation is explicitly invoked, this legacy file will be removed, and the contents of all configured sets will be split into different files in /etc/sysconfig/ipset.d/ . 
(BZ# 1440741 )
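Two of the features above can be exercised with a few commands. This is an illustrative sketch, not text from the release notes; the set name and addresses are placeholders:
# Switch ECMP route selection to the L4 (5-tuple) hash policy
sysctl -w net.ipv4.fib_multipath_hash_policy=1
# Validate firewalld configuration files before reloading
firewall-cmd --check-config
# Create an IP set with the comment extension enabled and add a commented entry
ipset create allowlist hash:ip comment
ipset add allowlist 192.0.2.10 comment "web server"
ipset list allowlist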
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/7.6_release_notes/new_features_networking
Chapter 11. Multicloud Object Gateway
Chapter 11. Multicloud Object Gateway 11.1. About the Multicloud Object Gateway The Multicloud Object Gateway (MCG) is a lightweight object storage service for OpenShift, allowing users to start small and then scale as needed on-premise, in multiple clusters, and with cloud-native storage. 11.2. Accessing the Multicloud Object Gateway with your applications You can access the object service with any application targeting AWS S3 or code that uses AWS S3 Software Development Kit (SDK). Applications need to specify the Multicloud Object Gateway (MCG) endpoint, an access key, and a secret access key. You can use your terminal or the MCG CLI to retrieve this information. Prerequisites A running OpenShift Data Foundation Platform. Download the MCG command-line interface for easier management. Note Specify the appropriate architecture for enabling the repositories using the subscription manager. For IBM Power, use the following command: For IBM Z, use the following command: Alternatively, you can install the MCG package from the OpenShift Data Foundation RPMs found at Download RedHat OpenShift Data Foundation page . Note Choose the correct Product Variant according to your architecture. You can access the relevant endpoint, access key, and secret access key in two ways: Section 11.2.1, "Accessing the Multicloud Object Gateway from the terminal" Section 11.2.2, "Accessing the Multicloud Object Gateway from the MCG command-line interface" For example: Accessing the MCG bucket(s) using the virtual-hosted style If the client application tries to access https:// <bucket-name> .s3-openshift-storage.apps.mycluster-cluster.qe.rh-ocs.com <bucket-name> is the name of the MCG bucket For example, https://mcg-test-bucket.s3-openshift-storage.apps.mycluster-cluster.qe.rh-ocs.com A DNS entry is needed for mcg-test-bucket.s3-openshift-storage.apps.mycluster-cluster.qe.rh-ocs.com to point to the S3 Service. Important Ensure that you have a DNS entry in order to point the client application to the MCG bucket(s) using the virtual-hosted style. 11.2.1. Accessing the Multicloud Object Gateway from the terminal Procedure Run the describe command to view information about the Multicloud Object Gateway (MCG) endpoint, including its access key ( AWS_ACCESS_KEY_ID value) and secret access key ( AWS_SECRET_ACCESS_KEY value). The output will look similar to the following: 1 access key ( AWS_ACCESS_KEY_ID value) 2 secret access key ( AWS_SECRET_ACCESS_KEY value) 3 MCG endpoint Note The output from the oc describe noobaa command lists the internal and external DNS names that are available. When using the internal DNS, the traffic is free. The external DNS uses Load Balancing to process the traffic, and therefore has a cost per hour. 11.2.2. Accessing the Multicloud Object Gateway from the MCG command-line interface Prerequisites Download the MCG command-line interface. Note Specify the appropriate architecture for enabling the repositories using the subscription manager. For IBM Power, use the following command: For IBM Z, use the following command: Procedure Run the status command to access the endpoint, access key, and secret access key: The output will look similar to the following: 1 endpoint 2 access key 3 secret access key You now have the relevant endpoint, access key, and secret access key in order to connect to your applications. For example: If AWS S3 CLI is the application, the following command will list the buckets in OpenShift Data Foundation: s :leveloffset: +1
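As an alternative to the jq-based extraction shown in the example output that follows, the access key and secret access key can also be pulled from the noobaa-admin secret with oc and a jsonpath query. This is a sketch that assumes the default secret name and the openshift-storage namespace; the endpoint URL is a placeholder:
NOOBAA_ACCESS_KEY=$(oc get secret noobaa-admin -n openshift-storage -o jsonpath='{.data.AWS_ACCESS_KEY_ID}' | base64 -d)
NOOBAA_SECRET_KEY=$(oc get secret noobaa-admin -n openshift-storage -o jsonpath='{.data.AWS_SECRET_ACCESS_KEY}' | base64 -d)
AWS_ACCESS_KEY_ID=$NOOBAA_ACCESS_KEY AWS_SECRET_ACCESS_KEY=$NOOBAA_SECRET_KEY aws --endpoint https://s3-openshift-storage.apps.<cluster-domain> --no-verify-ssl s3 ls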
[ "subscription-manager repos --enable=rh-odf-4-for-rhel-8-x86_64-rpms yum install mcg", "subscription-manager repos --enable=rh-odf-4-for-rhel-8-ppc64le-rpms", "subscription-manager repos --enable=rh-odf-4-for-rhel-8-s390x-rpms", "oc describe noobaa -n openshift-storage", "Name: noobaa Namespace: openshift-storage Labels: <none> Annotations: <none> API Version: noobaa.io/v1alpha1 Kind: NooBaa Metadata: Creation Timestamp: 2019-07-29T16:22:06Z Generation: 1 Resource Version: 6718822 Self Link: /apis/noobaa.io/v1alpha1/namespaces/openshift-storage/noobaas/noobaa UID: 019cfb4a-b21d-11e9-9a02-06c8de012f9e Spec: Status: Accounts: Admin: Secret Ref: Name: noobaa-admin Namespace: openshift-storage Actual Image: noobaa/noobaa-core:4.0 Observed Generation: 1 Phase: Ready Readme: Welcome to NooBaa! ----------------- Welcome to NooBaa! ----------------- NooBaa Core Version: NooBaa Operator Version: Lets get started: 1. Connect to Management console: Read your mgmt console login information (email & password) from secret: \"noobaa-admin\". kubectl get secret noobaa-admin -n openshift-storage -o json | jq '.data|map_values(@base64d)' Open the management console service - take External IP/DNS or Node Port or use port forwarding: kubectl port-forward -n openshift-storage service/noobaa-mgmt 11443:443 & open https://localhost:11443 2. Test S3 client: kubectl port-forward -n openshift-storage service/s3 10443:443 & 1 NOOBAA_ACCESS_KEY=USD(kubectl get secret noobaa-admin -n openshift-storage -o json | jq -r '.data.AWS_ACCESS_KEY_ID|@base64d') 2 NOOBAA_SECRET_KEY=USD(kubectl get secret noobaa-admin -n openshift-storage -o json | jq -r '.data.AWS_SECRET_ACCESS_KEY|@base64d') alias s3='AWS_ACCESS_KEY_ID=USDNOOBAA_ACCESS_KEY AWS_SECRET_ACCESS_KEY=USDNOOBAA_SECRET_KEY aws --endpoint https://localhost:10443 --no-verify-ssl s3' s3 ls Services: Service Mgmt: External DNS: https://noobaa-mgmt-openshift-storage.apps.mycluster-cluster.qe.rh-ocs.com https://a3406079515be11eaa3b70683061451e-1194613580.us-east-2.elb.amazonaws.com:443 Internal DNS: https://noobaa-mgmt.openshift-storage.svc:443 Internal IP: https://172.30.235.12:443 Node Ports: https://10.0.142.103:31385 Pod Ports: https://10.131.0.19:8443 serviceS3: External DNS: 3 https://s3-openshift-storage.apps.mycluster-cluster.qe.rh-ocs.com https://a340f4e1315be11eaa3b70683061451e-943168195.us-east-2.elb.amazonaws.com:443 Internal DNS: https://s3.openshift-storage.svc:443 Internal IP: https://172.30.86.41:443 Node Ports: https://10.0.142.103:31011 Pod Ports: https://10.131.0.19:6443", "subscription-manager repos --enable=rh-odf-4-for-rhel-8-x86_64-rpms yum install mcg", "subscription-manager repos --enable=rh-odf-4-for-rhel-8-ppc64le-rpms", "subscription-manager repos --enable=rh-odf-4-for-rhel-8-s390x-rpms", "noobaa status -n openshift-storage", "INFO[0000] Namespace: openshift-storage INFO[0000] INFO[0000] CRD Status: INFO[0003] βœ… Exists: CustomResourceDefinition \"noobaas.noobaa.io\" INFO[0003] βœ… Exists: CustomResourceDefinition \"backingstores.noobaa.io\" INFO[0003] βœ… Exists: CustomResourceDefinition \"bucketclasses.noobaa.io\" INFO[0004] βœ… Exists: CustomResourceDefinition \"objectbucketclaims.objectbucket.io\" INFO[0004] βœ… Exists: CustomResourceDefinition \"objectbuckets.objectbucket.io\" INFO[0004] INFO[0004] Operator Status: INFO[0004] βœ… Exists: Namespace \"openshift-storage\" INFO[0004] βœ… Exists: ServiceAccount \"noobaa\" INFO[0005] βœ… Exists: Role \"ocs-operator.v0.0.271-6g45f\" INFO[0005] βœ… Exists: RoleBinding 
\"ocs-operator.v0.0.271-6g45f-noobaa-f9vpj\" INFO[0006] βœ… Exists: ClusterRole \"ocs-operator.v0.0.271-fjhgh\" INFO[0006] βœ… Exists: ClusterRoleBinding \"ocs-operator.v0.0.271-fjhgh-noobaa-pdxn5\" INFO[0006] βœ… Exists: Deployment \"noobaa-operator\" INFO[0006] INFO[0006] System Status: INFO[0007] βœ… Exists: NooBaa \"noobaa\" INFO[0007] βœ… Exists: StatefulSet \"noobaa-core\" INFO[0007] βœ… Exists: Service \"noobaa-mgmt\" INFO[0008] βœ… Exists: Service \"s3\" INFO[0008] βœ… Exists: Secret \"noobaa-server\" INFO[0008] βœ… Exists: Secret \"noobaa-operator\" INFO[0008] βœ… Exists: Secret \"noobaa-admin\" INFO[0009] βœ… Exists: StorageClass \"openshift-storage.noobaa.io\" INFO[0009] βœ… Exists: BucketClass \"noobaa-default-bucket-class\" INFO[0009] βœ… (Optional) Exists: BackingStore \"noobaa-default-backing-store\" INFO[0010] βœ… (Optional) Exists: CredentialsRequest \"noobaa-cloud-creds\" INFO[0010] βœ… (Optional) Exists: PrometheusRule \"noobaa-prometheus-rules\" INFO[0010] βœ… (Optional) Exists: ServiceMonitor \"noobaa-service-monitor\" INFO[0011] βœ… (Optional) Exists: Route \"noobaa-mgmt\" INFO[0011] βœ… (Optional) Exists: Route \"s3\" INFO[0011] βœ… Exists: PersistentVolumeClaim \"db-noobaa-core-0\" INFO[0011] βœ… System Phase is \"Ready\" INFO[0011] βœ… Exists: \"noobaa-admin\" #------------------# #- Mgmt Addresses -# #------------------# ExternalDNS : [https://noobaa-mgmt-openshift-storage.apps.mycluster-cluster.qe.rh-ocs.com https://a3406079515be11eaa3b70683061451e-1194613580.us-east-2.elb.amazonaws.com:443] ExternalIP : [] NodePorts : [https://10.0.142.103:31385] InternalDNS : [https://noobaa-mgmt.openshift-storage.svc:443] InternalIP : [https://172.30.235.12:443] PodPorts : [https://10.131.0.19:8443] #--------------------# #- Mgmt Credentials -# #--------------------# email : [email protected] password : HKLbH1rSuVU0I/souIkSiA== #----------------# #- S3 Addresses -# #----------------# 1 ExternalDNS : [https://s3-openshift-storage.apps.mycluster-cluster.qe.rh-ocs.com https://a340f4e1315be11eaa3b70683061451e-943168195.us-east-2.elb.amazonaws.com:443] ExternalIP : [] NodePorts : [https://10.0.142.103:31011] InternalDNS : [https://s3.openshift-storage.svc:443] InternalIP : [https://172.30.86.41:443] PodPorts : [https://10.131.0.19:6443] #------------------# #- S3 Credentials -# #------------------# 2 AWS_ACCESS_KEY_ID : jVmAsu9FsvRHYmfjTiHV 3 AWS_SECRET_ACCESS_KEY : E//420VNedJfATvVSmDz6FMtsSAzuBv6z180PT5c #------------------# #- Backing Stores -# #------------------# NAME TYPE TARGET-BUCKET PHASE AGE noobaa-default-backing-store aws-s3 noobaa-backing-store-15dc896d-7fe0-4bed-9349-5942211b93c9 Ready 141h35m32s #------------------# #- Bucket Classes -# #------------------# NAME PLACEMENT PHASE AGE noobaa-default-bucket-class {Tiers:[{Placement: BackingStores:[noobaa-default-backing-store]}]} Ready 141h35m33s #-----------------# #- Bucket Claims -# #-----------------# No OBC's found.", "AWS_ACCESS_KEY_ID=<AWS_ACCESS_KEY_ID> AWS_SECRET_ACCESS_KEY=<AWS_SECRET_ACCESS_KEY> aws --endpoint <ENDPOINT> --no-verify-ssl s3 ls" ]
https://docs.redhat.com/en/documentation/red_hat_openshift_data_foundation/4.13/html/deploying_and_managing_openshift_data_foundation_using_google_cloud/multicloud-object-gateway_rhodf
26.2. Authentication
26.2. Authentication The Authentication tab allows for the configuration of network authentication methods. To enable an option, click the empty checkbox beside it. To disable an option, click the checkbox beside it to clear the checkbox. Figure 26.2. Authentication The following explains what each option configures: Enable Kerberos Support - Select this option to enable Kerberos authentication. Click the Configure Kerberos button to configure: Realm - Configure the realm for the Kerberos server. The realm is the network that uses Kerberos, composed of one or more KDCs and a potentially large number of clients. KDC - Define the Key Distribution Center (KDC), which is the server that issues Kerberos tickets. Admin Servers - Specify the administration server(s) running kadmind . The krb5-libs and krb5-workstation packages must be installed for this option to work. Refer to the Reference Guide for more information on Kerberos. Enable LDAP Support - Select this option to have standard PAM-enabled applications use LDAP for authentication. Click the Configure LDAP button to specify the following: Use TLS to encrypt connections - Use Transport Layer Security to encrypt passwords sent to the LDAP server. LDAP Search Base DN - Retrieve user information by its Distinguished Name (DN). LDAP Server - Specify the IP address of the LDAP server. The openldap-clients package must be installed for this option to work. Refer to the Reference Guide for more information about LDAP. Use Shadow Passwords - Select this option to store passwords in shadow password format in the /etc/shadow file instead of /etc/passwd . Shadow passwords are enabled by default during installation and are highly recommended to increase the security of the system. The shadow-utils package must be installed for this option to work. For more information about shadow passwords, refer to the Users and Groups chapter in the Reference Guide . Enable SMB Support - This option configures PAM to use an SMB server to authenticate users. Click the Configure SMB button to specify: Workgroup - Specify the SMB workgroup to use. Domain Controllers - Specify the SMB domain controllers to use. Winbind - Select this option to configure the system to connect to a Windows Active Directory or a Windows domain controller. User information can be accessed, and server authentication options can be configured. Use MD5 Passwords - Select this option to enable MD5 passwords, which allows passwords to be up to 256 characters long instead of eight characters or fewer. It is selected by default during installation and is highly recommended for increased security.
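The same settings can usually be applied non-interactively with the authconfig command-line tool. The sketch below is illustrative only: the realm, server names, and base DN are placeholders, and the exact option names should be verified against authconfig --help on your release:
authconfig --enablekrb5 --krb5realm=EXAMPLE.COM \
  --krb5kdc=kdc.example.com:88 --krb5adminserver=kdc.example.com:749 \
  --enableldap --enableldaptls --ldapserver=ldap.example.com \
  --ldapbasedn="dc=example,dc=com" \
  --enableshadow --enablemd5 --update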
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/4/html/system_administration_guide/authentication_configuration-authentication