title | content | commands | url
---|---|---|---|
Chapter 7. Rule Audit | Chapter 7. Rule Audit Rule audit lets you audit the rules that were triggered within your activated rulebooks. The Rule Audit list view shows you every time an event came in that matched a condition within a rulebook and triggered an action. The list shows the rules within your rulebook, and each heading corresponds to a rule that has been executed. 7.1. Viewing rule audit details From the Rule Audit list view you can check the event that triggered specific actions. Procedure From the navigation panel, select Rule Audit . Select the desired rule; this brings you to the Details tab. From here you can view when it was created, when it was last fired, and the rulebook activation that it corresponds to. 7.2. Viewing rule audit events Procedure From the navigation panel, select Rule Audit . Select the desired rule; this brings you to the Details tab. To view all the events that triggered an action, select the Events tab. Select an event to view the Event log , along with the Source type and Timestamp . 7.3. Viewing rule audit actions Procedure From the navigation panel, select Rule Audit . Select the desired rule; this brings you to the Actions tab. From here you can view the actions that were executed. Some actions are linked out to automation controller where you can view the output. | null | https://docs.redhat.com/en/documentation/red_hat_ansible_automation_platform/2.4/html/event-driven_ansible_controller_user_guide/eda-rule-audit |
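For context, the rules that appear as headings in the Rule Audit list come from rulebooks such as the following minimal sketch. This is a hypothetical example, not taken from this documentation; the webhook source, condition, and job template name are illustrative assumptions.

- name: Example webhook rulebook          # hypothetical rulebook name
  hosts: all
  sources:
    - ansible.eda.webhook:                # event source that listens for incoming HTTP events
        host: 0.0.0.0
        port: 5000
  rules:
    - name: Restart service on failure    # this rule name is what appears as a heading in Rule Audit
      condition: event.payload.status == "down"
      action:
        run_job_template:                 # the triggered action shown on the Actions tab
          name: Restart web service       # assumed job template name in automation controller
          organization: Default

When an incoming event satisfies the condition, the rule fires the action, and that firing is what the Rule Audit list records.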
2.5.4. SELinux: Avoid SELinux on GFS2 | 2.5.4. SELinux: Avoid SELinux on GFS2 Security-Enhanced Linux (SELinux) is highly recommended for security reasons in most situations, but it is not supported for use with GFS2. SELinux stores information about every file system object using extended attributes, and SELinux labels on GFS2 file systems can get out of sync between cluster nodes because of how they are cached in memory. When mounting a GFS2 file system, you must use one of the context options described in the mount(8) man page to ensure that SELinux does not attempt to read the seclabel element on each file system object; SELinux then assumes that all content in the file system is labeled with the seclabel element provided in the context mount option. This also speeds up processing because it avoids an additional disk read of the extended attribute block that could contain seclabel elements. For example, on a system with SELinux in enforcing mode, you can use the following mount command to mount the GFS2 file system if the file system is going to contain Apache content. This label applies to the entire file system; it remains in memory and is not written to disk. If you are not sure whether the file system will contain Apache content, you can use the labels public_content_rw_t or public_content_t , or you can define a new label altogether and create a policy around it. Note that in a Pacemaker cluster you should always use Pacemaker to manage a GFS2 file system. You can specify the mount options when you create a GFS2 file system resource, as described in Chapter 6, Configuring a GFS2 File System in a Pacemaker Cluster . | [
"mount -t gfs2 -o context=system_u:object_r:httpd_sys_content_t:s0 /dev/mapper/xyz/mnt/gfs2",
"mount -t gfs2 -o context=system_u:object_r:httpd_sys_content_t:s0 /dev/mapper/xyz/mnt/gfs2"
]
| https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/global_file_system_2/s2-selinux-gfs2-gfs2 |
Preface | Preface These release notes list new features, features in technology preview, known issues, and issues fixed in Red Hat Decision Manager 7.13. | null | https://docs.redhat.com/en/documentation/red_hat_decision_manager/7.13/html/release_notes_for_red_hat_decision_manager_7.13/pr01 |
Chapter 4. Configuring user workload monitoring | Chapter 4. Configuring user workload monitoring 4.1. Preparing to configure the user workload monitoring stack This section explains which user-defined monitoring components can be configured, how to enable user workload monitoring, and how to prepare for configuring the user workload monitoring stack. Important Not all configuration parameters for the monitoring stack are exposed. Only the parameters and fields listed in the Config map reference for the Cluster Monitoring Operator are supported for configuration. The monitoring stack imposes additional resource requirements. Consult the computing resources recommendations in Scaling the Cluster Monitoring Operator and verify that you have sufficient resources. 4.1.1. Configurable monitoring components This table shows the monitoring components you can configure and the keys used to specify the components in the user-workload-monitoring-config config map. Table 4.1. Configurable monitoring components for user-defined projects Component user-workload-monitoring-config config map key Prometheus Operator prometheusOperator Prometheus prometheus Alertmanager alertmanager Thanos Ruler thanosRuler Warning Different configuration changes to the ConfigMap object result in different outcomes: The pods are not redeployed. Therefore, there is no service outage. The affected pods are redeployed: For single-node clusters, this results in temporary service outage. For multi-node clusters, because of high-availability, the affected pods are gradually rolled out and the monitoring stack remains available. Configuring and resizing a persistent volume always results in a service outage, regardless of high availability. Each procedure that requires a change in the config map includes its expected outcome. 4.1.2. Enabling monitoring for user-defined projects In OpenShift Container Platform, you can enable monitoring for user-defined projects in addition to the default platform monitoring. You can monitor your own projects in OpenShift Container Platform without the need for an additional monitoring solution. Using this feature centralizes monitoring for core platform components and user-defined projects. Note Versions of Prometheus Operator installed using Operator Lifecycle Manager (OLM) are not compatible with user-defined monitoring. Therefore, custom Prometheus instances installed as a Prometheus custom resource (CR) managed by the OLM Prometheus Operator are not supported in OpenShift Container Platform. 4.1.2.1. Enabling monitoring for user-defined projects Cluster administrators can enable monitoring for user-defined projects by setting the enableUserWorkload: true field in the cluster monitoring ConfigMap object. Important You must remove any custom Prometheus instances before enabling monitoring for user-defined projects. Note You must have access to the cluster as a user with the cluster-admin cluster role to enable monitoring for user-defined projects in OpenShift Container Platform. Cluster administrators can then optionally grant users permission to configure the components that are responsible for monitoring user-defined projects. Prerequisites You have access to the cluster as a user with the cluster-admin cluster role. You have installed the OpenShift CLI ( oc ). You have created the cluster-monitoring-config ConfigMap object. You have optionally created and configured the user-workload-monitoring-config ConfigMap object in the openshift-user-workload-monitoring project. 
You can add configuration options to this ConfigMap object for the components that monitor user-defined projects. Note Every time you save configuration changes to the user-workload-monitoring-config ConfigMap object, the pods in the openshift-user-workload-monitoring project are redeployed. It might sometimes take a while for these components to redeploy. Procedure Edit the cluster-monitoring-config ConfigMap object: USD oc -n openshift-monitoring edit configmap cluster-monitoring-config Add enableUserWorkload: true under data/config.yaml : apiVersion: v1 kind: ConfigMap metadata: name: cluster-monitoring-config namespace: openshift-monitoring data: config.yaml: | enableUserWorkload: true 1 1 When set to true , the enableUserWorkload parameter enables monitoring for user-defined projects in a cluster. Save the file to apply the changes. Monitoring for user-defined projects is then enabled automatically. Note If you enable monitoring for user-defined projects, the user-workload-monitoring-config ConfigMap object is created by default. Verify that the prometheus-operator , prometheus-user-workload , and thanos-ruler-user-workload pods are running in the openshift-user-workload-monitoring project. It might take a short while for the pods to start: USD oc -n openshift-user-workload-monitoring get pod Example output NAME READY STATUS RESTARTS AGE prometheus-operator-6f7b748d5b-t7nbg 2/2 Running 0 3h prometheus-user-workload-0 4/4 Running 1 3h prometheus-user-workload-1 4/4 Running 1 3h thanos-ruler-user-workload-0 3/3 Running 0 3h thanos-ruler-user-workload-1 3/3 Running 0 3h Additional resources User workload monitoring first steps 4.1.2.2. Granting users permission to configure monitoring for user-defined projects As a cluster administrator, you can assign the user-workload-monitoring-config-edit role to a user. This grants permission to configure and manage monitoring for user-defined projects without giving them permission to configure and manage core OpenShift Container Platform monitoring components. Prerequisites You have access to the cluster as a user with the cluster-admin cluster role. The user account that you are assigning the role to already exists. You have installed the OpenShift CLI ( oc ). Procedure Assign the user-workload-monitoring-config-edit role to a user in the openshift-user-workload-monitoring project: USD oc -n openshift-user-workload-monitoring adm policy add-role-to-user \ user-workload-monitoring-config-edit <user> \ --role-namespace openshift-user-workload-monitoring Verify that the user is correctly assigned to the user-workload-monitoring-config-edit role by displaying the related role binding: USD oc describe rolebinding <role_binding_name> -n openshift-user-workload-monitoring Example command USD oc describe rolebinding user-workload-monitoring-config-edit -n openshift-user-workload-monitoring Example output Name: user-workload-monitoring-config-edit Labels: <none> Annotations: <none> Role: Kind: Role Name: user-workload-monitoring-config-edit Subjects: Kind Name Namespace ---- ---- --------- User user1 1 1 In this example, user1 is assigned to the user-workload-monitoring-config-edit role. 4.1.3. Enabling alert routing for user-defined projects In OpenShift Container Platform, an administrator can enable alert routing for user-defined projects. This process consists of the following steps: Enable alert routing for user-defined projects: Use the default platform Alertmanager instance. Use a separate Alertmanager instance only for user-defined projects. 
Grant users permission to configure alert routing for user-defined projects. After you complete these steps, developers and other users can configure custom alerts and alert routing for their user-defined projects. Additional resources Understanding alert routing for user-defined projects 4.1.3.1. Enabling the platform Alertmanager instance for user-defined alert routing You can allow users to create user-defined alert routing configurations that use the main platform instance of Alertmanager. Prerequisites You have access to the cluster as a user with the cluster-admin cluster role. A cluster administrator has enabled monitoring for user-defined projects. You have installed the OpenShift CLI ( oc ). Procedure Edit the cluster-monitoring-config ConfigMap object: USD oc -n openshift-monitoring edit configmap cluster-monitoring-config Add enableUserAlertmanagerConfig: true in the alertmanagerMain section under data/config.yaml : apiVersion: v1 kind: ConfigMap metadata: name: cluster-monitoring-config namespace: openshift-monitoring data: config.yaml: | # ... alertmanagerMain: enableUserAlertmanagerConfig: true 1 # ... 1 Set the enableUserAlertmanagerConfig value to true to allow users to create user-defined alert routing configurations that use the main platform instance of Alertmanager. Save the file to apply the changes. The new configuration is applied automatically. 4.1.3.2. Enabling a separate Alertmanager instance for user-defined alert routing In some clusters, you might want to deploy a dedicated Alertmanager instance for user-defined projects, which can help reduce the load on the default platform Alertmanager instance and can better separate user-defined alerts from default platform alerts. In these cases, you can optionally enable a separate instance of Alertmanager to send alerts for user-defined projects only. Prerequisites You have access to the cluster as a user with the cluster-admin cluster role. You have enabled monitoring for user-defined projects. You have installed the OpenShift CLI ( oc ). Procedure Edit the user-workload-monitoring-config ConfigMap object: USD oc -n openshift-user-workload-monitoring edit configmap user-workload-monitoring-config Add enabled: true and enableAlertmanagerConfig: true in the alertmanager section under data/config.yaml : apiVersion: v1 kind: ConfigMap metadata: name: user-workload-monitoring-config namespace: openshift-user-workload-monitoring data: config.yaml: | alertmanager: enabled: true 1 enableAlertmanagerConfig: true 2 1 Set the enabled value to true to enable a dedicated instance of the Alertmanager for user-defined projects in a cluster. Set the value to false or omit the key entirely to disable the Alertmanager for user-defined projects. If you set this value to false or if the key is omitted, user-defined alerts are routed to the default platform Alertmanager instance. 2 Set the enableAlertmanagerConfig value to true to enable users to define their own alert routing configurations with AlertmanagerConfig objects. Save the file to apply the changes. The dedicated instance of Alertmanager for user-defined projects starts automatically. Verification Verify that the user-workload Alertmanager instance has started: # oc -n openshift-user-workload-monitoring get alertmanager Example output NAME VERSION REPLICAS AGE user-workload 0.24.0 2 100s 4.1.3.3. Granting users permission to configure alert routing for user-defined projects You can grant users permission to configure alert routing for user-defined projects. 
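Once a user has been granted this permission (see the procedure that follows), they can create AlertmanagerConfig objects in their own project. The following is a hedged sketch of such an object; the apiVersion may be v1alpha1 or v1beta1 depending on the Prometheus Operator version in the cluster, and the namespace, receiver name, and webhook URL are illustrative assumptions, not values from this documentation.

apiVersion: monitoring.coreos.com/v1beta1   # may be v1alpha1 on older Prometheus Operator versions
kind: AlertmanagerConfig
metadata:
  name: example-routing        # hypothetical name
  namespace: ns1               # the user-defined project
spec:
  route:
    receiver: team-webhook
    groupBy:
      - alertname
  receivers:
    - name: team-webhook
      webhookConfigs:
        - url: https://example.com/alert-hook   # assumed receiver endpoint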
Prerequisites You have access to the cluster as a user with the cluster-admin cluster role. You have enabled monitoring for user-defined projects. The user account that you are assigning the role to already exists. You have installed the OpenShift CLI ( oc ). Procedure Assign the alert-routing-edit cluster role to a user in the user-defined project: USD oc -n <namespace> adm policy add-role-to-user alert-routing-edit <user> 1 1 For <namespace> , substitute the namespace for the user-defined project, such as ns1 . For <user> , substitute the username for the account to which you want to assign the role. Additional resources Configuring alert notifications 4.1.4. Granting users permissions for monitoring for user-defined projects As a cluster administrator, you can monitor all core OpenShift Container Platform and user-defined projects. You can also grant developers and other users different permissions: Monitoring user-defined projects Configuring the components that monitor user-defined projects Configuring alert routing for user-defined projects Managing alerts and silences for user-defined projects You can grant the permissions by assigning one of the following monitoring roles or cluster roles: Table 4.2. Monitoring roles Role name Description Project user-workload-monitoring-config-edit Users with this role can edit the user-workload-monitoring-config ConfigMap object to configure Prometheus, Prometheus Operator, Alertmanager, and Thanos Ruler for user-defined workload monitoring. openshift-user-workload-monitoring monitoring-alertmanager-api-reader Users with this role have read access to the user-defined Alertmanager API for all projects, if the user-defined Alertmanager is enabled. openshift-user-workload-monitoring monitoring-alertmanager-api-writer Users with this role have read and write access to the user-defined Alertmanager API for all projects, if the user-defined Alertmanager is enabled. openshift-user-workload-monitoring Table 4.3. Monitoring cluster roles Cluster role name Description Project monitoring-rules-view Users with this cluster role have read access to PrometheusRule custom resources (CRs) for user-defined projects. They can also view the alerts and silences in the Developer perspective of the OpenShift Container Platform web console. Can be bound with RoleBinding to any user project. monitoring-rules-edit Users with this cluster role can create, modify, and delete PrometheusRule CRs for user-defined projects. They can also manage alerts and silences in the Developer perspective of the OpenShift Container Platform web console. Can be bound with RoleBinding to any user project. monitoring-edit Users with this cluster role have the same privileges as users with the monitoring-rules-edit cluster role. Additionally, users can create, read, modify, and delete ServiceMonitor and PodMonitor resources to scrape metrics from services and pods. Can be bound with RoleBinding to any user project. alert-routing-edit Users with this cluster role can create, update, and delete AlertmanagerConfig CRs for user-defined projects. Can be bound with RoleBinding to any user project. Additional resources Granting users permission to configure monitoring for user-defined projects Granting users permission to configure alert routing for user-defined projects 4.1.4.1. Granting user permissions by using the web console You can grant users permissions for the openshift-monitoring project or their own projects, by using the OpenShift Container Platform web console. 
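The web console procedure that follows creates a namespaced RoleBinding behind the scenes. As a reference, a hedged YAML sketch of an equivalent object is shown here; the binding name, the ns1 project, the user name, and the choice of the monitoring-rules-edit cluster role are illustrative assumptions.

apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: monitoring-rules-edit-user1     # hypothetical binding name
  namespace: ns1                        # the project selected in the Namespace field
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole                     # use kind: Role when binding the namespaced monitoring roles
  name: monitoring-rules-edit
subjects:
  - apiGroup: rbac.authorization.k8s.io
    kind: User
    name: user1                         # the user entered in the Subject Name field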
Prerequisites You have access to the cluster as a user with the cluster-admin cluster role. The user account that you are assigning the role to already exists. Procedure In the Administrator perspective of the OpenShift Container Platform web console, go to User Management RoleBindings Create binding . In the Binding Type section, select the Namespace Role Binding type. In the Name field, enter a name for the role binding. In the Namespace field, select the project where you want to grant the access. Important The monitoring role or cluster role permissions that you grant to a user by using this procedure apply only to the project that you select in the Namespace field. Select a monitoring role or cluster role from the Role Name list. In the Subject section, select User . In the Subject Name field, enter the name of the user. Select Create to apply the role binding. 4.1.4.2. Granting user permissions by using the CLI You can grant users permissions for the openshift-monitoring project or their own projects, by using the OpenShift CLI ( oc ). Important Whichever role or cluster role you choose, you must bind it against a specific project as a cluster administrator. Prerequisites You have access to the cluster as a user with the cluster-admin cluster role. The user account that you are assigning the role to already exists. You have installed the OpenShift CLI ( oc ). Procedure To assign a monitoring role to a user for a project, enter the following command: USD oc adm policy add-role-to-user <role> <user> -n <namespace> --role-namespace <namespace> 1 1 Substitute <role> with the wanted monitoring role, <user> with the user to whom you want to assign the role, and <namespace> with the project where you want to grant the access. To assign a monitoring cluster role to a user for a project, enter the following command: USD oc adm policy add-cluster-role-to-user <cluster-role> <user> -n <namespace> 1 1 Substitute <cluster-role> with the wanted monitoring cluster role, <user> with the user to whom you want to assign the cluster role, and <namespace> with the project where you want to grant the access. 4.1.5. Excluding a user-defined project from monitoring Individual user-defined projects can be excluded from user workload monitoring. To do so, add the openshift.io/user-monitoring label to the project's namespace with a value of false . Procedure Add the label to the project namespace: USD oc label namespace my-project 'openshift.io/user-monitoring=false' To re-enable monitoring, remove the label from the namespace: USD oc label namespace my-project 'openshift.io/user-monitoring-' Note If there were any active monitoring targets for the project, it may take a few minutes for Prometheus to stop scraping them after adding the label. 4.1.6. Disabling monitoring for user-defined projects After enabling monitoring for user-defined projects, you can disable it again by setting enableUserWorkload: false in the cluster monitoring ConfigMap object. Note Alternatively, you can remove enableUserWorkload: true to disable monitoring for user-defined projects. Procedure Edit the cluster-monitoring-config ConfigMap object: USD oc -n openshift-monitoring edit configmap cluster-monitoring-config Set enableUserWorkload: to false under data/config.yaml : apiVersion: v1 kind: ConfigMap metadata: name: cluster-monitoring-config namespace: openshift-monitoring data: config.yaml: | enableUserWorkload: false Save the file to apply the changes. Monitoring for user-defined projects is then disabled automatically. 
Check that the prometheus-operator , prometheus-user-workload and thanos-ruler-user-workload pods are terminated in the openshift-user-workload-monitoring project. This might take a short while: USD oc -n openshift-user-workload-monitoring get pod Example output No resources found in openshift-user-workload-monitoring project. Note The user-workload-monitoring-config ConfigMap object in the openshift-user-workload-monitoring project is not automatically deleted when monitoring for user-defined projects is disabled. This is to preserve any custom configurations that you may have created in the ConfigMap object. 4.2. Configuring performance and scalability for user workload monitoring You can configure the monitoring stack to optimize the performance and scale of your clusters. The following documentation provides information about how to distribute the monitoring components and control the impact of the monitoring stack on CPU and memory resources. 4.2.1. Controlling the placement and distribution of monitoring components You can move the monitoring stack components to specific nodes: Use the nodeSelector constraint with labeled nodes to move any of the monitoring stack components to specific nodes. Assign tolerations to enable moving components to tainted nodes. By doing so, you control the placement and distribution of the monitoring components across a cluster. By controlling placement and distribution of monitoring components, you can optimize system resource use, improve performance, and separate workloads based on specific requirements or policies. Additional resources Using node selectors to move monitoring components 4.2.1.1. Moving monitoring components to different nodes You can move any of the components that monitor workloads for user-defined projects to specific worker nodes. Warning It is not permitted to move components to control plane or infrastructure nodes. Prerequisites You have access to the cluster as a user with the cluster-admin cluster role or as a user with the user-workload-monitoring-config-edit role in the openshift-user-workload-monitoring project. A cluster administrator has enabled monitoring for user-defined projects. You have installed the OpenShift CLI ( oc ). Procedure If you have not done so yet, add a label to the nodes on which you want to run the monitoring components: USD oc label nodes <node_name> <node_label> 1 1 Replace <node_name> with the name of the node where you want to add the label. Replace <node_label> with the name of the wanted label. Edit the user-workload-monitoring-config ConfigMap object in the openshift-user-workload-monitoring project: USD oc -n openshift-user-workload-monitoring edit configmap user-workload-monitoring-config Specify the node labels for the nodeSelector constraint for the component under data/config.yaml : apiVersion: v1 kind: ConfigMap metadata: name: user-workload-monitoring-config namespace: openshift-user-workload-monitoring data: config.yaml: | # ... <component>: 1 nodeSelector: <node_label_1> 2 <node_label_2> 3 # ... 1 Substitute <component> with the appropriate monitoring stack component name. 2 Substitute <node_label_1> with the label you added to the node. 3 Optional: Specify additional labels. If you specify additional labels, the pods for the component are only scheduled on the nodes that contain all of the specified labels. Note If monitoring components remain in a Pending state after configuring the nodeSelector constraint, check the pod events for errors relating to taints and tolerations. 
Save the file to apply the changes. The components specified in the new configuration are automatically moved to the new nodes, and the pods affected by the new configuration are redeployed. Additional resources Enabling monitoring for user-defined projects Understanding how to update labels on nodes Placing pods on specific nodes using node selectors nodeSelector (Kubernetes documentation) 4.2.1.2. Assigning tolerations to monitoring components You can assign tolerations to the components that monitor user-defined projects, to enable moving them to tainted worker nodes. Scheduling is not permitted on control plane or infrastructure nodes. Prerequisites You have access to the cluster as a user with the cluster-admin cluster role, or as a user with the user-workload-monitoring-config-edit role in the openshift-user-workload-monitoring project. A cluster administrator has enabled monitoring for user-defined projects. You have installed the OpenShift CLI ( oc ). Procedure Edit the user-workload-monitoring-config config map in the openshift-user-workload-monitoring project: USD oc -n openshift-user-workload-monitoring edit configmap user-workload-monitoring-config Specify tolerations for the component: apiVersion: v1 kind: ConfigMap metadata: name: user-workload-monitoring-config namespace: openshift-user-workload-monitoring data: config.yaml: | <component>: tolerations: <toleration_specification> Substitute <component> and <toleration_specification> accordingly. For example, oc adm taint nodes node1 key1=value1:NoSchedule adds a taint to node1 with the key key1 and the value value1 . This prevents monitoring components from deploying pods on node1 unless a toleration is configured for that taint. The following example configures the thanosRuler component to tolerate the example taint: apiVersion: v1 kind: ConfigMap metadata: name: user-workload-monitoring-config namespace: openshift-user-workload-monitoring data: config.yaml: | thanosRuler: tolerations: - key: "key1" operator: "Equal" value: "value1" effect: "NoSchedule" Save the file to apply the changes. The pods affected by the new configuration are automatically redeployed. Additional resources Enabling monitoring for user-defined projects Controlling pod placement using node taints Taints and Tolerations (Kubernetes documentation) 4.2.2. Managing CPU and memory resources for monitoring components You can ensure that the containers that run monitoring components have enough CPU and memory resources by specifying values for resource limits and requests for those components. You can configure these limits and requests for monitoring components that monitor user-defined projects in the openshift-user-workload-monitoring namespace. 4.2.2.1. Specifying limits and requests To configure CPU and memory resources, specify values for resource limits and requests in the user-workload-monitoring-config ConfigMap object in the openshift-user-workload-monitoring namespace. Prerequisites You have access to the cluster as a user with the cluster-admin cluster role, or as a user with the user-workload-monitoring-config-edit role in the openshift-user-workload-monitoring project. You have installed the OpenShift CLI ( oc ). Procedure Edit the user-workload-monitoring-config config map in the openshift-user-workload-monitoring project: USD oc -n openshift-user-workload-monitoring edit configmap user-workload-monitoring-config Add values to define resource limits and requests for each component you want to configure. 
Important Ensure that the value set for a limit is always higher than the value set for a request. Otherwise, an error will occur, and the container will not run. Example of setting resource limits and requests apiVersion: v1 kind: ConfigMap metadata: name: user-workload-monitoring-config namespace: openshift-user-workload-monitoring data: config.yaml: | alertmanager: resources: limits: cpu: 500m memory: 1Gi requests: cpu: 200m memory: 500Mi prometheus: resources: limits: cpu: 500m memory: 3Gi requests: cpu: 200m memory: 500Mi thanosRuler: resources: limits: cpu: 500m memory: 1Gi requests: cpu: 200m memory: 500Mi Save the file to apply the changes. The pods affected by the new configuration are automatically redeployed. Additional resources About specifying limits and requests for monitoring components Kubernetes requests and limits documentation (Kubernetes documentation) 4.2.3. Controlling the impact of unbound metrics attributes in user-defined projects Cluster administrators can use the following measures to control the impact of unbound metrics attributes in user-defined projects: Limit the number of samples that can be accepted per target scrape in user-defined projects Limit the number of scraped labels, the length of label names, and the length of label values Create alerts that fire when a scrape sample threshold is reached or when the target cannot be scraped Note Limiting scrape samples can help prevent the issues caused by adding many unbound attributes to labels. Developers can also prevent the underlying cause by limiting the number of unbound attributes that they define for metrics. Using attributes that are bound to a limited set of possible values reduces the number of potential key-value pair combinations. Additional resources Controlling the impact of unbound metrics attributes in user-defined projects Enabling monitoring for user-defined projects Determining why Prometheus is consuming a lot of disk space 4.2.3.1. Setting scrape sample and label limits for user-defined projects You can limit the number of samples that can be accepted per target scrape in user-defined projects. You can also limit the number of scraped labels, the length of label names, and the length of label values. Warning If you set sample or label limits, no further sample data is ingested for that target scrape after the limit is reached. Prerequisites You have access to the cluster as a user with the cluster-admin cluster role, or as a user with the user-workload-monitoring-config-edit role in the openshift-user-workload-monitoring project. A cluster administrator has enabled monitoring for user-defined projects. You have installed the OpenShift CLI ( oc ). Procedure Edit the user-workload-monitoring-config ConfigMap object in the openshift-user-workload-monitoring project: USD oc -n openshift-user-workload-monitoring edit configmap user-workload-monitoring-config Add the enforcedSampleLimit configuration to data/config.yaml to limit the number of samples that can be accepted per target scrape in user-defined projects: apiVersion: v1 kind: ConfigMap metadata: name: user-workload-monitoring-config namespace: openshift-user-workload-monitoring data: config.yaml: | prometheus: enforcedSampleLimit: 50000 1 1 A value is required if this parameter is specified. This enforcedSampleLimit example limits the number of samples that can be accepted per target scrape in user-defined projects to 50,000. 
Add the enforcedLabelLimit , enforcedLabelNameLengthLimit , and enforcedLabelValueLengthLimit configurations to data/config.yaml to limit the number of scraped labels, the length of label names, and the length of label values in user-defined projects: apiVersion: v1 kind: ConfigMap metadata: name: user-workload-monitoring-config namespace: openshift-user-workload-monitoring data: config.yaml: | prometheus: enforcedLabelLimit: 500 1 enforcedLabelNameLengthLimit: 50 2 enforcedLabelValueLengthLimit: 600 3 1 Specifies the maximum number of labels per scrape. The default value is 0 , which specifies no limit. 2 Specifies the maximum length in characters of a label name. The default value is 0 , which specifies no limit. 3 Specifies the maximum length in characters of a label value. The default value is 0 , which specifies no limit. Save the file to apply the changes. The limits are applied automatically. 4.2.3.2. Creating scrape sample alerts You can create alerts that notify you when: The target cannot be scraped or is not available for the specified for duration A scrape sample threshold is reached or is exceeded for the specified for duration Prerequisites You have access to the cluster as a user with the cluster-admin cluster role, or as a user with the user-workload-monitoring-config-edit role in the openshift-user-workload-monitoring project. A cluster administrator has enabled monitoring for user-defined projects. You have limited the number of samples that can be accepted per target scrape in user-defined projects, by using enforcedSampleLimit . You have installed the OpenShift CLI ( oc ). Procedure Create a YAML file with alerts that inform you when the targets are down and when the enforced sample limit is approaching. The file in this example is called monitoring-stack-alerts.yaml : apiVersion: monitoring.coreos.com/v1 kind: PrometheusRule metadata: labels: prometheus: k8s role: alert-rules name: monitoring-stack-alerts 1 namespace: ns1 2 spec: groups: - name: general.rules rules: - alert: TargetDown 3 annotations: message: '{{ printf "%.4g" $value }}% of the {{ $labels.job }}/{{ $labels.service }} targets in {{ $labels.namespace }} namespace are down.' 4 expr: 100 * (count(up == 0) BY (job, namespace, service) / count(up) BY (job, namespace, service)) > 10 for: 10m 5 labels: severity: warning 6 - alert: ApproachingEnforcedSamplesLimit 7 annotations: message: '{{ $labels.container }} container of the {{ $labels.pod }} pod in the {{ $labels.namespace }} namespace consumes {{ $value | humanizePercentage }} of the samples limit budget.' 8 expr: scrape_samples_scraped/50000 > 0.8 9 for: 10m 10 labels: severity: warning 11 1 Defines the name of the alerting rule. 2 Specifies the user-defined project where the alerting rule will be deployed. 3 The TargetDown alert will fire if the target cannot be scraped or is not available for the for duration. 4 The message that will be output when the TargetDown alert fires. 5 The conditions for the TargetDown alert must be true for this duration before the alert is fired. 6 Defines the severity for the TargetDown alert. 7 The ApproachingEnforcedSamplesLimit alert will fire when the defined scrape sample threshold is reached or exceeded for the specified for duration. 8 The message that will be output when the ApproachingEnforcedSamplesLimit alert fires. 9 The threshold for the ApproachingEnforcedSamplesLimit alert. 
In this example the alert will fire when the number of samples per target scrape has exceeded 80% of the enforced sample limit of 50000 . The for duration must also have passed before the alert will fire. The <number> in the expression scrape_samples_scraped/<number> > <threshold> must match the enforcedSampleLimit value defined in the user-workload-monitoring-config ConfigMap object. 10 The conditions for the ApproachingEnforcedSamplesLimit alert must be true for this duration before the alert is fired. 11 Defines the severity for the ApproachingEnforcedSamplesLimit alert. Apply the configuration to the user-defined project: USD oc apply -f monitoring-stack-alerts.yaml 4.2.4. Configuring pod topology spread constraints You can configure pod topology spread constraints for all the pods for user-defined monitoring to control how pod replicas are scheduled to nodes across zones. This ensures that the pods are highly available and run more efficiently, because workloads are spread across nodes in different data centers or hierarchical infrastructure zones. You can configure pod topology spread constraints for monitoring pods by using the user-workload-monitoring-config config map. Prerequisites You have access to the cluster as a user with the cluster-admin cluster role or as a user with the user-workload-monitoring-config-edit role in the openshift-user-workload-monitoring project. A cluster administrator has enabled monitoring for user-defined projects. You have installed the OpenShift CLI ( oc ). Procedure Edit the user-workload-monitoring-config config map in the openshift-user-workload-monitoring project: USD oc -n openshift-user-workload-monitoring edit configmap user-workload-monitoring-config Add the following settings under the data/config.yaml field to configure pod topology spread constraints: apiVersion: v1 kind: ConfigMap metadata: name: user-workload-monitoring-config namespace: openshift-user-workload-monitoring data: config.yaml: | <component>: 1 topologySpreadConstraints: - maxSkew: <n> 2 topologyKey: <key> 3 whenUnsatisfiable: <value> 4 labelSelector: 5 <match_option> 1 Specify a name of the component for which you want to set up pod topology spread constraints. 2 Specify a numeric value for maxSkew , which defines the degree to which pods are allowed to be unevenly distributed. 3 Specify a key of node labels for topologyKey . Nodes that have a label with this key and identical values are considered to be in the same topology. The scheduler tries to put a balanced number of pods into each domain. 4 Specify a value for whenUnsatisfiable . Available options are DoNotSchedule and ScheduleAnyway . Specify DoNotSchedule if you want the maxSkew value to define the maximum difference allowed between the number of matching pods in the target topology and the global minimum. Specify ScheduleAnyway if you want the scheduler to still schedule the pod but to give higher priority to nodes that might reduce the skew. 5 Specify labelSelector to find matching pods. Pods that match this label selector are counted to determine the number of pods in their corresponding topology domain. Example configuration for Thanos Ruler apiVersion: v1 kind: ConfigMap metadata: name: user-workload-monitoring-config namespace: openshift-user-workload-monitoring data: config.yaml: | thanosRuler: topologySpreadConstraints: - maxSkew: 1 topologyKey: monitoring whenUnsatisfiable: ScheduleAnyway labelSelector: matchLabels: app.kubernetes.io/name: thanos-ruler Save the file to apply the changes. 
The pods affected by the new configuration are automatically redeployed. Additional resources About pod topology spread constraints for monitoring Controlling pod placement by using pod topology spread constraints Pod Topology Spread Constraints (Kubernetes documentation) 4.3. Storing and recording data for user workload monitoring Store and record your metrics and alerting data, configure logs to specify which activities are recorded, control how long Prometheus retains stored data, and set the maximum amount of disk space for the data. These actions help you protect your data and use them for troubleshooting. 4.3.1. Configuring persistent storage Run cluster monitoring with persistent storage to gain the following benefits: Protect your metrics and alerting data from data loss by storing them in a persistent volume (PV). As a result, they can survive pods being restarted or recreated. Avoid getting duplicate notifications and losing silences for alerts when the Alertmanager pods are restarted. For production environments, it is highly recommended to configure persistent storage. Important In multi-node clusters, you must configure persistent storage for Prometheus, Alertmanager, and Thanos Ruler to ensure high availability. 4.3.1.1. Persistent storage prerequisites Dedicate sufficient persistent storage to ensure that the disk does not become full. Use Filesystem as the storage type value for the volumeMode parameter when you configure the persistent volume. Important Do not use a raw block volume, which is described with volumeMode: Block in the PersistentVolume resource. Prometheus cannot use raw block volumes. Prometheus does not support file systems that are not POSIX compliant. For example, some NFS file system implementations are not POSIX compliant. If you want to use an NFS file system for storage, verify with the vendor that their NFS implementation is fully POSIX compliant. 4.3.1.2. Configuring a persistent volume claim To use a persistent volume (PV) for monitoring components, you must configure a persistent volume claim (PVC). Prerequisites You have access to the cluster as a user with the cluster-admin cluster role, or as a user with the user-workload-monitoring-config-edit role in the openshift-user-workload-monitoring project. A cluster administrator has enabled monitoring for user-defined projects. You have installed the OpenShift CLI ( oc ). Procedure Edit the user-workload-monitoring-config config map in the openshift-user-workload-monitoring project: USD oc -n openshift-user-workload-monitoring edit configmap user-workload-monitoring-config Add your PVC configuration for the component under data/config.yaml : apiVersion: v1 kind: ConfigMap metadata: name: user-workload-monitoring-config namespace: openshift-user-workload-monitoring data: config.yaml: | <component>: 1 volumeClaimTemplate: spec: storageClassName: <storage_class> 2 resources: requests: storage: <amount_of_storage> 3 1 Specify the monitoring component for which you want to configure the PVC. 2 Specify an existing storage class. If a storage class is not specified, the default storage class is used. 3 Specify the amount of required storage. 
The following example configures a PVC that claims persistent storage for Thanos Ruler: Example PVC configuration apiVersion: v1 kind: ConfigMap metadata: name: user-workload-monitoring-config namespace: openshift-user-workload-monitoring data: config.yaml: | thanosRuler: volumeClaimTemplate: spec: storageClassName: my-storage-class resources: requests: storage: 10Gi Note Storage requirements for the thanosRuler component depend on the number of rules that are evaluated and how many samples each rule generates. Save the file to apply the changes. The pods affected by the new configuration are automatically redeployed and the new storage configuration is applied. Warning When you update the config map with a PVC configuration, the affected StatefulSet object is recreated, resulting in a temporary service outage. Additional resources Understanding persistent storage PersistentVolumeClaims (Kubernetes documentation) 4.3.1.3. Resizing a persistent volume You can resize a persistent volume (PV) for the instances of Prometheus, Thanos Ruler, and Alertmanager. You need to manually expand a persistent volume claim (PVC), and then update the config map in which the component is configured. Important You can only expand the size of the PVC. Shrinking the storage size is not possible. Prerequisites You have access to the cluster as a user with the cluster-admin cluster role, or as a user with the user-workload-monitoring-config-edit role in the openshift-user-workload-monitoring project. A cluster administrator has enabled monitoring for user-defined projects. You have configured at least one PVC for components that monitor user-defined projects. You have installed the OpenShift CLI ( oc ). Procedure Manually expand a PVC with the updated storage request. For more information, see "Expanding persistent volume claims (PVCs) with a file system" in Expanding persistent volumes . Edit the user-workload-monitoring-config config map in the openshift-user-workload-monitoring project: USD oc -n openshift-user-workload-monitoring edit configmap user-workload-monitoring-config Add a new storage size for the PVC configuration for the component under data/config.yaml : apiVersion: v1 kind: ConfigMap metadata: name: user-workload-monitoring-config namespace: openshift-user-workload-monitoring data: config.yaml: | <component>: 1 volumeClaimTemplate: spec: resources: requests: storage: <amount_of_storage> 2 1 The component for which you want to change the storage size. 2 Specify the new size for the storage volume. It must be greater than the value. The following example sets the new PVC request to 20 gigabytes for Thanos Ruler: Example storage configuration for thanosRuler apiVersion: v1 kind: ConfigMap metadata: name: user-workload-monitoring-config namespace: openshift-user-workload-monitoring data: config.yaml: | thanosRuler: volumeClaimTemplate: spec: resources: requests: storage: 20Gi Note Storage requirements for the thanosRuler component depend on the number of rules that are evaluated and how many samples each rule generates. Save the file to apply the changes. The pods affected by the new configuration are automatically redeployed. Warning When you update the config map with a new storage size, the affected StatefulSet object is recreated, resulting in a temporary service outage. Additional resources Prometheus database storage requirements Expanding persistent volume claims (PVCs) with a file system 4.3.2. 
Modifying retention time and size for Prometheus metrics data By default, Prometheus retains metrics data for 24 hours for monitoring for user-defined projects. You can modify the retention time for the Prometheus instance to change when the data is deleted. You can also set the maximum amount of disk space the retained metrics data uses. Note Data compaction occurs every two hours. Therefore, a persistent volume (PV) might fill up before compaction, potentially exceeding the retentionSize limit. In such cases, the KubePersistentVolumeFillingUp alert fires until the space on a PV is lower than the retentionSize limit. Prerequisites You have access to the cluster as a user with the cluster-admin cluster role, or as a user with the user-workload-monitoring-config-edit role in the openshift-user-workload-monitoring project. A cluster administrator has enabled monitoring for user-defined projects. You have installed the OpenShift CLI ( oc ). Procedure Edit the user-workload-monitoring-config config map in the openshift-user-workload-monitoring project: USD oc -n openshift-user-workload-monitoring edit configmap user-workload-monitoring-config Add the retention time and size configuration under data/config.yaml : apiVersion: v1 kind: ConfigMap metadata: name: user-workload-monitoring-config namespace: openshift-user-workload-monitoring data: config.yaml: | prometheus: retention: <time_specification> 1 retentionSize: <size_specification> 2 1 The retention time: a number directly followed by ms (milliseconds), s (seconds), m (minutes), h (hours), d (days), w (weeks), or y (years). You can also combine time values for specific times, such as 1h30m15s . 2 The retention size: a number directly followed by B (bytes), KB (kilobytes), MB (megabytes), GB (gigabytes), TB (terabytes), PB (petabytes), and EB (exabytes). The following example sets the retention time to 24 hours and the retention size to 10 gigabytes for the Prometheus instance: Example of setting retention time for Prometheus apiVersion: v1 kind: ConfigMap metadata: name: user-workload-monitoring-config namespace: openshift-user-workload-monitoring data: config.yaml: | prometheus: retention: 24h retentionSize: 10GB Save the file to apply the changes. The pods affected by the new configuration are automatically redeployed. 4.3.2.1. Modifying the retention time for Thanos Ruler metrics data By default, for user-defined projects, Thanos Ruler automatically retains metrics data for 24 hours. You can modify the retention time to change how long this data is retained by specifying a time value in the user-workload-monitoring-config config map in the openshift-user-workload-monitoring namespace. Prerequisites You have access to the cluster as a user with the cluster-admin cluster role or as a user with the user-workload-monitoring-config-edit role in the openshift-user-workload-monitoring project. A cluster administrator has enabled monitoring for user-defined projects. You have installed the OpenShift CLI ( oc ). 
Procedure Edit the user-workload-monitoring-config ConfigMap object in the openshift-user-workload-monitoring project: USD oc -n openshift-user-workload-monitoring edit configmap user-workload-monitoring-config Add the retention time configuration under data/config.yaml : apiVersion: v1 kind: ConfigMap metadata: name: user-workload-monitoring-config namespace: openshift-user-workload-monitoring data: config.yaml: | thanosRuler: retention: <time_specification> 1 1 Specify the retention time in the following format: a number directly followed by ms (milliseconds), s (seconds), m (minutes), h (hours), d (days), w (weeks), or y (years). You can also combine time values for specific times, such as 1h30m15s . The default is 24h . The following example sets the retention time to 10 days for Thanos Ruler data: apiVersion: v1 kind: ConfigMap metadata: name: user-workload-monitoring-config namespace: openshift-user-workload-monitoring data: config.yaml: | thanosRuler: retention: 10d Save the file to apply the changes. The pods affected by the new configuration are automatically redeployed. Additional resources Retention time and size for Prometheus metrics Enabling monitoring for user-defined projects Prometheus database storage requirements Recommended configurable storage technology Understanding persistent storage Optimizing storage 4.3.3. Setting log levels for monitoring components You can configure the log level for Alertmanager, Prometheus Operator, Prometheus, and Thanos Ruler. The following log levels can be applied to the relevant component in the user-workload-monitoring-config ConfigMap object: debug . Log debug, informational, warning, and error messages. info . Log informational, warning, and error messages. warn . Log warning and error messages only. error . Log error messages only. The default log level is info . Prerequisites You have access to the cluster as a user with the cluster-admin cluster role or as a user with the user-workload-monitoring-config-edit role in the openshift-user-workload-monitoring project. A cluster administrator has enabled monitoring for user-defined projects. You have installed the OpenShift CLI ( oc ). Procedure Edit the user-workload-monitoring-config config map in the openshift-user-workload-monitoring project: USD oc -n openshift-user-workload-monitoring edit configmap user-workload-monitoring-config Add logLevel: <log_level> for a component under data/config.yaml : apiVersion: v1 kind: ConfigMap metadata: name: user-workload-monitoring-config namespace: openshift-user-workload-monitoring data: config.yaml: | <component>: 1 logLevel: <log_level> 2 1 The monitoring stack component for which you are setting a log level. Available component values are prometheus , alertmanager , prometheusOperator , and thanosRuler . 2 The log level to set for the component. The available values are error , warn , info , and debug . The default value is info . Save the file to apply the changes. The pods affected by the new configuration are automatically redeployed. Confirm that the log level has been applied by reviewing the deployment or pod configuration in the related project. The following example checks the log level for the prometheus-operator deployment: USD oc -n openshift-user-workload-monitoring get deploy prometheus-operator -o yaml | grep "log-level" Example output - --log-level=debug Check that the pods for the component are running. 
The following example lists the status of pods: USD oc -n openshift-user-workload-monitoring get pods Note If an unrecognized logLevel value is included in the ConfigMap object, the pods for the component might not restart successfully. 4.3.4. Enabling the query log file for Prometheus You can configure Prometheus to write all queries that have been run by the engine to a log file. Important Because log rotation is not supported, only enable this feature temporarily when you need to troubleshoot an issue. After you finish troubleshooting, disable query logging by reverting the changes you made to the ConfigMap object to enable the feature. Prerequisites You have access to the cluster as a user with the cluster-admin cluster role or as a user with the user-workload-monitoring-config-edit role in the openshift-user-workload-monitoring project. A cluster administrator has enabled monitoring for user-defined projects. You have installed the OpenShift CLI ( oc ). Procedure Edit the user-workload-monitoring-config config map in the openshift-user-workload-monitoring project: USD oc -n openshift-user-workload-monitoring edit configmap user-workload-monitoring-config Add the queryLogFile parameter for Prometheus under data/config.yaml : apiVersion: v1 kind: ConfigMap metadata: name: user-workload-monitoring-config namespace: openshift-user-workload-monitoring data: config.yaml: | prometheus: queryLogFile: <path> 1 1 Add the full path to the file in which queries will be logged. Save the file to apply the changes. The pods affected by the new configuration are automatically redeployed. Verify that the pods for the component are running. The following sample command lists the status of pods: USD oc -n openshift-user-workload-monitoring get pods Example output ... prometheus-operator-776fcbbd56-2nbfm 2/2 Running 0 132m prometheus-user-workload-0 5/5 Running 1 132m prometheus-user-workload-1 5/5 Running 1 132m thanos-ruler-user-workload-0 3/3 Running 0 132m thanos-ruler-user-workload-1 3/3 Running 0 132m ... Read the query log: USD oc -n openshift-user-workload-monitoring exec prometheus-user-workload-0 -- cat <path> Important Revert the setting in the config map after you have examined the logged query information. Additional resources Enabling monitoring for user-defined projects 4.4. Configuring metrics for user workload monitoring Configure the collection of metrics to monitor how cluster components and your own workloads are performing. You can send ingested metrics to remote systems for long-term storage and add cluster ID labels to the metrics to identify the data coming from different clusters. Additional resources Understanding metrics 4.4.1. Configuring remote write storage You can configure remote write storage to enable Prometheus to send ingested metrics to remote systems for long-term storage. Doing so has no impact on how or for how long Prometheus stores metrics. Prerequisites You have access to the cluster as a user with the cluster-admin cluster role or as a user with the user-workload-monitoring-config-edit role in the openshift-user-workload-monitoring project. A cluster administrator has enabled monitoring for user-defined projects. You have installed the OpenShift CLI ( oc ). You have set up a remote write compatible endpoint (such as Thanos) and know the endpoint URL. See the Prometheus remote endpoints and storage documentation for information about endpoints that are compatible with the remote write feature. 
Important Red Hat only provides information for configuring remote write senders and does not offer guidance on configuring receiver endpoints. Customers are responsible for setting up their own endpoints that are remote-write compatible. Issues with endpoint receiver configurations are not included in Red Hat production support. You have set up authentication credentials in a Secret object for the remote write endpoint. You must create the secret in the openshift-user-workload-monitoring namespace. Warning To reduce security risks, use HTTPS and authentication to send metrics to an endpoint. Procedure Edit the user-workload-monitoring-config config map in the openshift-user-workload-monitoring project: USD oc -n openshift-user-workload-monitoring edit configmap user-workload-monitoring-config Add a remoteWrite: section under data/config.yaml/prometheus , as shown in the following example: apiVersion: v1 kind: ConfigMap metadata: name: user-workload-monitoring-config namespace: openshift-user-workload-monitoring data: config.yaml: | prometheus: remoteWrite: - url: "https://remote-write-endpoint.example.com" 1 <endpoint_authentication_credentials> 2 1 The URL of the remote write endpoint. 2 The authentication method and credentials for the endpoint. Currently supported authentication methods are AWS Signature Version 4, authentication using HTTP in an Authorization request header, Basic authentication, OAuth 2.0, and TLS client. See Supported remote write authentication settings for sample configurations of supported authentication methods. Add write relabel configuration values after the authentication credentials: apiVersion: v1 kind: ConfigMap metadata: name: user-workload-monitoring-config namespace: openshift-user-workload-monitoring data: config.yaml: | prometheus: remoteWrite: - url: "https://remote-write-endpoint.example.com" <endpoint_authentication_credentials> writeRelabelConfigs: - <your_write_relabel_configs> 1 1 Add configuration for metrics that you want to send to the remote endpoint. Example of forwarding a single metric called my_metric apiVersion: v1 kind: ConfigMap metadata: name: user-workload-monitoring-config namespace: openshift-user-workload-monitoring data: config.yaml: | prometheus: remoteWrite: - url: "https://remote-write-endpoint.example.com" writeRelabelConfigs: - sourceLabels: [__name__] regex: 'my_metric' action: keep Example of forwarding metrics called my_metric_1 and my_metric_2 in my_namespace namespace apiVersion: v1 kind: ConfigMap metadata: name: user-workload-monitoring-config namespace: openshift-user-workload-monitoring data: config.yaml: | prometheus: remoteWrite: - url: "https://remote-write-endpoint.example.com" writeRelabelConfigs: - sourceLabels: [__name__,namespace] regex: '(my_metric_1|my_metric_2);my_namespace' action: keep Save the file to apply the changes. The new configuration is applied automatically. 4.4.1.1. Supported remote write authentication settings You can use different methods to authenticate with a remote write endpoint. Currently supported authentication methods are AWS Signature Version 4, basic authentication, authorization, OAuth 2.0, and TLS client. The following table provides details about supported authentication methods for use with remote write. Authentication method Config map field Description AWS Signature Version 4 sigv4 This method uses AWS Signature Version 4 authentication to sign requests. You cannot use this method simultaneously with authorization, OAuth 2.0, or Basic authentication. 
Basic authentication basicAuth Basic authentication sets the authorization header on every remote write request with the configured username and password. authorization authorization Authorization sets the Authorization header on every remote write request using the configured token. OAuth 2.0 oauth2 An OAuth 2.0 configuration uses the client credentials grant type. Prometheus fetches an access token from tokenUrl with the specified client ID and client secret to access the remote write endpoint. You cannot use this method simultaneously with authorization, AWS Signature Version 4, or Basic authentication. TLS client tlsConfig A TLS client configuration specifies the CA certificate, the client certificate, and the client key file information used to authenticate with the remote write endpoint server using TLS. The sample configuration assumes that you have already created a CA certificate file, a client certificate file, and a client key file. 4.4.1.2. Example remote write authentication settings The following samples show different authentication settings you can use to connect to a remote write endpoint. Each sample also shows how to configure a corresponding Secret object that contains authentication credentials and other relevant settings. Each sample configures authentication for use with monitoring for user-defined projects in the openshift-user-workload-monitoring namespace. 4.4.1.2.1. Sample YAML for AWS Signature Version 4 authentication The following shows the settings for a sigv4 secret named sigv4-credentials in the openshift-user-workload-monitoring namespace. apiVersion: v1 kind: Secret metadata: name: sigv4-credentials namespace: openshift-user-workload-monitoring stringData: accessKey: <AWS_access_key> 1 secretKey: <AWS_secret_key> 2 type: Opaque 1 The AWS API access key. 2 The AWS API secret key. The following shows sample AWS Signature Version 4 remote write authentication settings that use a Secret object named sigv4-credentials in the openshift-user-workload-monitoring namespace: apiVersion: v1 kind: ConfigMap metadata: name: user-workload-monitoring-config namespace: openshift-user-workload-monitoring data: config.yaml: | prometheus: remoteWrite: - url: "https://authorization.example.com/api/write" sigv4: region: <AWS_region> 1 accessKey: name: sigv4-credentials 2 key: accessKey 3 secretKey: name: sigv4-credentials 4 key: secretKey 5 profile: <AWS_profile_name> 6 roleArn: <AWS_role_arn> 7 1 The AWS region. 2 4 The name of the Secret object containing the AWS API access credentials. 3 The key that contains the AWS API access key in the specified Secret object. 5 The key that contains the AWS API secret key in the specified Secret object. 6 The name of the AWS profile that is being used to authenticate. 7 The unique identifier for the Amazon Resource Name (ARN) assigned to your role. 4.4.1.2.2. Sample YAML for Basic authentication The following shows sample Basic authentication settings for a Secret object named rw-basic-auth in the openshift-user-workload-monitoring namespace: apiVersion: v1 kind: Secret metadata: name: rw-basic-auth namespace: openshift-user-workload-monitoring stringData: user: <basic_username> 1 password: <basic_password> 2 type: Opaque 1 The username. 2 The password. The following sample shows a basicAuth remote write configuration that uses a Secret object named rw-basic-auth in the openshift-user-workload-monitoring namespace. It assumes that you have already set up authentication credentials for the endpoint. 
apiVersion: v1 kind: ConfigMap metadata: name: user-workload-monitoring-config namespace: openshift-user-workload-monitoring data: config.yaml: | prometheus: remoteWrite: - url: "https://basicauth.example.com/api/write" basicAuth: username: name: rw-basic-auth 1 key: user 2 password: name: rw-basic-auth 3 key: password 4 1 3 The name of the Secret object that contains the authentication credentials. 2 The key that contains the username in the specified Secret object. 4 The key that contains the password in the specified Secret object. 4.4.1.2.3. Sample YAML for authentication with a bearer token using a Secret object The following shows bearer token settings for a Secret object named rw-bearer-auth in the openshift-user-workload-monitoring namespace: apiVersion: v1 kind: Secret metadata: name: rw-bearer-auth namespace: openshift-user-workload-monitoring stringData: token: <authentication_token> 1 type: Opaque 1 The authentication token. The following shows sample bearer token config map settings that use a Secret object named rw-bearer-auth in the openshift-user-workload-monitoring namespace: apiVersion: v1 kind: ConfigMap metadata: name: user-workload-monitoring-config namespace: openshift-user-workload-monitoring data: config.yaml: | enableUserWorkload: true prometheus: remoteWrite: - url: "https://authorization.example.com/api/write" authorization: type: Bearer 1 credentials: name: rw-bearer-auth 2 key: token 3 1 The authentication type of the request. The default value is Bearer . 2 The name of the Secret object that contains the authentication credentials. 3 The key that contains the authentication token in the specified Secret object. 4.4.1.2.4. Sample YAML for OAuth 2.0 authentication The following shows sample OAuth 2.0 settings for a Secret object named oauth2-credentials in the openshift-user-workload-monitoring namespace: apiVersion: v1 kind: Secret metadata: name: oauth2-credentials namespace: openshift-user-workload-monitoring stringData: id: <oauth2_id> 1 secret: <oauth2_secret> 2 type: Opaque 1 The OAuth 2.0 ID. 2 The OAuth 2.0 secret. The following shows an oauth2 remote write authentication sample configuration that uses a Secret object named oauth2-credentials in the openshift-user-workload-monitoring namespace: apiVersion: v1 kind: ConfigMap metadata: name: user-workload-monitoring-config namespace: openshift-user-workload-monitoring data: config.yaml: | prometheus: remoteWrite: - url: "https://test.example.com/api/write" oauth2: clientId: secret: name: oauth2-credentials 1 key: id 2 clientSecret: name: oauth2-credentials 3 key: secret 4 tokenUrl: https://example.com/oauth2/token 5 scopes: 6 - <scope_1> - <scope_2> endpointParams: 7 param1: <parameter_1> param2: <parameter_2> 1 3 The name of the corresponding Secret object. Note that clientId can alternatively refer to a ConfigMap object, although clientSecret must refer to a Secret object. 2 4 The key that contains the OAuth 2.0 credentials in the specified Secret object. 5 The URL used to fetch a token with the specified clientId and clientSecret . 6 The OAuth 2.0 scopes for the authorization request. These scopes limit what data the tokens can access. 7 The OAuth 2.0 authorization request parameters required for the authorization server. 4.4.1.2.5. Sample YAML for TLS client authentication The following shows sample TLS client settings for a tls Secret object named mtls-bundle in the openshift-user-workload-monitoring namespace.
apiVersion: v1 kind: Secret metadata: name: mtls-bundle namespace: openshift-user-workload-monitoring data: ca.crt: <ca_cert> 1 client.crt: <client_cert> 2 client.key: <client_key> 3 type: tls 1 The CA certificate in the Prometheus container with which to validate the server certificate. 2 The client certificate for authentication with the server. 3 The client key. The following sample shows a tlsConfig remote write authentication configuration that uses a TLS Secret object named mtls-bundle . apiVersion: v1 kind: ConfigMap metadata: name: user-workload-monitoring-config namespace: openshift-user-workload-monitoring data: config.yaml: | prometheus: remoteWrite: - url: "https://remote-write-endpoint.example.com" tlsConfig: ca: secret: name: mtls-bundle 1 key: ca.crt 2 cert: secret: name: mtls-bundle 3 key: client.crt 4 keySecret: name: mtls-bundle 5 key: client.key 6 1 3 5 The name of the corresponding Secret object that contains the TLS authentication credentials. Note that ca and cert can alternatively refer to a ConfigMap object, though keySecret must refer to a Secret object. 2 The key in the specified Secret object that contains the CA certificate for the endpoint. 4 The key in the specified Secret object that contains the client certificate for the endpoint. 6 The key in the specified Secret object that contains the client key secret. 4.4.1.3. Example remote write queue configuration You can use the queueConfig object for remote write to tune the remote write queue parameters. The following example shows the queue parameters with their default values for monitoring for user-defined projects in the openshift-user-workload-monitoring namespace. Example configuration of remote write parameters with default values apiVersion: v1 kind: ConfigMap metadata: name: user-workload-monitoring-config namespace: openshift-user-workload-monitoring data: config.yaml: | prometheus: remoteWrite: - url: "https://remote-write-endpoint.example.com" <endpoint_authentication_credentials> queueConfig: capacity: 10000 1 minShards: 1 2 maxShards: 50 3 maxSamplesPerSend: 2000 4 batchSendDeadline: 5s 5 minBackoff: 30ms 6 maxBackoff: 5s 7 retryOnRateLimit: false 8 1 The number of samples to buffer per shard before they are dropped from the queue. 2 The minimum number of shards. 3 The maximum number of shards. 4 The maximum number of samples per send. 5 The maximum time for a sample to wait in the buffer. 6 The initial time to wait before retrying a failed request. The time gets doubled for every retry up to the maxBackoff time. 7 The maximum time to wait before retrying a failed request. 8 Set this parameter to true to retry a request after receiving a 429 status code from the remote write storage. Additional resources Prometheus REST API reference for remote write Setting up remote write compatible endpoints (Prometheus documentation) Tuning remote write settings (Prometheus documentation) Understanding secrets 4.4.2. Creating cluster ID labels for metrics You can create cluster ID labels for metrics by adding the write_relabel settings for remote write storage in the user-workload-monitoring-config config map in the openshift-user-workload-monitoring namespace. Note When Prometheus scrapes user workload targets that expose a namespace label, the system stores this label as exported_namespace . This behavior ensures that the final namespace label value is equal to the namespace of the target pod.
You cannot override this default configuration by setting the value of the honorLabels field to true for PodMonitor or ServiceMonitor objects. Prerequisites You have access to the cluster as a user with the cluster-admin cluster role, or as a user with the user-workload-monitoring-config-edit role in the openshift-user-workload-monitoring project. A cluster administrator has enabled monitoring for user-defined projects. You have installed the OpenShift CLI ( oc ). You have configured remote write storage. Procedure Edit the user-workload-monitoring-config config map in the openshift-user-workload-monitoring project: USD oc -n openshift-user-workload-monitoring edit configmap user-workload-monitoring-config In the writeRelabelConfigs: section under data/config.yaml/prometheus/remoteWrite , add cluster ID relabel configuration values: apiVersion: v1 kind: ConfigMap metadata: name: user-workload-monitoring-config namespace: openshift-user-workload-monitoring data: config.yaml: | prometheus: remoteWrite: - url: "https://remote-write-endpoint.example.com" <endpoint_authentication_credentials> writeRelabelConfigs: 1 - <relabel_config> 2 1 Add a list of write relabel configurations for metrics that you want to send to the remote endpoint. 2 Substitute the label configuration for the metrics sent to the remote write endpoint. The following sample shows how to forward a metric with the cluster ID label cluster_id : apiVersion: v1 kind: ConfigMap metadata: name: user-workload-monitoring-config namespace: openshift-user-workload-monitoring data: config.yaml: | prometheus: remoteWrite: - url: "https://remote-write-endpoint.example.com" writeRelabelConfigs: - sourceLabels: - __tmp_openshift_cluster_id__ 1 targetLabel: cluster_id 2 action: replace 3 1 The system initially applies a temporary cluster ID source label named __tmp_openshift_cluster_id__ . This temporary label gets replaced by the cluster ID label name that you specify. 2 Specify the name of the cluster ID label for metrics sent to remote write storage. If you use a label name that already exists for a metric, that value is overwritten with the name of this cluster ID label. For the label name, do not use __tmp_openshift_cluster_id__ . The final relabeling step removes labels that use this name. 3 The replace write relabel action replaces the temporary label with the target label for outgoing metrics. This action is the default and is applied if no action is specified. Save the file to apply the changes. The new configuration is applied automatically. Additional resources Adding cluster ID labels to metrics Obtaining your cluster ID 4.4.3. Setting up metrics collection for user-defined projects You can create a ServiceMonitor resource to scrape metrics from a service endpoint in a user-defined project. This assumes that your application uses a Prometheus client library to expose metrics to the /metrics canonical name. This section describes how to deploy a sample service in a user-defined project and then create a ServiceMonitor resource that defines how that service should be monitored. 4.4.3.1. Deploying a sample service To test monitoring of a service in a user-defined project, you can deploy a sample service. Prerequisites You have access to the cluster as a user with the cluster-admin cluster role or as a user with administrative permissions for the namespace. Procedure Create a YAML file for the service configuration. In this example, it is called prometheus-example-app.yaml . 
Add the following deployment and service configuration details to the file: apiVersion: v1 kind: Namespace metadata: name: ns1 --- apiVersion: apps/v1 kind: Deployment metadata: labels: app: prometheus-example-app name: prometheus-example-app namespace: ns1 spec: replicas: 1 selector: matchLabels: app: prometheus-example-app template: metadata: labels: app: prometheus-example-app spec: containers: - image: ghcr.io/rhobs/prometheus-example-app:0.4.2 imagePullPolicy: IfNotPresent name: prometheus-example-app --- apiVersion: v1 kind: Service metadata: labels: app: prometheus-example-app name: prometheus-example-app namespace: ns1 spec: ports: - port: 8080 protocol: TCP targetPort: 8080 name: web selector: app: prometheus-example-app type: ClusterIP This configuration deploys a service named prometheus-example-app in the user-defined ns1 project. This service exposes the custom version metric. Apply the configuration to the cluster: USD oc apply -f prometheus-example-app.yaml It takes some time to deploy the service. You can check that the pod is running: USD oc -n ns1 get pod Example output NAME READY STATUS RESTARTS AGE prometheus-example-app-7857545cb7-sbgwq 1/1 Running 0 81m 4.4.3.2. Specifying how a service is monitored To use the metrics exposed by your service, you must configure OpenShift Container Platform monitoring to scrape metrics from the /metrics endpoint. You can do this using a ServiceMonitor custom resource definition (CRD) that specifies how a service should be monitored, or a PodMonitor CRD that specifies how a pod should be monitored. The former requires a Service object, while the latter does not, allowing Prometheus to directly scrape metrics from the metrics endpoint exposed by a pod. This procedure shows you how to create a ServiceMonitor resource for a service in a user-defined project. Prerequisites You have access to the cluster as a user with the cluster-admin cluster role or the monitoring-edit cluster role. You have enabled monitoring for user-defined projects. For this example, you have deployed the prometheus-example-app sample service in the ns1 project. Note The prometheus-example-app sample service does not support TLS authentication. Procedure Create a new YAML configuration file named example-app-service-monitor.yaml . Add a ServiceMonitor resource to the YAML file. The following example creates a service monitor named prometheus-example-monitor to scrape metrics exposed by the prometheus-example-app service in the ns1 namespace: apiVersion: monitoring.coreos.com/v1 kind: ServiceMonitor metadata: name: prometheus-example-monitor namespace: ns1 1 spec: endpoints: - interval: 30s port: web 2 scheme: http selector: 3 matchLabels: app: prometheus-example-app 1 Specify a user-defined namespace where your service runs. 2 Specify endpoint ports to be scraped by Prometheus. 3 Configure a selector to match your service based on its metadata labels. Note A ServiceMonitor resource in a user-defined namespace can only discover services in the same namespace. That is, the namespaceSelector field of the ServiceMonitor resource is always ignored. Apply the configuration to the cluster: USD oc apply -f example-app-service-monitor.yaml It takes some time to deploy the ServiceMonitor resource. Verify that the ServiceMonitor resource is running: USD oc -n <namespace> get servicemonitor Example output NAME AGE prometheus-example-monitor 81m
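The procedure above uses a ServiceMonitor resource because the sample application is exposed through a Service. For pods that are not backed by a Service, a PodMonitor resource serves the same purpose. The following is an illustrative sketch only, not part of the official procedure: the monitor name is hypothetical, it assumes that the container in the sample Deployment declares a port named web (the Deployment above does not, so you would need to add a named container port), and it otherwise follows the PodMonitor API linked under Additional resources.

apiVersion: monitoring.coreos.com/v1
kind: PodMonitor
metadata:
  name: prometheus-example-podmonitor  # hypothetical name
  namespace: ns1  # under user workload monitoring, a PodMonitor only selects pods in its own namespace
spec:
  podMetricsEndpoints:
  - interval: 30s
    port: web  # refers to a named container port, not a Service port
    scheme: http
  selector:
    matchLabels:
      app: prometheus-example-app  # matches the pod labels directly

You would apply this sketch with oc apply -f <file>, in the same way as the ServiceMonitor example.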
4.4.3.3. Example service endpoint authentication settings You can configure authentication for service endpoints for user-defined project monitoring by using ServiceMonitor and PodMonitor custom resource definitions (CRDs). The following samples show different authentication settings for a ServiceMonitor resource. Each sample shows how to configure a corresponding Secret object that contains authentication credentials and other relevant settings. 4.4.3.3.1. Sample YAML authentication with a bearer token The following sample shows bearer token settings for a Secret object named example-bearer-auth in the ns1 namespace: Example bearer token secret apiVersion: v1 kind: Secret metadata: name: example-bearer-auth namespace: ns1 stringData: token: <authentication_token> 1 1 Specify an authentication token. The following sample shows bearer token authentication settings for a ServiceMonitor CRD. The example uses a Secret object named example-bearer-auth : Example bearer token authentication settings apiVersion: monitoring.coreos.com/v1 kind: ServiceMonitor metadata: name: prometheus-example-monitor namespace: ns1 spec: endpoints: - authorization: credentials: key: token 1 name: example-bearer-auth 2 port: web selector: matchLabels: app: prometheus-example-app 1 The key that contains the authentication token in the specified Secret object. 2 The name of the Secret object that contains the authentication credentials. Important Do not use bearerTokenFile to configure a bearer token. If you use the bearerTokenFile configuration, the ServiceMonitor resource is rejected. 4.4.3.3.2. Sample YAML for Basic authentication The following sample shows Basic authentication settings for a Secret object named example-basic-auth in the ns1 namespace: Example Basic authentication secret apiVersion: v1 kind: Secret metadata: name: example-basic-auth namespace: ns1 stringData: user: <basic_username> 1 password: <basic_password> 2 1 Specify a username for authentication. 2 Specify a password for authentication. The following sample shows Basic authentication settings for a ServiceMonitor CRD. The example uses a Secret object named example-basic-auth : Example Basic authentication settings apiVersion: monitoring.coreos.com/v1 kind: ServiceMonitor metadata: name: prometheus-example-monitor namespace: ns1 spec: endpoints: - basicAuth: username: key: user 1 name: example-basic-auth 2 password: key: password 3 name: example-basic-auth 4 port: web selector: matchLabels: app: prometheus-example-app 1 The key that contains the username in the specified Secret object. 2 4 The name of the Secret object that contains the Basic authentication credentials. 3 The key that contains the password in the specified Secret object. 4.4.3.3.3. Sample YAML authentication with OAuth 2.0 The following sample shows OAuth 2.0 settings for a Secret object named example-oauth2 in the ns1 namespace: Example OAuth 2.0 secret apiVersion: v1 kind: Secret metadata: name: example-oauth2 namespace: ns1 stringData: id: <oauth2_id> 1 secret: <oauth2_secret> 2 1 Specify an OAuth 2.0 ID. 2 Specify an OAuth 2.0 secret. The following sample shows OAuth 2.0 authentication settings for a ServiceMonitor CRD.
The example uses a Secret object named example-oauth2 : Example OAuth 2.0 authentication settings apiVersion: monitoring.coreos.com/v1 kind: ServiceMonitor metadata: name: prometheus-example-monitor namespace: ns1 spec: endpoints: - oauth2: clientId: secret: key: id 1 name: example-oauth2 2 clientSecret: key: secret 3 name: example-oauth2 4 tokenUrl: https://example.com/oauth2/token 5 port: web selector: matchLabels: app: prometheus-example-app 1 The key that contains the OAuth 2.0 ID in the specified Secret object. 2 4 The name of the Secret object that contains the OAuth 2.0 credentials. 3 The key that contains the OAuth 2.0 secret in the specified Secret object. 5 The URL used to fetch a token with the specified clientId and clientSecret . Additional resources Enabling monitoring for user-defined projects Scrape Prometheus metrics using TLS in ServiceMonitor configuration (Red Hat Customer Portal article) PodMonitor API ServiceMonitor API 4.5. Configuring alerts and notifications for user workload monitoring You can configure a local or external Alertmanager instance to route alerts from Prometheus to endpoint receivers. You can also attach custom labels to all time series and alerts to add useful metadata information. 4.5.1. Configuring external Alertmanager instances The OpenShift Container Platform monitoring stack includes a local Alertmanager instance that routes alerts from Prometheus. You can add external Alertmanager instances to route alerts for user-defined projects. If you add the same external Alertmanager configuration for multiple clusters and disable the local instance for each cluster, you can then manage alert routing for multiple clusters by using a single external Alertmanager instance. Prerequisites You have access to the cluster as a user with the cluster-admin cluster role or as a user with the user-workload-monitoring-config-edit role in the openshift-user-workload-monitoring project. A cluster administrator has enabled monitoring for user-defined projects. You have installed the OpenShift CLI ( oc ). Procedure Edit the user-workload-monitoring-config config map in the openshift-user-workload-monitoring project: USD oc -n openshift-user-workload-monitoring edit configmap user-workload-monitoring-config Add an additionalAlertmanagerConfigs section with configuration details under data/config.yaml/<component> : apiVersion: v1 kind: ConfigMap metadata: name: user-workload-monitoring-config namespace: openshift-user-workload-monitoring data: config.yaml: | <component>: 1 additionalAlertmanagerConfigs: - <alertmanager_specification> 2 2 Substitute <alertmanager_specification> with authentication and other configuration details for additional Alertmanager instances. Currently supported authentication methods are bearer token ( bearerToken ) and client TLS ( tlsConfig ). 1 Substitute <component> for one of two supported external Alertmanager components: prometheus or thanosRuler . 
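For example, the following is a minimal sketch that attaches the Prometheus component to a single external Alertmanager without authentication; the host name is a placeholder assumption rather than a value defined in this procedure:

apiVersion: v1
kind: ConfigMap
metadata:
  name: user-workload-monitoring-config
  namespace: openshift-user-workload-monitoring
data:
  config.yaml: |
    prometheus:
      additionalAlertmanagerConfigs:
      - scheme: https
        apiVersion: v1  # Alertmanager API version, as in the sample below
        timeout: "30s"
        staticConfigs:  # host names of the external Alertmanager instances
        - external-alertmanager.example.com

When the external Alertmanager requires authentication, add bearerToken or tlsConfig settings as shown in the next sample.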
The following sample config map configures an additional Alertmanager for Thanos Ruler by using a bearer token with client TLS authentication: apiVersion: v1 kind: ConfigMap metadata: name: user-workload-monitoring-config namespace: openshift-user-workload-monitoring data: config.yaml: | thanosRuler: additionalAlertmanagerConfigs: - scheme: https pathPrefix: / timeout: "30s" apiVersion: v1 bearerToken: name: alertmanager-bearer-token key: token tlsConfig: key: name: alertmanager-tls key: tls.key cert: name: alertmanager-tls key: tls.crt ca: name: alertmanager-tls key: tls.ca staticConfigs: - external-alertmanager1-remote.com - external-alertmanager1-remote2.com Save the file to apply the changes. The pods affected by the new configuration are automatically redeployed. 4.5.2. Configuring secrets for Alertmanager The OpenShift Container Platform monitoring stack includes Alertmanager, which routes alerts from Prometheus to endpoint receivers. If you need to authenticate with a receiver so that Alertmanager can send alerts to it, you can configure Alertmanager to use a secret that contains authentication credentials for the receiver. For example, you can configure Alertmanager to use a secret to authenticate with an endpoint receiver that requires a certificate issued by a private Certificate Authority (CA). You can also configure Alertmanager to use a secret to authenticate with a receiver that requires a password file for Basic HTTP authentication. In either case, authentication details are contained in the Secret object rather than in the ConfigMap object. 4.5.2.1. Adding a secret to the Alertmanager configuration You can add secrets to the Alertmanager configuration by editing the user-workload-monitoring-config config map in the openshift-user-workload-monitoring project. After you add a secret to the config map, the secret is mounted as a volume at /etc/alertmanager/secrets/<secret_name> within the alertmanager container for the Alertmanager pods. Prerequisites You have access to the cluster as a user with the cluster-admin cluster role or as a user with the user-workload-monitoring-config-edit role in the openshift-user-workload-monitoring project. A cluster administrator has enabled monitoring for user-defined projects. You have created the secret to be configured in Alertmanager in the openshift-user-workload-monitoring project. You have installed the OpenShift CLI ( oc ). Procedure Edit the user-workload-monitoring-config config map in the openshift-user-workload-monitoring project: USD oc -n openshift-user-workload-monitoring edit configmap user-workload-monitoring-config Add a secrets: section under data/config.yaml/alertmanager with the following configuration: apiVersion: v1 kind: ConfigMap metadata: name: user-workload-monitoring-config namespace: openshift-user-workload-monitoring data: config.yaml: | alertmanager: secrets: 1 - <secret_name_1> 2 - <secret_name_2> 1 This section contains the secrets to be mounted into Alertmanager. The secrets must be located within the same namespace as the Alertmanager object. 2 The name of the Secret object that contains authentication credentials for the receiver. If you add multiple secrets, place each one on a new line. 
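After the config map is saved, the keys of each listed secret become files under /etc/alertmanager/secrets/<secret_name> that you can reference by path when you later edit the Alertmanager configuration (see "Configuring alert routing for user-defined projects with the Alertmanager secret" later in this section). The following receiver fragment is an illustrative sketch only; the receiver name, webhook URL, secret name, and key are assumptions that are not defined elsewhere in this procedure:

receivers:
- name: webhook-receiver  # hypothetical receiver name
  webhook_configs:
  - url: https://webhook.example.com/alerts
    http_config:
      tls_config:
        # CA certificate mounted from the secret at /etc/alertmanager/secrets/<secret_name>/<key>
        ca_file: /etc/alertmanager/secrets/example-receiver-ca/ca.crt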
The following sample config map settings configure Alertmanager to use two Secret objects named test-secret-basic-auth and test-secret-api-token : apiVersion: v1 kind: ConfigMap metadata: name: user-workload-monitoring-config namespace: openshift-user-workload-monitoring data: config.yaml: | alertmanager: secrets: - test-secret-basic-auth - test-secret-api-token Save the file to apply the changes. The new configuration is applied automatically. 4.5.3. Attaching additional labels to your time series and alerts You can attach custom labels to all time series and alerts leaving Prometheus by using the external labels feature of Prometheus. Prerequisites You have access to the cluster as a user with the cluster-admin cluster role or as a user with the user-workload-monitoring-config-edit role in the openshift-user-workload-monitoring project. A cluster administrator has enabled monitoring for user-defined projects. You have installed the OpenShift CLI ( oc ). Procedure Edit the user-workload-monitoring-config config map in the openshift-user-workload-monitoring project: USD oc -n openshift-user-workload-monitoring edit configmap user-workload-monitoring-config Define labels you want to add for every metric under data/config.yaml : apiVersion: v1 kind: ConfigMap metadata: name: user-workload-monitoring-config namespace: openshift-user-workload-monitoring data: config.yaml: | prometheus: externalLabels: <key>: <value> 1 1 Substitute <key>: <value> with key-value pairs where <key> is a unique name for the new label and <value> is its value. Warning Do not use prometheus or prometheus_replica as key names, because they are reserved and will be overwritten. Do not use cluster or managed_cluster as key names. Using them can cause issues where you are unable to see data in the developer dashboards. Note In the openshift-user-workload-monitoring project, Prometheus handles metrics and Thanos Ruler handles alerting and recording rules. Setting externalLabels for prometheus in the user-workload-monitoring-config ConfigMap object will only configure external labels for metrics and not for any rules. For example, to add metadata about the region and environment to all time series and alerts, use the following example: apiVersion: v1 kind: ConfigMap metadata: name: user-workload-monitoring-config namespace: openshift-user-workload-monitoring data: config.yaml: | prometheus: externalLabels: region: eu environment: prod Save the file to apply the changes. The pods affected by the new configuration are automatically redeployed. Additional resources Enabling monitoring for user-defined projects 4.5.4. Configuring alert notifications In OpenShift Container Platform, an administrator can enable alert routing for user-defined projects with one of the following methods: Use the default platform Alertmanager instance. Use a separate Alertmanager instance only for user-defined projects. Developers and other users with the alert-routing-edit cluster role can configure custom alert notifications for their user-defined projects by configuring alert receivers. Note Review the following limitations of alert routing for user-defined projects: User-defined alert routing is scoped to the namespace in which the resource is defined. For example, a routing configuration in namespace ns1 only applies to PrometheusRules resources in the same namespace. When a namespace is excluded from user-defined monitoring, AlertmanagerConfig resources in the namespace cease to be part of the Alertmanager configuration. 
Additional resources Understanding alert routing for user-defined projects Sending notifications to external systems PagerDuty (PagerDuty official site) Prometheus Integration Guide (PagerDuty official site) Support version matrix for monitoring components Enabling alert routing for user-defined projects 4.5.4.1. Configuring alert routing for user-defined projects If you are a non-administrator user who has been given the alert-routing-edit cluster role, you can create or edit alert routing for user-defined projects. Prerequisites A cluster administrator has enabled monitoring for user-defined projects. A cluster administrator has enabled alert routing for user-defined projects. You are logged in as a user that has the alert-routing-edit cluster role for the project for which you want to create alert routing. You have installed the OpenShift CLI ( oc ). Procedure Create a YAML file for alert routing. The example in this procedure uses a file called example-app-alert-routing.yaml . Add an AlertmanagerConfig YAML definition to the file. For example: apiVersion: monitoring.coreos.com/v1beta1 kind: AlertmanagerConfig metadata: name: example-routing namespace: ns1 spec: route: receiver: default groupBy: [job] receivers: - name: default webhookConfigs: - url: https://example.org/post Save the file. Apply the resource to the cluster: USD oc apply -f example-app-alert-routing.yaml The configuration is automatically applied to the Alertmanager pods. 4.5.4.2. Configuring alert routing for user-defined projects with the Alertmanager secret If you have enabled a separate instance of Alertmanager that is dedicated to user-defined alert routing, you can customize where and how the instance sends notifications by editing the alertmanager-user-workload secret in the openshift-user-workload-monitoring namespace. Note All features of a supported version of upstream Alertmanager are also supported in an OpenShift Container Platform Alertmanager configuration. To check all the configuration options of a supported version of upstream Alertmanager, see Alertmanager configuration (Prometheus documentation). Prerequisites You have access to the cluster as a user with the cluster-admin cluster role. You have enabled a separate instance of Alertmanager for user-defined alert routing. You have installed the OpenShift CLI ( oc ). Procedure Print the currently active Alertmanager configuration into the file alertmanager.yaml : USD oc -n openshift-user-workload-monitoring get secret alertmanager-user-workload --template='{{ index .data "alertmanager.yaml" }}' | base64 --decode > alertmanager.yaml Edit the configuration in alertmanager.yaml : route: receiver: Default group_by: - name: Default routes: - matchers: - "service = prometheus-example-monitor" 1 receiver: <receiver> 2 receivers: - name: Default - name: <receiver> <receiver_configuration> 3 1 Specify labels to match your alerts. This example targets all alerts that have the service="prometheus-example-monitor" label. 2 Specify the name of the receiver to use for the alerts group. 3 Specify the receiver configuration. Apply the new configuration in the file: USD oc -n openshift-user-workload-monitoring create secret generic alertmanager-user-workload --from-file=alertmanager.yaml --dry-run=client -o=yaml | oc -n openshift-user-workload-monitoring replace secret --filename=- 4.5.4.3. 
Configuring different alert receivers for default platform alerts and user-defined alerts You can configure different alert receivers for default platform alerts and user-defined alerts to ensure the following results: All default platform alerts are sent to a receiver owned by the team in charge of these alerts. All user-defined alerts are sent to another receiver so that the team can focus only on platform alerts. You can achieve this by using the openshift_io_alert_source="platform" label that is added by the Cluster Monitoring Operator to all platform alerts: Use the openshift_io_alert_source="platform" matcher to match default platform alerts. Use the openshift_io_alert_source!="platform" or 'openshift_io_alert_source=""' matcher to match user-defined alerts. Note This configuration does not apply if you have enabled a separate instance of Alertmanager dedicated to user-defined alerts. | [
"oc -n openshift-monitoring edit configmap cluster-monitoring-config",
"apiVersion: v1 kind: ConfigMap metadata: name: cluster-monitoring-config namespace: openshift-monitoring data: config.yaml: | enableUserWorkload: true 1",
"oc -n openshift-user-workload-monitoring get pod",
"NAME READY STATUS RESTARTS AGE prometheus-operator-6f7b748d5b-t7nbg 2/2 Running 0 3h prometheus-user-workload-0 4/4 Running 1 3h prometheus-user-workload-1 4/4 Running 1 3h thanos-ruler-user-workload-0 3/3 Running 0 3h thanos-ruler-user-workload-1 3/3 Running 0 3h",
"oc -n openshift-user-workload-monitoring adm policy add-role-to-user user-workload-monitoring-config-edit <user> --role-namespace openshift-user-workload-monitoring",
"oc describe rolebinding <role_binding_name> -n openshift-user-workload-monitoring",
"oc describe rolebinding user-workload-monitoring-config-edit -n openshift-user-workload-monitoring",
"Name: user-workload-monitoring-config-edit Labels: <none> Annotations: <none> Role: Kind: Role Name: user-workload-monitoring-config-edit Subjects: Kind Name Namespace ---- ---- --------- User user1 1",
"oc -n openshift-monitoring edit configmap cluster-monitoring-config",
"apiVersion: v1 kind: ConfigMap metadata: name: cluster-monitoring-config namespace: openshift-monitoring data: config.yaml: | # alertmanagerMain: enableUserAlertmanagerConfig: true 1 #",
"oc -n openshift-user-workload-monitoring edit configmap user-workload-monitoring-config",
"apiVersion: v1 kind: ConfigMap metadata: name: user-workload-monitoring-config namespace: openshift-user-workload-monitoring data: config.yaml: | alertmanager: enabled: true 1 enableAlertmanagerConfig: true 2",
"oc -n openshift-user-workload-monitoring get alertmanager",
"NAME VERSION REPLICAS AGE user-workload 0.24.0 2 100s",
"oc -n <namespace> adm policy add-role-to-user alert-routing-edit <user> 1",
"oc adm policy add-role-to-user <role> <user> -n <namespace> --role-namespace <namespace> 1",
"oc adm policy add-cluster-role-to-user <cluster-role> <user> -n <namespace> 1",
"oc label namespace my-project 'openshift.io/user-monitoring=false'",
"oc label namespace my-project 'openshift.io/user-monitoring-'",
"oc -n openshift-monitoring edit configmap cluster-monitoring-config",
"apiVersion: v1 kind: ConfigMap metadata: name: cluster-monitoring-config namespace: openshift-monitoring data: config.yaml: | enableUserWorkload: false",
"oc -n openshift-user-workload-monitoring get pod",
"No resources found in openshift-user-workload-monitoring project.",
"oc label nodes <node_name> <node_label> 1",
"oc -n openshift-user-workload-monitoring edit configmap user-workload-monitoring-config",
"apiVersion: v1 kind: ConfigMap metadata: name: user-workload-monitoring-config namespace: openshift-user-workload-monitoring data: config.yaml: | # <component>: 1 nodeSelector: <node_label_1> 2 <node_label_2> 3 #",
"oc -n openshift-user-workload-monitoring edit configmap user-workload-monitoring-config",
"apiVersion: v1 kind: ConfigMap metadata: name: user-workload-monitoring-config namespace: openshift-user-workload-monitoring data: config.yaml: | <component>: tolerations: <toleration_specification>",
"apiVersion: v1 kind: ConfigMap metadata: name: user-workload-monitoring-config namespace: openshift-user-workload-monitoring data: config.yaml: | thanosRuler: tolerations: - key: \"key1\" operator: \"Equal\" value: \"value1\" effect: \"NoSchedule\"",
"oc -n openshift-user-workload-monitoring edit configmap user-workload-monitoring-config",
"apiVersion: v1 kind: ConfigMap metadata: name: user-workload-monitoring-config namespace: openshift-user-workload-monitoring data: config.yaml: | alertmanager: resources: limits: cpu: 500m memory: 1Gi requests: cpu: 200m memory: 500Mi prometheus: resources: limits: cpu: 500m memory: 3Gi requests: cpu: 200m memory: 500Mi thanosRuler: resources: limits: cpu: 500m memory: 1Gi requests: cpu: 200m memory: 500Mi",
"oc -n openshift-user-workload-monitoring edit configmap user-workload-monitoring-config",
"apiVersion: v1 kind: ConfigMap metadata: name: user-workload-monitoring-config namespace: openshift-user-workload-monitoring data: config.yaml: | prometheus: enforcedSampleLimit: 50000 1",
"apiVersion: v1 kind: ConfigMap metadata: name: user-workload-monitoring-config namespace: openshift-user-workload-monitoring data: config.yaml: | prometheus: enforcedLabelLimit: 500 1 enforcedLabelNameLengthLimit: 50 2 enforcedLabelValueLengthLimit: 600 3",
"apiVersion: monitoring.coreos.com/v1 kind: PrometheusRule metadata: labels: prometheus: k8s role: alert-rules name: monitoring-stack-alerts 1 namespace: ns1 2 spec: groups: - name: general.rules rules: - alert: TargetDown 3 annotations: message: '{{ printf \"%.4g\" USDvalue }}% of the {{ USDlabels.job }}/{{ USDlabels.service }} targets in {{ USDlabels.namespace }} namespace are down.' 4 expr: 100 * (count(up == 0) BY (job, namespace, service) / count(up) BY (job, namespace, service)) > 10 for: 10m 5 labels: severity: warning 6 - alert: ApproachingEnforcedSamplesLimit 7 annotations: message: '{{ USDlabels.container }} container of the {{ USDlabels.pod }} pod in the {{ USDlabels.namespace }} namespace consumes {{ USDvalue | humanizePercentage }} of the samples limit budget.' 8 expr: scrape_samples_scraped/50000 > 0.8 9 for: 10m 10 labels: severity: warning 11",
"oc apply -f monitoring-stack-alerts.yaml",
"oc -n openshift-user-workload-monitoring edit configmap user-workload-monitoring-config",
"apiVersion: v1 kind: ConfigMap metadata: name: user-workload-monitoring-config namespace: openshift-user-workload-monitoring data: config.yaml: | <component>: 1 topologySpreadConstraints: - maxSkew: <n> 2 topologyKey: <key> 3 whenUnsatisfiable: <value> 4 labelSelector: 5 <match_option>",
"apiVersion: v1 kind: ConfigMap metadata: name: user-workload-monitoring-config namespace: openshift-user-workload-monitoring data: config.yaml: | thanosRuler: topologySpreadConstraints: - maxSkew: 1 topologyKey: monitoring whenUnsatisfiable: ScheduleAnyway labelSelector: matchLabels: app.kubernetes.io/name: thanos-ruler",
"oc -n openshift-user-workload-monitoring edit configmap user-workload-monitoring-config",
"apiVersion: v1 kind: ConfigMap metadata: name: user-workload-monitoring-config namespace: openshift-user-workload-monitoring data: config.yaml: | <component>: 1 volumeClaimTemplate: spec: storageClassName: <storage_class> 2 resources: requests: storage: <amount_of_storage> 3",
"apiVersion: v1 kind: ConfigMap metadata: name: user-workload-monitoring-config namespace: openshift-user-workload-monitoring data: config.yaml: | thanosRuler: volumeClaimTemplate: spec: storageClassName: my-storage-class resources: requests: storage: 10Gi",
"oc -n openshift-user-workload-monitoring edit configmap user-workload-monitoring-config",
"apiVersion: v1 kind: ConfigMap metadata: name: user-workload-monitoring-config namespace: openshift-user-workload-monitoring data: config.yaml: | <component>: 1 volumeClaimTemplate: spec: resources: requests: storage: <amount_of_storage> 2",
"apiVersion: v1 kind: ConfigMap metadata: name: user-workload-monitoring-config namespace: openshift-user-workload-monitoring data: config.yaml: | thanosRuler: volumeClaimTemplate: spec: resources: requests: storage: 20Gi",
"oc -n openshift-user-workload-monitoring edit configmap user-workload-monitoring-config",
"apiVersion: v1 kind: ConfigMap metadata: name: user-workload-monitoring-config namespace: openshift-user-workload-monitoring data: config.yaml: | prometheus: retention: <time_specification> 1 retentionSize: <size_specification> 2",
"apiVersion: v1 kind: ConfigMap metadata: name: user-workload-monitoring-config namespace: openshift-user-workload-monitoring data: config.yaml: | prometheus: retention: 24h retentionSize: 10GB",
"oc -n openshift-user-workload-monitoring edit configmap user-workload-monitoring-config",
"apiVersion: v1 kind: ConfigMap metadata: name: user-workload-monitoring-config namespace: openshift-user-workload-monitoring data: config.yaml: | thanosRuler: retention: <time_specification> 1",
"apiVersion: v1 kind: ConfigMap metadata: name: user-workload-monitoring-config namespace: openshift-user-workload-monitoring data: config.yaml: | thanosRuler: retention: 10d",
"oc -n openshift-user-workload-monitoring edit configmap user-workload-monitoring-config",
"apiVersion: v1 kind: ConfigMap metadata: name: user-workload-monitoring-config namespace: openshift-user-workload-monitoring data: config.yaml: | <component>: 1 logLevel: <log_level> 2",
"oc -n openshift-user-workload-monitoring get deploy prometheus-operator -o yaml | grep \"log-level\"",
"- --log-level=debug",
"oc -n openshift-user-workload-monitoring get pods",
"oc -n openshift-user-workload-monitoring edit configmap user-workload-monitoring-config",
"apiVersion: v1 kind: ConfigMap metadata: name: user-workload-monitoring-config namespace: openshift-user-workload-monitoring data: config.yaml: | prometheus: queryLogFile: <path> 1",
"oc -n openshift-user-workload-monitoring get pods",
"prometheus-operator-776fcbbd56-2nbfm 2/2 Running 0 132m prometheus-user-workload-0 5/5 Running 1 132m prometheus-user-workload-1 5/5 Running 1 132m thanos-ruler-user-workload-0 3/3 Running 0 132m thanos-ruler-user-workload-1 3/3 Running 0 132m",
"oc -n openshift-user-workload-monitoring exec prometheus-user-workload-0 -- cat <path>",
"oc -n openshift-user-workload-monitoring edit configmap user-workload-monitoring-config",
"apiVersion: v1 kind: ConfigMap metadata: name: user-workload-monitoring-config namespace: openshift-user-workload-monitoring data: config.yaml: | prometheus: remoteWrite: - url: \"https://remote-write-endpoint.example.com\" 1 <endpoint_authentication_credentials> 2",
"apiVersion: v1 kind: ConfigMap metadata: name: user-workload-monitoring-config namespace: openshift-user-workload-monitoring data: config.yaml: | prometheus: remoteWrite: - url: \"https://remote-write-endpoint.example.com\" <endpoint_authentication_credentials> writeRelabelConfigs: - <your_write_relabel_configs> 1",
"apiVersion: v1 kind: ConfigMap metadata: name: user-workload-monitoring-config namespace: openshift-user-workload-monitoring data: config.yaml: | prometheus: remoteWrite: - url: \"https://remote-write-endpoint.example.com\" writeRelabelConfigs: - sourceLabels: [__name__] regex: 'my_metric' action: keep",
"apiVersion: v1 kind: ConfigMap metadata: name: user-workload-monitoring-config namespace: openshift-user-workload-monitoring data: config.yaml: | prometheus: remoteWrite: - url: \"https://remote-write-endpoint.example.com\" writeRelabelConfigs: - sourceLabels: [__name__,namespace] regex: '(my_metric_1|my_metric_2);my_namespace' action: keep",
"apiVersion: v1 kind: Secret metadata: name: sigv4-credentials namespace: openshift-user-workload-monitoring stringData: accessKey: <AWS_access_key> 1 secretKey: <AWS_secret_key> 2 type: Opaque",
"apiVersion: v1 kind: ConfigMap metadata: name: user-workload-monitoring-config namespace: openshift-user-workload-monitoring data: config.yaml: | prometheus: remoteWrite: - url: \"https://authorization.example.com/api/write\" sigv4: region: <AWS_region> 1 accessKey: name: sigv4-credentials 2 key: accessKey 3 secretKey: name: sigv4-credentials 4 key: secretKey 5 profile: <AWS_profile_name> 6 roleArn: <AWS_role_arn> 7",
"apiVersion: v1 kind: Secret metadata: name: rw-basic-auth namespace: openshift-user-workload-monitoring stringData: user: <basic_username> 1 password: <basic_password> 2 type: Opaque",
"apiVersion: v1 kind: ConfigMap metadata: name: user-workload-monitoring-config namespace: openshift-user-workload-monitoring data: config.yaml: | prometheus: remoteWrite: - url: \"https://basicauth.example.com/api/write\" basicAuth: username: name: rw-basic-auth 1 key: user 2 password: name: rw-basic-auth 3 key: password 4",
"apiVersion: v1 kind: Secret metadata: name: rw-bearer-auth namespace: openshift-user-workload-monitoring stringData: token: <authentication_token> 1 type: Opaque",
"apiVersion: v1 kind: ConfigMap metadata: name: user-workload-monitoring-config namespace: openshift-user-workload-monitoring data: config.yaml: | enableUserWorkload: true prometheus: remoteWrite: - url: \"https://authorization.example.com/api/write\" authorization: type: Bearer 1 credentials: name: rw-bearer-auth 2 key: token 3",
"apiVersion: v1 kind: Secret metadata: name: oauth2-credentials namespace: openshift-user-workload-monitoring stringData: id: <oauth2_id> 1 secret: <oauth2_secret> 2 type: Opaque",
"apiVersion: v1 kind: ConfigMap metadata: name: user-workload-monitoring-config namespace: openshift-user-workload-monitoring data: config.yaml: | prometheus: remoteWrite: - url: \"https://test.example.com/api/write\" oauth2: clientId: secret: name: oauth2-credentials 1 key: id 2 clientSecret: name: oauth2-credentials 3 key: secret 4 tokenUrl: https://example.com/oauth2/token 5 scopes: 6 - <scope_1> - <scope_2> endpointParams: 7 param1: <parameter_1> param2: <parameter_2>",
"apiVersion: v1 kind: Secret metadata: name: mtls-bundle namespace: openshift-user-workload-monitoring data: ca.crt: <ca_cert> 1 client.crt: <client_cert> 2 client.key: <client_key> 3 type: tls",
"apiVersion: v1 kind: ConfigMap metadata: name: user-workload-monitoring-config namespace: openshift-user-workload-monitoring data: config.yaml: | prometheus: remoteWrite: - url: \"https://remote-write-endpoint.example.com\" tlsConfig: ca: secret: name: mtls-bundle 1 key: ca.crt 2 cert: secret: name: mtls-bundle 3 key: client.crt 4 keySecret: name: mtls-bundle 5 key: client.key 6",
"apiVersion: v1 kind: ConfigMap metadata: name: user-workload-monitoring-config namespace: openshift-user-workload-monitoring data: config.yaml: | prometheus: remoteWrite: - url: \"https://remote-write-endpoint.example.com\" <endpoint_authentication_credentials> queueConfig: capacity: 10000 1 minShards: 1 2 maxShards: 50 3 maxSamplesPerSend: 2000 4 batchSendDeadline: 5s 5 minBackoff: 30ms 6 maxBackoff: 5s 7 retryOnRateLimit: false 8",
"oc -n openshift-user-workload-monitoring edit configmap user-workload-monitoring-config",
"apiVersion: v1 kind: ConfigMap metadata: name: user-workload-monitoring-config namespace: openshift-user-workload-monitoring data: config.yaml: | prometheus: remoteWrite: - url: \"https://remote-write-endpoint.example.com\" <endpoint_authentication_credentials> writeRelabelConfigs: 1 - <relabel_config> 2",
"apiVersion: v1 kind: ConfigMap metadata: name: user-workload-monitoring-config namespace: openshift-user-workload-monitoring data: config.yaml: | prometheus: remoteWrite: - url: \"https://remote-write-endpoint.example.com\" writeRelabelConfigs: - sourceLabels: - __tmp_openshift_cluster_id__ 1 targetLabel: cluster_id 2 action: replace 3",
"apiVersion: v1 kind: Namespace metadata: name: ns1 --- apiVersion: apps/v1 kind: Deployment metadata: labels: app: prometheus-example-app name: prometheus-example-app namespace: ns1 spec: replicas: 1 selector: matchLabels: app: prometheus-example-app template: metadata: labels: app: prometheus-example-app spec: containers: - image: ghcr.io/rhobs/prometheus-example-app:0.4.2 imagePullPolicy: IfNotPresent name: prometheus-example-app --- apiVersion: v1 kind: Service metadata: labels: app: prometheus-example-app name: prometheus-example-app namespace: ns1 spec: ports: - port: 8080 protocol: TCP targetPort: 8080 name: web selector: app: prometheus-example-app type: ClusterIP",
"oc apply -f prometheus-example-app.yaml",
"oc -n ns1 get pod",
"NAME READY STATUS RESTARTS AGE prometheus-example-app-7857545cb7-sbgwq 1/1 Running 0 81m",
"apiVersion: monitoring.coreos.com/v1 kind: ServiceMonitor metadata: name: prometheus-example-monitor namespace: ns1 1 spec: endpoints: - interval: 30s port: web 2 scheme: http selector: 3 matchLabels: app: prometheus-example-app",
"oc apply -f example-app-service-monitor.yaml",
"oc -n <namespace> get servicemonitor",
"NAME AGE prometheus-example-monitor 81m",
"apiVersion: v1 kind: Secret metadata: name: example-bearer-auth namespace: ns1 stringData: token: <authentication_token> 1",
"apiVersion: monitoring.coreos.com/v1 kind: ServiceMonitor metadata: name: prometheus-example-monitor namespace: ns1 spec: endpoints: - authorization: credentials: key: token 1 name: example-bearer-auth 2 port: web selector: matchLabels: app: prometheus-example-app",
"apiVersion: v1 kind: Secret metadata: name: example-basic-auth namespace: ns1 stringData: user: <basic_username> 1 password: <basic_password> 2",
"apiVersion: monitoring.coreos.com/v1 kind: ServiceMonitor metadata: name: prometheus-example-monitor namespace: ns1 spec: endpoints: - basicAuth: username: key: user 1 name: example-basic-auth 2 password: key: password 3 name: example-basic-auth 4 port: web selector: matchLabels: app: prometheus-example-app",
"apiVersion: v1 kind: Secret metadata: name: example-oauth2 namespace: ns1 stringData: id: <oauth2_id> 1 secret: <oauth2_secret> 2",
"apiVersion: monitoring.coreos.com/v1 kind: ServiceMonitor metadata: name: prometheus-example-monitor namespace: ns1 spec: endpoints: - oauth2: clientId: secret: key: id 1 name: example-oauth2 2 clientSecret: key: secret 3 name: example-oauth2 4 tokenUrl: https://example.com/oauth2/token 5 port: web selector: matchLabels: app: prometheus-example-app",
"oc -n openshift-user-workload-monitoring edit configmap user-workload-monitoring-config",
"apiVersion: v1 kind: ConfigMap metadata: name: user-workload-monitoring-config namespace: openshift-user-workload-monitoring data: config.yaml: | <component>: 1 additionalAlertmanagerConfigs: - <alertmanager_specification> 2",
"apiVersion: v1 kind: ConfigMap metadata: name: user-workload-monitoring-config namespace: openshift-user-workload-monitoring data: config.yaml: | thanosRuler: additionalAlertmanagerConfigs: - scheme: https pathPrefix: / timeout: \"30s\" apiVersion: v1 bearerToken: name: alertmanager-bearer-token key: token tlsConfig: key: name: alertmanager-tls key: tls.key cert: name: alertmanager-tls key: tls.crt ca: name: alertmanager-tls key: tls.ca staticConfigs: - external-alertmanager1-remote.com - external-alertmanager1-remote2.com",
"oc -n openshift-user-workload-monitoring edit configmap user-workload-monitoring-config",
"apiVersion: v1 kind: ConfigMap metadata: name: user-workload-monitoring-config namespace: openshift-user-workload-monitoring data: config.yaml: | alertmanager: secrets: 1 - <secret_name_1> 2 - <secret_name_2>",
"apiVersion: v1 kind: ConfigMap metadata: name: user-workload-monitoring-config namespace: openshift-user-workload-monitoring data: config.yaml: | alertmanager: secrets: - test-secret-basic-auth - test-secret-api-token",
"oc -n openshift-user-workload-monitoring edit configmap user-workload-monitoring-config",
"apiVersion: v1 kind: ConfigMap metadata: name: user-workload-monitoring-config namespace: openshift-user-workload-monitoring data: config.yaml: | prometheus: externalLabels: <key>: <value> 1",
"apiVersion: v1 kind: ConfigMap metadata: name: user-workload-monitoring-config namespace: openshift-user-workload-monitoring data: config.yaml: | prometheus: externalLabels: region: eu environment: prod",
"apiVersion: monitoring.coreos.com/v1beta1 kind: AlertmanagerConfig metadata: name: example-routing namespace: ns1 spec: route: receiver: default groupBy: [job] receivers: - name: default webhookConfigs: - url: https://example.org/post",
"oc apply -f example-app-alert-routing.yaml",
"oc -n openshift-user-workload-monitoring get secret alertmanager-user-workload --template='{{ index .data \"alertmanager.yaml\" }}' | base64 --decode > alertmanager.yaml",
"route: receiver: Default group_by: - name: Default routes: - matchers: - \"service = prometheus-example-monitor\" 1 receiver: <receiver> 2 receivers: - name: Default - name: <receiver> <receiver_configuration> 3",
"oc -n openshift-user-workload-monitoring create secret generic alertmanager-user-workload --from-file=alertmanager.yaml --dry-run=client -o=yaml | oc -n openshift-user-workload-monitoring replace secret --filename=-"
]
| https://docs.redhat.com/en/documentation/openshift_container_platform/4.14/html/monitoring/configuring-user-workload-monitoring |
Providing feedback on Red Hat build of OpenJDK documentation | Providing feedback on Red Hat build of OpenJDK documentation To report an error or to improve our documentation, log in to your Red Hat Jira account and submit an issue. If you do not have a Red Hat Jira account, then you will be prompted to create an account. Procedure Click the following link to create a ticket . Enter a brief description of the issue in the Summary . Provide a detailed description of the issue or enhancement in the Description . Include a URL to where the issue occurs in the documentation. Clicking Submit creates and routes the issue to the appropriate documentation team. | null | https://docs.redhat.com/en/documentation/red_hat_build_of_openjdk/21/html/release_notes_for_red_hat_build_of_openjdk_21.0.1/providing-direct-documentation-feedback_openjdk |
Chapter 2. Common Vulnerabilities and Exposures (CVEs) | Chapter 2. Common Vulnerabilities and Exposures (CVEs) Common Vulnerabilities and Exposures (CVEs) are security vulnerabilities identified in publicly released software packages. CVEs are identified and listed by the National Cybersecurity FFRDC (NCF), the federally funded research and development center operated by the Mitre Corporation, with funding from the National Cyber Security Division of the United States Department of Homeland Security. The complete list of CVEs is available at https://cve.mitre.org . By highlighting CVEs with publicly known exploits and security rules associated with CVEs, the vulnerability service surfaces enhanced threat intelligence to aid in determining which CVEs pose the greatest potential risk to RHEL environments, enabling our users to effectively prioritize and address their most critical issues first. Important The vulnerability service does not contain every CVE included in the list of entries at https://cve.mitre.org . Only Red Hat CVEs, those CVEs for which Red Hat issues security advisories (RHSAs), are included in the vulnerability service. The vulnerability service identifies CVEs impacting your RHEL systems, indicates the severity and enables you to efficiently triage the exposures that are most critical to resolve. The dashbar will alert you to the following types of CVEs: Known exploits Security rules Critical severity Important severity 2.1. Red Hat Security Advisories (RHSAs) Red Hat Security Advisory (RHSA) errata document security vulnerabilities in Red Hat products for which there are remediations or mitigations available. The Red Hat Insights for Red Hat Enterprise Linux vulnerability service displays the advisory identifier tied to each system exposed to a CVE. View this information by selecting a CVE and selecting the Filter by affected systems link in the security rule card. If an advisory exists for the system, the RHSA ID is visible as a link to the system in the Exposed systems list, Advisory column. When there are no such advisories, the Advisory column is not visible, or will show "Not available." When an advisory exists for a system, users can view more information about the RHSA, including a list of affected systems. In the patch service, users can select systems to create an Ansible Playbook to apply the remediation. 2.2. Security rules Security rules are CVEs given additional visibility due to the elevated risk and exposure associated with them. These are security flaws that may receive significant media coverage and have been scrutinized by the Red Hat Product Security team, using the Product Security Incident Response Plan workflow to help determine your RHEL environment exposure. These security rules enable you to take the appropriate action to protect your organization. Security rules provide deep threat intelligence, beyond analyzing the version of RHEL running on a system. Security rules are manually curated to determine whether you are susceptible to a security threat by analyzing system metadata collected by the Insights client. If the vulnerability service identifies a system as exposed to a security rule, there is the potential for elevated security risk and issues should be addressed with urgency. Important Addressing security rules on exposed systems should be your highest priority. Finally, not all systems exposed to a CVE are also exposed to a security rule associated with that CVE. 
Even though you may be running a vulnerable version of software, other environmental conditions may mitigate the threat; for example, if a specific port is closed or if you are running SELinux. 2.2.1. Identifying security rules in the Insights for RHEL dashboard Use the following steps to view your infrastructure exposure to security rules. Procedure Navigate to the Red Hat Insights for Red Hat Enterprise Linux dashboard . Note For simplicity, panels for services not related to security vulnerability assessment are minimized in the following screenshot. View the Latest critical notifications on your systems panel. These are security rules with an elevated severity rating of "Important" or "Critical." These are potentially your most critical issues and should be prioritized for remediation. To the right of each notification, click the Expand button to see associated CVEs and the number of systems exposed in your infrastructure. Note You may see security rules in your critical notifications but have zero systems exposed. In this case, even though the CVE is present in your infrastructure, the security rule conditions may not exist. Below the name of the security rule, and under Associated CVEs, click the CVE ID link. View which of your systems are impacted by the security rule CVE and optionally select exposed systems to create playbooks. Next, view the information in the vulnerability card. Note the number of "CVEs with security rules impacting systems." This number includes security rules of any severity impacting at least one system. Click View CVEs . Consider lesser-severity security rules your second highest priority for remediation, following high-severity security rules.
An open path could be a port or an OS version that permits one of the following: confidential information to be leaked, the integrity of the system to be compromised, or the availability of the system to be hindered. Let us look at an example of a vulnerable server versus an affected but not vulnerable server: Suppose that Server A is running vulnerable software that allows root access to the system. Server A would be considered vulnerable and require immediate patching. In contrast, suppose that Server B's current configuration prevents the vulnerability from manifesting, even when present in the affected code. Server B would be considered affected but not vulnerable . This would mean that Server B could be relegated to the to-do list, so that the more immediate threat, Server A, could be remediated. Important You should patch Server B once Server A has been addressed since it is running potentially vulnerable code. Version updates and other events could render it vulnerable in the future. 2.4.1. Identifying known-exploit CVEs in the Red Hat Insights for RHEL dashboard Use the following steps to identify known-exploit CVEs in the Insights for Red Hat Enterprise Linux dashboard vulnerability card. Procedure Navigate to the Red Hat Insights for Red Hat Enterprise Linux dashboard . Note For simplicity, panels for services not related to security vulnerability assessment are minimized in the following screenshot. On the Vulnerability card, note the CVEs with Known exploits impacting 1 or more systems and the number displayed . Click View Known exploits . View the filtered list of Known-exploit CVEs in the CVEs list. | null | https://docs.redhat.com/en/documentation/red_hat_insights/1-latest/html/assessing_and_monitoring_security_vulnerabilities_on_rhel_systems/vuln-cves_vuln-overview
3.2. Load Balancer Using Direct Routing | 3.2. Load Balancer Using Direct Routing Direct routing allows real servers to process and route packets directly to a requesting user rather than passing outgoing packets through the LVS router. Direct routing requires that the real servers be physically connected to a network segment with the LVS router and be able to process and direct outgoing packets as well. Network Layout In a direct routing Load Balancer setup, the LVS router needs to receive incoming requests and route them to the proper real server for processing. The real servers then need to directly route the response to the client. So, for example, if the client is on the Internet, and sends the packet through the LVS router to a real server, the real server must be able to connect directly to the client through the Internet. This can be done by configuring a gateway for the real server to pass packets to the Internet. Each real server in the server pool can have its own separate gateway (and each gateway with its own connection to the Internet), allowing for maximum throughput and scalability. For typical Load Balancer setups, however, the real servers can communicate through one gateway (and therefore one network connection). Hardware The hardware requirements of a Load Balancer system using direct routing are similar to those of other Load Balancer topologies. While the LVS router needs to be running Red Hat Enterprise Linux to process the incoming requests and perform load-balancing for the real servers, the real servers do not need to be Linux machines to function correctly. The LVS routers need one or two NICs each (depending on whether there is a backup router). You can use two NICs for ease of configuration and to distinctly separate traffic; incoming requests are handled by one NIC and packets routed to the real servers are handled by the other. Since the real servers bypass the LVS router and send outgoing packets directly to a client, a gateway to the Internet is required. For maximum performance and availability, each real server can be connected to its own separate gateway which has its own dedicated connection to the network to which the client is connected (such as the Internet or an intranet). Software There is some configuration outside of keepalived that needs to be done, especially for administrators facing ARP issues when using Load Balancer by means of direct routing. Refer to Section 3.2.1, "Direct Routing Using arptables" or Section 3.2.3, "Direct Routing Using iptables" for more information. 3.2.1. Direct Routing Using arptables In order to configure direct routing using arptables , each real server must have its virtual IP address configured, so it can directly route packets. ARP requests for the VIP are ignored entirely by the real servers, and any ARP packets that might otherwise be sent containing the VIPs are mangled to contain the real server's IP instead of the VIPs. Using the arptables method, applications may bind to each individual VIP or port that the real server is servicing. For example, the arptables method allows multiple instances of Apache HTTP Server to be running and bound explicitly to different VIPs on the system. However, using the arptables method, VIPs cannot be configured to start on boot using standard Red Hat Enterprise Linux system configuration tools.
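As a concrete sketch of what the procedure that follows produces on a single real server, the commands below assume a hypothetical VIP of 192.168.76.24 bound to eth0 and a hypothetical real server IP of 192.168.76.10; substitute the addresses from your own topology.

# Sketch only: hypothetical addresses (VIP 192.168.76.24, real server IP 192.168.76.10)
arptables -A IN -d 192.168.76.24 -j DROP
arptables -A OUT -s 192.168.76.24 -j mangle --mangle-ip-s 192.168.76.10
arptables-save > /etc/sysconfig/arptables
systemctl enable arptables.service
ip addr add 192.168.76.24 dev eth0

These are the same steps described in the procedure below: the arptables rules silence ARP for the VIP and rewrite outgoing ARP responses to carry the real IP, the saved table and enabled service make the rules persistent across reboots, and the ip addr command adds the VIP as an alias on the real server.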
To configure each real server to ignore ARP requests for each virtual IP address, perform the following steps: Create the ARP table entries for each virtual IP address on each real server (the real_ip is the IP the director uses to communicate with the real server; often this is the IP bound to eth0 ): This will cause the real servers to ignore all ARP requests for the virtual IP addresses, and change any outgoing ARP responses which might otherwise contain the virtual IP so that they contain the real IP of the server instead. The only node that should respond to ARP requests for any of the VIPs is the current active LVS node. Once this has been completed on each real server, save the ARP table entries by typing the following commands on each real server: arptables-save > /etc/sysconfig/arptables systemctl enable arptables.service The systemctl enable command will cause the system to reload the arptables configuration on bootup before the network is started. Configure the virtual IP address on all real servers using ip addr to create an IP alias. For example: Configure Keepalived for Direct Routing. This can be done by adding lb_kind DR to the keepalived.conf file. Refer to Chapter 4, Initial Load Balancer Configuration with Keepalived for more information. 3.2.2. Direct Routing Using firewalld You may also work around the ARP issue using the direct routing method by creating firewall rules using firewalld . To configure direct routing using firewalld , you must add rules that create a transparent proxy so that a real server will service packets sent to the VIP address, even though the VIP address does not exist on the system. The firewalld method is simpler to configure than the arptables method. This method also circumvents the LVS ARP issue entirely, because the virtual IP address or addresses exist only on the active LVS director. However, there are performance issues using the firewalld method compared to arptables , as there is overhead in forwarding every return packet. You also cannot reuse ports using the firewalld method. For example, it is not possible to run two separate Apache HTTP Server services bound to port 80, because both must bind to INADDR_ANY instead of the virtual IP addresses. To configure direct routing using the firewalld method, perform the following steps on every real server: Ensure that firewalld is running. Ensure that firewalld is enabled to start at system start. Enter the following command for every VIP, port, and protocol (TCP or UDP) combination intended to be serviced for the real server. This command will cause the real servers to process packets destined for the VIP and port that they are given. Reload the firewall rules and keep the state information. The current permanent configuration will become the new firewalld runtime configuration as well as the configuration at the system start. 3.2.3. Direct Routing Using iptables You may also work around the ARP issue using the direct routing method by creating iptables firewall rules. To configure direct routing using iptables , you must add rules that create a transparent proxy so that a real server will service packets sent to the VIP address, even though the VIP address does not exist on the system. The iptables method is simpler to configure than the arptables method. This method also circumvents the LVS ARP issue entirely, because the virtual IP address(es) only exist on the active LVS director.
However, there are performance issues using the iptables method compared to arptables , as there is overhead in forwarding/masquerading every packet. You also cannot reuse ports using the iptables method. For example, it is not possible to run two separate Apache HTTP Server services bound to port 80, because both must bind to INADDR_ANY instead of the virtual IP addresses. To configure direct routing using the iptables method, perform the following steps: On each real server, enter the following command for every VIP, port, and protocol (TCP or UDP) combination intended to be serviced for the real server: iptables -t nat -A PREROUTING -p <tcp|udp> -d <vip> --dport <port> -j REDIRECT This command will cause the real servers to process packets destined for the VIP and port that they are given. Save the configuration on each real server: The systemctl enable command will cause the system to reload the iptables configuration on bootup before the network is started. 3.2.4. Direct Routing Using sysctl Another way to deal with the ARP limitation when employing Direct Routing is using the sysctl interface. Administrators can configure two sysctl settings such that the real server will not announce the VIP in ARP requests and will not reply to ARP requests for the VIP address. To enable this, enter the following commands: Alternatively, you may add the following lines to the /etc/sysctl.d/arp.conf file: | [
"arptables -A IN -d <virtual_ip> -j DROP arptables -A OUT -s <virtual_ip> -j mangle --mangle-ip-s <real_ip>",
"ip addr add 192.168.76.24 dev eth0",
"systemctl start firewalld",
"systemctl enable firewalld",
"firewall-cmd --permanent --direct --add-rule ipv4 nat PREROUTING 0 -d vip -p tcp|udp -m tcp|udp --dport port -j REDIRECT",
"firewall-cmd --reload",
"iptables-save > /etc/sysconfig/iptables systemctl enable iptables.service",
"echo 1 > /proc/sys/net/ipv4/conf/eth0/arp_ignore echo 2 > /proc/sys/net/ipv4/conf/eth0/arp_announce",
"net.ipv4.conf.eth0.arp_ignore = 1 net.ipv4.conf.eth0.arp_announce = 2"
]
| https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/load_balancer_administration/s1-lvs-direct-VSA |
22.3. One-Time Passwords | 22.3. One-Time Passwords Important The IdM solution for OTP authentication is only supported for clients running Red Hat Enterprise Linux 7.1 or later. One-time password (OTP) is a password valid for only one authentication session and becomes invalid after use. Unlike a traditional static password, OTP generated by an authentication token keeps changing. OTPs are used as part of two-factor authentication: The user authenticates with a traditional password. The user provides an OTP code generated by a recognized OTP token. Two-factor authentication is considered safer than authentication using a traditional password alone. Even if a potential intruder intercepts the OTP during login, the intercepted OTP will already be invalid by that point because it can only be used for successful authentication once. Warning The following security and other limitations currently relate to the OTP support in IdM: The most important security limitation is the potential vulnerability to replay attacks across the system. Replication is asynchronous, and an OTP code can therefore be reused during the replication period. A user might be able to log on to two servers at the same time. However, this vulnerability is usually difficult to exploit due to comprehensive encryption. It is not possible to obtain a ticket-granting ticket (TGT) using a client that does not support OTP authentication. This might affect certain use cases, such as authentication using the mod_auth_kerb module or the Generic Security Services API (GSSAPI). It is not possible to use password + OTP in the IdM solution if the FIPS mode is enabled. 22.3.1. How OTP Authentication Works in IdM 22.3.1.1. OTP Tokens Supported in IdM Software and Hardware Tokens IdM supports both software and hardware tokens. User-managed and Administrator-managed Tokens Users can manage their own tokens, or the administrator can manage their tokens for them: User-managed tokens Users have full control over user-managed tokens in Identity Management: they are allowed to create, edit, or delete their tokens. Administrator-managed tokens The administrator adds administrator-managed tokens to the users' accounts. Users themselves have read-only access for such tokens: they do not have the permission to manage or modify the tokens and they are not required to configure them in any way. Note that users cannot delete or deactivate a token if it is their only active token at the moment. As an administrator, you cannot delete or deactivate your last active token, but you can delete or deactivate the last active token of another user. Supported OTP Algorithms Identity Management supports the following two standard OTP mechanisms: The HMAC-Based One-Time Password (HOTP) algorithm is based on a counter. HMAC stands for Hashed Message Authentication Code. The Time-Based One-Time Password (TOTP) algorithm is an extension of HOTP to support time-based moving factor. 22.3.1.2. Available OTP Authentication Methods When enabling OTP authentication, you can choose from the following authentication methods: Two-factor authentication (password + OTP) With this method, the user is always required to enter both a standard password and an OTP code. Password With this method, the user still has the option to authenticate using a standard password only. RADIUS proxy server authentication For information on configuring a RADIUS server for OTP validation, see Section 22.3.7, "Migrating from a Proprietary OTP Solution" . 
Global and User-specific Authentication Methods You can configure these authentication methods either globally or for individual users: By default, user-specific authentication method settings take precedence over global settings. If no authentication method is set for a user, the globally-defined methods apply. You can disable per-user authentication method settings for any user. This ensures IdM ignores the per-user settings and always applies the global settings for the user. Combining Multiple Authentication Methods If you set multiple methods at once, either one of them will be sufficient for successful authentication. For example: If you configure both two-factor and password authentication, the user must provide the password (first factor), but providing the OTP (second factor) is optional when using the command line: In the web UI, the user must still provide both factors. Note Individual hosts or services might be configured to require a certain authentication method, for example OTP. If you attempt to authenticate to such hosts or services using the first factor only, you will be denied access. See Section 22.4, "Restricting Access to Services and Hosts Based on How Users Authenticate" . However, a minor exception exists when RADIUS and another authentication method are configured: Kerberos will always use RADIUS, but LDAP will not. LDAP only recognizes the password and two-factor authentication methods. If you use an external two-factor authentication provider, use Kerberos from your applications. If you want to let users authenticate with a password only, use LDAP. It is recommended that the applications leverage Apache modules and SSSD, which allow you to configure either Kerberos or LDAP. 22.3.1.3. GNOME Keyring Service Support IdM integrates OTP authentication with the GNOME Keyring service. Note that GNOME Keyring integration requires the user to enter the first and second factors separately: 22.3.1.4. Offline Authentication with OTP IdM supports offline OTP authentication. However, to be able to log in offline, the user must first authenticate when the system is online by entering the static password and OTP separately: If both passwords are entered separately like this when logging in online, the user will subsequently be able to authenticate even if the central authentication server is unavailable. Note that IdM only prompts for the first-factor traditional static password when the user authenticates offline. IdM also supports entering both the static password and OTP together in one string in the First factor prompt. However, note that this is not compatible with offline OTP authentication. If the user enters both factors in a single prompt, IdM will always have to contact the central authentication server when authenticating, which requires the system to be online. Important If you use OTP authentication on devices that also operate offline, such as laptops, Red Hat recommends entering the static password and OTP separately to make sure offline authentication will be available. Otherwise, IdM will not allow you to log in after the system goes offline. If you want to benefit from OTP offline authentication, apart from entering the static and OTP passwords separately, also make sure to meet the following conditions: The cache_credentials option in the /etc/sssd/sssd.conf file is set to True , which enables caching the first factor password.
The first-factor static password meets the password length requirement defined in the cache_credentials_minimal_first_factor_length option set in /etc/sssd/sssd.conf . The default minimal length is 8 characters. For more information about the option, see the sssd.conf (5) man page. Note that even if the krb5_store_password_if_offline option is set to true in /etc/sssd/sssd.conf , SSSD does not attempt to refresh the Kerberos ticket-granting ticket (TGT) when the system goes online again because the OTP might already be invalid at that point. To obtain a TGT in this situation, the user must authenticate again using both factors. 22.3.2. Required Settings for Configuring a RADIUS Proxy on an IdM Server Running in FIPS Mode In Federal Information Processing Standard (FIPS) mode, OpenSSL disables the use of the MD5 digest algorithm by default. Consequently, as the RADIUS protocol requires MD5 to encrypt a secret between the RADIUS client and the RADIUS server, the unavailability of MD5 in FIPS mode causes the IdM RADIUS proxy server to fail. If the RADIUS server is running on the same host as the IdM master, you can work around the problem and enable MD5 within the secure perimeter by performing the following steps: Create the /etc/systemd/system/radiusd.service.d/ipa-otp.conf file with the following content: Reload the systemd configuration: Start the radiusd service: 22.3.3. Enabling Two Factor Authentication For details on the available authentication methods related to OTP, see Section 22.3.1.2, "Available OTP Authentication Methods" . To enable two factor authentication using: the web UI, see the section called "Web UI: Enabling Two Factor Authentication" . the command line, see the section called "Command Line: Enabling Two Factor Authentication" . Web UI: Enabling Two Factor Authentication To set authentication methods globally for all users: Select IPA Server → Configuration . In the User Options area, select the required Default user authentication types . Figure 22.4. User Authentication Methods To ensure the global settings are not overridden with per-user settings, select Disable per-user override . If you do not select Disable per-user override , authentication methods configured per user take precedence over the global settings. To set authentication methods individually on a per-user basis: Select Identity → Users , and click the name of the user to edit. In the Account Settings area, select the required User authentication types . Figure 22.5. User Authentication Methods Command Line: Enabling Two Factor Authentication To set authentication methods globally for all users: Run the ipa config-mod --user-auth-type command. For example, to set the global authentication method to two-factor authentication: For a list of values accepted by --user-auth-type , run the ipa config-mod --help command. To disable per-user overrides, thus ensuring the global settings are not overridden with per-user settings, add the --user-auth-type=disabled option as well. For example, to set the global authentication method to two-factor authentication and disable per-user overrides: If you do not set --user-auth-type=disabled , authentication methods configured per user take precedence over the global settings. To set authentication methods individually for a specified user: Run the ipa user-mod --user-auth-type command. For example, to set that the user will be required to use two-factor authentication: To set multiple authentication methods, add --user-auth-type multiple times.
For example, to configure both password and two-factor authentication globally for all users: 22.3.4. Adding a User-Managed Software Token Log in with your standard password. Make sure the FreeOTP Authenticator application is installed on your mobile device. To download FreeOTP Authenticator , see the FreeOTP source page . Create the software token in the IdM web UI or from the command line. To create the token in the web UI, click Add under the OTP tokens tab. If you are logged in as the administrator, the OTP Tokens tab is accessible through the Authentication tab. Figure 22.6. Adding an OTP Token for a User To create the token from the command line, run the ipa otptoken-add command. For more information about ipa otptoken-add , run the command with the --help option added. A QR code is displayed in the web UI or on the command line. Scan the QR code with FreeOTP Authenticator to provision the token to the mobile device. 22.3.5. Adding a User-Managed YubiKey Hardware Token A programmable hardware token, such as a YubiKey token, can only be added from the command line. To add a YubiKey hardware token as the user owning the token: Log in with your standard password. Insert your YubiKey token. Run the ipa otptoken-add-yubikey command. If the YubiKey has an empty slot available, the command will select the empty slot automatically. If no empty slot is available, you must select a slot manually using the --slot option. For example: Note that this overwrites the selected slot. 22.3.6. Adding a Token for a User as the Administrator To add a software token as the administrator: Make sure you are logged in as the administrator. Make sure the FreeOTP Authenticator application is installed on the mobile device. To download FreeOTP Authenticator , see the FreeOTP source page . Create the software token in the IdM web UI or from the command line. To create the token in the web UI, select Authentication → OTP Tokens and click Add at the top of the list of OTP tokens. In the Add OTP Token form, select the owner of the token. Figure 22.7. Adding an Administrator-Managed Software Token To create the token from the command line, run the ipa otptoken-add command with the --owner option. For example: A QR code is displayed in the web UI or on the command line. Scan the QR code with FreeOTP Authenticator to provision the token to the mobile device. To add a programmable hardware token, such as a YubiKey token, as the administrator: Make sure you are logged in as the administrator. Insert the YubiKey token. Run the ipa otptoken-add-yubikey command with the --owner option. For example: 22.3.7. Migrating from a Proprietary OTP Solution To enable the migration of a large deployment from a proprietary OTP solution to the IdM-native OTP solution, IdM offers a way to offload OTP validation to a third-party RADIUS server for a subset of users. The administrator creates a set of RADIUS proxies where each proxy can only reference a single RADIUS server. If more than one server needs to be addressed, it is recommended to create a virtual IP solution that points to multiple RADIUS servers. Such a solution needs to be built outside of RHEL IdM with the help of the keepalived daemon, for example. The administrator then assigns one of these proxy sets to a user. As long as the user has a RADIUS proxy set assigned, IdM bypasses all other authentication mechanisms. Note IdM does not provide any token management or synchronization support for tokens in the third-party system.
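The procedure below reduces to a short command sequence. The following is a minimal sketch only: the proxy name radproxy, the IdM user radiususer, the external RADIUS user name radius_user, and the secret MySecret are hypothetical placeholders, and ipa radiusproxy-add prompts for any remaining required information, such as the RADIUS server the proxy forwards to.

# Sketch only: radproxy, radiususer, radius_user, and MySecret are hypothetical placeholders
ipa radiusproxy-add radproxy --secret MySecret
ipa user-mod radiususer --radius=radproxy
ipa user-mod radiususer --radius-username=radius_user

Once the user is ready to move to the IdM-native OTP system, removing the RADIUS proxy assignment from the user account returns authentication to IdM, as described at the end of the procedure.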
To configure a RADIUS server for OTP validation and to add a user to the proxy server: Make sure that the radius user authentication method is enabled. See Section 22.3.3, "Enabling Two Factor Authentication" for details. Run the ipa radiusproxy-add proxy_name --secret secret command to add a RADIUS proxy. The command prompts you to insert the required information. The configuration of the RADIUS proxy requires the use of a common secret between the client and the server to wrap credentials. Specify this secret in the --secret parameter. Run the ipa user-mod radiususer --radius= proxy_name command to assign a user to the added proxy. If required, configure the user name to be sent to RADIUS by running the ipa user-mod radiususer --radius-username= radius_user command. As a result, the user OTP authentication will start to be processed through the RADIUS proxy server. Note To run a RADIUS server on an IdM master with FIPS mode enabled, additionally, perform the steps described in Section 22.3.2, "Required Settings for Configuring a RADIUS Proxy on an IdM Server Running in FIPS Mode" . When the user is ready to be migrated to the IdM native OTP system, you can simply remove the RADIUS proxy assignment for the user. 22.3.7.1. Changing the Timeout Value of a KDC When Running a RADIUS Server in a Slow Network In certain situations, such as running a RADIUS proxy in a slow network, the IdM KDC closes the connection before the RADIUS server responds because the connection timed out while waiting for the user to enter the token. To change the timeout settings of the KDC: Change the value of the timeout parameter in the [otp] section in the /var/kerberos/krb5kdc/kdc.conf file. For example, to set the timeout to 120 seconds: Restart the krb5kdc service: 22.3.8. Promoting the Current Credentials to Two-Factor Authentication If both password and two-factor authentication are configured, but you only authenticated using the password, you might be denied access to certain services or hosts (see Section 22.4, "Restricting Access to Services and Hosts Based on How Users Authenticate" ). In this situation, promote your credentials from one-factor to two-factor authentication by authenticating again: Lock your screen. The default keyboard shortcut to lock the screen is Super key + L . Unlock your screen. When asked for credentials, use both password and OTP. 22.3.9. Resynchronizing an OTP Token See Section B.4.3, "OTP Token Out of Sync" . 22.3.10. Replacing a Lost OTP Token The following procedure describes how a user who lost their OTP token can replace the token: As an administrator, enable password and OTP authentication for the user: The user can now add a new token. For example, to add a new token that has New Token set in the description: For further details, run the ipa otptoken-add command with the --help parameter added. The user can now delete the old token: Optionally, list the tokens associated with the account: Delete the old token. For example, to delete the token with the e1e9e1ef-172c-4fa9-b637-6b017ce79315 ID: As an administrator, enable only OTP authentication for the user: | [
"First Factor: Second Factor (optional):",
"First factor: static_password Second factor: one-time_password",
"First factor: static_password Second factor: one-time_password",
"[Service] Environment=OPENSSL_FIPS_NON_APPROVED_MD5_ALLOW=1",
"systemctl daemon-reload",
"systemctl start radiusd",
"ipa config-mod --user-auth-type=otp",
"ipa config-mod --user-auth-type=otp --user-auth-type=disabled",
"ipa user-mod user --user-auth-type=otp",
"ipa config-mod --user-auth-type=otp --user-auth-type=password",
"ipa otptoken-add ------------------ Added OTP token \"\" ------------------ Unique ID: 7060091b-4e40-47fd-8354-cb32fecd548a Type: TOTP",
"ipa otptoken-add-yubikey --slot=2",
"ipa otptoken-add --owner=user ------------------ Added OTP token \"\" ------------------ Unique ID: 5303baa8-08f9-464e-a74d-3b38de1c041d Type: TOTP",
"ipa otptoken-add-yubikey --owner=user",
"[otp] DEFAULT = { timeout = 120 }",
"systemctl restart krb5kdc",
"ipa user-mod --user-auth-type=password --user-auth-type=otp user_name",
"ipa otptoken-add --desc=\" New Token \"",
"ipa otptoken-find -------------------- 2 OTP tokens matched -------------------- Unique ID: 4ce8ec29-0bf7-4100-ab6d-5d26697f0d8f Type: TOTP Description: New Token Owner: user Unique ID: e1e9e1ef-172c-4fa9-b637-6b017ce79315 Type: TOTP Description: Old Token Owner: user ---------------------------- Number of entries returned 2 ----------------------------",
"# ipa otptoken-del e1e9e1ef-172c-4fa9-b637-6b017ce79315 -------------------------------------------------------- Deleted OTP token \" e1e9e1ef-172c-4fa9-b637-6b017ce79315 \" --------------------------------------------------------",
"ipa user-mod --user-auth-type=otp user_name"
]
| https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/linux_domain_identity_authentication_and_policy_guide/otp |
Chapter 1. OpenShift Container Platform security and compliance | Chapter 1. OpenShift Container Platform security and compliance 1.1. Security overview It is important to understand how to properly secure various aspects of your OpenShift Container Platform cluster. Container security A good starting point for understanding OpenShift Container Platform security is to review the concepts in Understanding container security . This and subsequent sections provide a high-level walkthrough of the container security measures available in OpenShift Container Platform, including solutions for the host layer, the container and orchestration layer, and the build and application layer. These sections also include information on the following topics: Why container security is important and how it compares with existing security standards. Which container security measures are provided by the host (RHCOS and RHEL) layer and which are provided by OpenShift Container Platform. How to evaluate your container content and sources for vulnerabilities. How to design your build and deployment process to proactively check container content. How to control access to containers through authentication and authorization. How networking and attached storage are secured in OpenShift Container Platform. Containerized solutions for API management and SSO. Auditing OpenShift Container Platform auditing provides a security-relevant chronological set of records documenting the sequence of activities that have affected the system by individual users, administrators, or other components of the system. Administrators can configure the audit log policy and view audit logs . Certificates Certificates are used by various components to validate access to the cluster. Administrators can replace the default ingress certificate , add API server certificates , or add a service certificate . You can also review more details about the types of certificates used by the cluster: User-provided certificates for the API server Proxy certificates Service CA certificates Node certificates Bootstrap certificates etcd certificates OLM certificates Aggregated API client certificates Machine Config Operator certificates User-provided certificates for default ingress Ingress certificates Monitoring and cluster logging Operator component certificates Control plane certificates Encrypting data You can enable etcd encryption for your cluster to provide an additional layer of data security. For example, it can help protect against the loss of sensitive data if an etcd backup is exposed to the incorrect parties. Vulnerability scanning Administrators can use the Red Hat Quay Container Security Operator to run vulnerability scans and review information about detected vulnerabilities. 1.2. Compliance overview For many OpenShift Container Platform customers, regulatory readiness, or compliance, on some level is required before any systems can be put into production. That regulatory readiness can be imposed by national standards, industry standards, or the organization's corporate governance framework. Compliance checking Administrators can use the Compliance Operator to run compliance scans and recommend remediations for any issues found. The oc-compliance plugin is an OpenShift CLI ( oc ) plugin that provides a set of utilities to easily interact with the Compliance Operator. File integrity checking Administrators can use the File Integrity Operator to continually run file integrity checks on cluster nodes and provide a log of files that have been modified. 1.3.
Additional resources Understanding authentication Configuring the internal OAuth server Understanding identity provider configuration Using RBAC to define and apply permissions Managing security context constraints | null | https://docs.redhat.com/en/documentation/openshift_container_platform/4.14/html/security_and_compliance/security-compliance-overview |
Building applications | Building applications OpenShift Dedicated 4 Configuring OpenShift Dedicated for your applications Red Hat OpenShift Documentation Team | null | https://docs.redhat.com/en/documentation/openshift_dedicated/4/html/building_applications/index |
Chapter 4. MutatingWebhookConfiguration [admissionregistration.k8s.io/v1] | Chapter 4. MutatingWebhookConfiguration [admissionregistration.k8s.io/v1] Description MutatingWebhookConfiguration describes the configuration of an admission webhook that accepts or rejects and may change the object. Type object 4.1. Specification Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ObjectMeta Standard object metadata; More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata . webhooks array Webhooks is a list of webhooks and the affected resources and operations. webhooks[] object MutatingWebhook describes an admission webhook and the resources and operations it applies to. 4.1.1. .webhooks Description Webhooks is a list of webhooks and the affected resources and operations. Type array 4.1.2. .webhooks[] Description MutatingWebhook describes an admission webhook and the resources and operations it applies to. Type object Required name clientConfig sideEffects admissionReviewVersions Property Type Description admissionReviewVersions array (string) AdmissionReviewVersions is an ordered list of preferred AdmissionReview versions the Webhook expects. The API server will try to use the first version in the list which it supports. If none of the versions specified in this list are supported by the API server, validation will fail for this object. If a persisted webhook configuration specifies allowed versions and does not include any versions known to the API Server, calls to the webhook will fail and be subject to the failure policy. clientConfig object WebhookClientConfig contains the information to make a TLS connection with the webhook failurePolicy string FailurePolicy defines how unrecognized errors from the admission endpoint are handled - allowed values are Ignore or Fail. Defaults to Fail. Possible enum values: - "Fail" means that an error calling the webhook causes the admission to fail. - "Ignore" means that an error calling the webhook is ignored. matchConditions array MatchConditions is a list of conditions that must be met for a request to be sent to this webhook. Match conditions filter requests that have already been matched by the rules, namespaceSelector, and objectSelector. An empty list of matchConditions matches all requests. There are a maximum of 64 match conditions allowed. The exact matching logic is (in order): 1. If ANY matchCondition evaluates to FALSE, the webhook is skipped. 2. If ALL matchConditions evaluate to TRUE, the webhook is called. 3. If any matchCondition evaluates to an error (but none are FALSE): - If failurePolicy=Fail, reject the request - If failurePolicy=Ignore, the error is ignored and the webhook is skipped matchConditions[] object MatchCondition represents a condition which must be fulfilled for a request to be sent to a webhook. matchPolicy string matchPolicy defines how the "rules" list is used to match incoming requests.
Allowed values are "Exact" or "Equivalent". - Exact: match a request only if it exactly matches a specified rule. For example, if deployments can be modified via apps/v1, apps/v1beta1, and extensions/v1beta1, but "rules" only included apiGroups:["apps"], apiVersions:["v1"], resources: ["deployments"] , a request to apps/v1beta1 or extensions/v1beta1 would not be sent to the webhook. - Equivalent: match a request if it modifies a resource listed in rules, even via another API group or version. For example, if deployments can be modified via apps/v1, apps/v1beta1, and extensions/v1beta1, and "rules" only included apiGroups:["apps"], apiVersions:["v1"], resources: ["deployments"] , a request to apps/v1beta1 or extensions/v1beta1 would be converted to apps/v1 and sent to the webhook. Defaults to "Equivalent" Possible enum values: - "Equivalent" means requests should be sent to the webhook if they modify a resource listed in rules via another API group or version. - "Exact" means requests should only be sent to the webhook if they exactly match a given rule. name string The name of the admission webhook. Name should be fully qualified, e.g., imagepolicy.kubernetes.io, where "imagepolicy" is the name of the webhook, and kubernetes.io is the name of the organization. Required. namespaceSelector LabelSelector NamespaceSelector decides whether to run the webhook on an object based on whether the namespace for that object matches the selector. If the object itself is a namespace, the matching is performed on object.metadata.labels. If the object is another cluster scoped resource, it never skips the webhook. For example, to run the webhook on any objects whose namespace is not associated with "runlevel" of "0" or "1"; you will set the selector as follows: "namespaceSelector": { "matchExpressions": [ { "key": "runlevel", "operator": "NotIn", "values": [ "0", "1" ] } ] } If instead you want to only run the webhook on any objects whose namespace is associated with the "environment" of "prod" or "staging"; you will set the selector as follows: "namespaceSelector": { "matchExpressions": [ { "key": "environment", "operator": "In", "values": [ "prod", "staging" ] } ] } See https://kubernetes.io/docs/concepts/overview/working-with-objects/labels/ for more examples of label selectors. Default to the empty LabelSelector, which matches everything. objectSelector LabelSelector ObjectSelector decides whether to run the webhook based on if the object has matching labels. objectSelector is evaluated against both the oldObject and newObject that would be sent to the webhook, and is considered to match if either object matches the selector. A null object (oldObject in the case of create, or newObject in the case of delete) or an object that cannot have labels (like a DeploymentRollback or a PodProxyOptions object) is not considered to match. Use the object selector only if the webhook is opt-in, because end users may skip the admission webhook by setting the labels. Default to the empty LabelSelector, which matches everything. reinvocationPolicy string reinvocationPolicy indicates whether this webhook should be called multiple times as part of a single admission evaluation. Allowed values are "Never" and "IfNeeded". Never: the webhook will not be called more than once in a single admission evaluation. IfNeeded: the webhook will be called at least one additional time as part of the admission evaluation if the object being admitted is modified by other admission plugins after the initial webhook call.
Webhooks that specify this option must be idempotent, able to process objects they previously admitted. Note: * the number of additional invocations is not guaranteed to be exactly one. * if additional invocations result in further modifications to the object, webhooks are not guaranteed to be invoked again. * webhooks that use this option may be reordered to minimize the number of additional invocations. * to validate an object after all mutations are guaranteed complete, use a validating admission webhook instead. Defaults to "Never". Possible enum values: - "IfNeeded" indicates that the webhook may be called at least one additional time as part of the admission evaluation if the object being admitted is modified by other admission plugins after the initial webhook call. - "Never" indicates that the webhook must not be called more than once in a single admission evaluation. rules array Rules describes what operations on what resources/subresources the webhook cares about. The webhook cares about an operation if it matches any Rule. However, in order to prevent ValidatingAdmissionWebhooks and MutatingAdmissionWebhooks from putting the cluster in a state which cannot be recovered from without completely disabling the plugin, ValidatingAdmissionWebhooks and MutatingAdmissionWebhooks are never called on admission requests for ValidatingWebhookConfiguration and MutatingWebhookConfiguration objects. rules[] object RuleWithOperations is a tuple of Operations and Resources. It is recommended to make sure that all the tuple expansions are valid. sideEffects string SideEffects states whether this webhook has side effects. Acceptable values are: None, NoneOnDryRun (webhooks created via v1beta1 may also specify Some or Unknown). Webhooks with side effects MUST implement a reconciliation system, since a request may be rejected by a future step in the admission chain and the side effects therefore need to be undone. Requests with the dryRun attribute will be auto-rejected if they match a webhook with sideEffects == Unknown or Some. Possible enum values: - "None" means that calling the webhook will have no side effects. - "NoneOnDryRun" means that calling the webhook will possibly have side effects, but if the request being reviewed has the dry-run attribute, the side effects will be suppressed. - "Some" means that calling the webhook will possibly have side effects. If a request with the dry-run attribute would trigger a call to this webhook, the request will instead fail. - "Unknown" means that no information is known about the side effects of calling the webhook. If a request with the dry-run attribute would trigger a call to this webhook, the request will instead fail. timeoutSeconds integer TimeoutSeconds specifies the timeout for this webhook. After the timeout passes, the webhook call will be ignored or the API call will fail based on the failure policy. The timeout value must be between 1 and 30 seconds. Default to 10 seconds. 4.1.3. .webhooks[].clientConfig Description WebhookClientConfig contains the information to make a TLS connection with the webhook Type object Property Type Description caBundle string caBundle is a PEM encoded CA bundle which will be used to validate the webhook's server certificate. If unspecified, system trust roots on the apiserver are used. service object ServiceReference holds a reference to Service.legacy.k8s.io url string url gives the location of the webhook, in standard URL form ( scheme://host:port/path ). Exactly one of url or service must be specified. 
The host should not refer to a service running in the cluster; use the service field instead. The host might be resolved via external DNS in some apiservers (e.g., kube-apiserver cannot resolve in-cluster DNS as that would be a layering violation). host may also be an IP address. Please note that using localhost or 127.0.0.1 as a host is risky unless you take great care to run this webhook on all hosts which run an apiserver which might need to make calls to this webhook. Such installs are likely to be non-portable, i.e., not easy to turn up in a new cluster. The scheme must be "https"; the URL must begin with "https://". A path is optional, and if present may be any string permissible in a URL. You may use the path to pass an arbitrary string to the webhook, for example, a cluster identifier. Attempting to use a user or basic auth e.g. "user:password@" is not allowed. Fragments ("#... ") and query parameters ("?... ") are not allowed, either. 4.1.4. .webhooks[].clientConfig.service Description ServiceReference holds a reference to Service.legacy.k8s.io Type object Required namespace name Property Type Description name string name is the name of the service. Required namespace string namespace is the namespace of the service. Required path string path is an optional URL path which will be sent in any request to this service. port integer If specified, the port on the service that is hosting the webhook. Default to 443 for backward compatibility. port should be a valid port number (1-65535, inclusive). 4.1.5. .webhooks[].matchConditions Description MatchConditions is a list of conditions that must be met for a request to be sent to this webhook. Match conditions filter requests that have already been matched by the rules, namespaceSelector, and objectSelector. An empty list of matchConditions matches all requests. There are a maximum of 64 match conditions allowed. The exact matching logic is (in order): 1. If ANY matchCondition evaluates to FALSE, the webhook is skipped. 2. If ALL matchConditions evaluate to TRUE, the webhook is called. 3. If any matchCondition evaluates to an error (but none are FALSE): - If failurePolicy=Fail, reject the request - If failurePolicy=Ignore, the error is ignored and the webhook is skipped Type array 4.1.6. .webhooks[].matchConditions[] Description MatchCondition represents a condition which must be fulfilled for a request to be sent to a webhook. Type object Required name expression Property Type Description expression string Expression represents the expression which will be evaluated by CEL. Must evaluate to bool. CEL expressions have access to the contents of the AdmissionRequest and Authorizer, organized into CEL variables: 'object' - The object from the incoming request. The value is null for DELETE requests. 'oldObject' - The existing object. The value is null for CREATE requests. 'request' - Attributes of the admission request(/pkg/apis/admission/types.go#AdmissionRequest). 'authorizer' - A CEL Authorizer. May be used to perform authorization checks for the principal (user or service account) of the request. See https://pkg.go.dev/k8s.io/apiserver/pkg/cel/library#Authz 'authorizer.requestResource' - A CEL ResourceCheck constructed from the 'authorizer' and configured with the request resource. Documentation on CEL: https://kubernetes.io/docs/reference/using-api/cel/ Required. name string Name is an identifier for this match condition, used for strategic merging of MatchConditions, as well as providing an identifier for logging purposes.
A good name should be descriptive of the associated expression. Name must be a qualified name consisting of alphanumeric characters, '-', '_' or '.', and must start and end with an alphanumeric character (e.g. 'MyName', or 'my.name', or '123-abc', regex used for validation is '([A-Za-z0-9][-A-Za-z0-9_.]*)?[A-Za-z0-9]') with an optional DNS subdomain prefix and '/' (e.g. 'example.com/MyName') Required. 4.1.7. .webhooks[].rules Description Rules describes what operations on what resources/subresources the webhook cares about. The webhook cares about an operation if it matches any Rule. However, in order to prevent ValidatingAdmissionWebhooks and MutatingAdmissionWebhooks from putting the cluster in a state which cannot be recovered from without completely disabling the plugin, ValidatingAdmissionWebhooks and MutatingAdmissionWebhooks are never called on admission requests for ValidatingWebhookConfiguration and MutatingWebhookConfiguration objects. Type array 4.1.8. .webhooks[].rules[] Description RuleWithOperations is a tuple of Operations and Resources. It is recommended to make sure that all the tuple expansions are valid. Type object Property Type Description apiGroups array (string) APIGroups is the API groups the resources belong to. '*' is all groups. If '*' is present, the length of the slice must be one. Required. apiVersions array (string) APIVersions is the API versions the resources belong to. '*' is all versions. If '*' is present, the length of the slice must be one. Required. operations array (string) Operations is the operations the admission hook cares about - CREATE, UPDATE, DELETE, CONNECT or * for all of those operations and any future admission operations that are added. If '*' is present, the length of the slice must be one. Required. resources array (string) Resources is a list of resources this rule applies to. For example: 'pods' means pods. 'pods/log' means the log subresource of pods. '*' means all resources, but not subresources. 'pods/*' means all subresources of pods. '*/scale' means all scale subresources. '*/*' means all resources and their subresources. If wildcard is present, the validation rule will ensure resources do not overlap with each other. Depending on the enclosing object, subresources might not be allowed. Required. scope string scope specifies the scope of this rule. Valid values are "Cluster", "Namespaced", and "*" "Cluster" means that only cluster-scoped resources will match this rule. Namespace API objects are cluster-scoped. "Namespaced" means that only namespaced resources will match this rule. "*" means that there are no scope restrictions. Subresources match the scope of their parent resource. Default is "*". 4.2. API endpoints The following API endpoints are available: /apis/admissionregistration.k8s.io/v1/mutatingwebhookconfigurations DELETE : delete collection of MutatingWebhookConfiguration GET : list or watch objects of kind MutatingWebhookConfiguration POST : create a MutatingWebhookConfiguration /apis/admissionregistration.k8s.io/v1/watch/mutatingwebhookconfigurations GET : watch individual changes to a list of MutatingWebhookConfiguration. deprecated: use the 'watch' parameter with a list operation instead.
/apis/admissionregistration.k8s.io/v1/mutatingwebhookconfigurations/{name} DELETE : delete a MutatingWebhookConfiguration GET : read the specified MutatingWebhookConfiguration PATCH : partially update the specified MutatingWebhookConfiguration PUT : replace the specified MutatingWebhookConfiguration /apis/admissionregistration.k8s.io/v1/watch/mutatingwebhookconfigurations/{name} GET : watch changes to an object of kind MutatingWebhookConfiguration. deprecated: use the 'watch' parameter with a list operation instead, filtered to a single item with the 'fieldSelector' parameter. 4.2.1. /apis/admissionregistration.k8s.io/v1/mutatingwebhookconfigurations HTTP method DELETE Description delete collection of MutatingWebhookConfiguration Table 4.1. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed Table 4.2. HTTP responses HTTP code Response body 200 - OK Status schema 401 - Unauthorized Empty HTTP method GET Description list or watch objects of kind MutatingWebhookConfiguration Table 4.3. HTTP responses HTTP code Response body 200 - OK MutatingWebhookConfigurationList schema 401 - Unauthorized Empty HTTP method POST Description create a MutatingWebhookConfiguration Table 4.4. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 4.5. Body parameters Parameter Type Description body MutatingWebhookConfiguration schema Table 4.6. HTTP responses HTTP code Response body 200 - OK MutatingWebhookConfiguration schema 201 - Created MutatingWebhookConfiguration schema 202 - Accepted MutatingWebhookConfiguration schema 401 - Unauthorized Empty 4.2.2. /apis/admissionregistration.k8s.io/v1/watch/mutatingwebhookconfigurations HTTP method GET Description watch individual changes to a list of MutatingWebhookConfiguration. deprecated: use the 'watch' parameter with a list operation instead. Table 4.7. HTTP responses HTTP code Response body 200 - OK WatchEvent schema 401 - Unauthorized Empty 4.2.3. /apis/admissionregistration.k8s.io/v1/mutatingwebhookconfigurations/{name} Table 4.8.
Global path parameters Parameter Type Description name string name of the MutatingWebhookConfiguration HTTP method DELETE Description delete a MutatingWebhookConfiguration Table 4.9. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed Table 4.10. HTTP responses HTTP code Response body 200 - OK Status schema 202 - Accepted Status schema 401 - Unauthorized Empty HTTP method GET Description read the specified MutatingWebhookConfiguration Table 4.11. HTTP responses HTTP code Response body 200 - OK MutatingWebhookConfiguration schema 401 - Unauthorized Empty HTTP method PATCH Description partially update the specified MutatingWebhookConfiguration Table 4.12. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 4.13. HTTP responses HTTP code Response body 200 - OK MutatingWebhookConfiguration schema 201 - Created MutatingWebhookConfiguration schema 401 - Unauthorized Empty HTTP method PUT Description replace the specified MutatingWebhookConfiguration Table 4.14. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields.
This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 4.15. Body parameters Parameter Type Description body MutatingWebhookConfiguration schema Table 4.16. HTTP responses HTTP code Response body 200 - OK MutatingWebhookConfiguration schema 201 - Created MutatingWebhookConfiguration schema 401 - Unauthorized Empty 4.2.4. /apis/admissionregistration.k8s.io/v1/watch/mutatingwebhookconfigurations/{name} Table 4.17. Global path parameters Parameter Type Description name string name of the MutatingWebhookConfiguration HTTP method GET Description watch changes to an object of kind MutatingWebhookConfiguration. deprecated: use the 'watch' parameter with a list operation instead, filtered to a single item with the 'fieldSelector' parameter. Table 4.18. HTTP responses HTTP code Response body 200 - OK WatchEvent schema 401 - Unauthorized Empty
| null | https://docs.redhat.com/en/documentation/openshift_container_platform/4.18/html/extension_apis/mutatingwebhookconfiguration-admissionregistration-k8s-io-v1 |
Chapter 15. Image-based upgrade for single-node OpenShift clusters | Chapter 15. Image-based upgrade for single-node OpenShift clusters 15.1. Understanding the image-based upgrade for single-node OpenShift clusters From OpenShift Container Platform 4.14.13, the Lifecycle Agent provides you with an alternative way to upgrade the platform version of a single-node OpenShift cluster. The image-based upgrade is faster than the standard upgrade method and allows you to directly upgrade from OpenShift Container Platform <4.y> to <4.y+2>, and <4.y.z> to <4.y.z+n>. This upgrade method utilizes a generated OCI image from a dedicated seed cluster that is installed on the target single-node OpenShift cluster as a new ostree stateroot. A seed cluster is a single-node OpenShift cluster deployed with the target OpenShift Container Platform version, Day 2 Operators, and configurations that are common to all target clusters. You can use the seed image, which is generated from the seed cluster, to upgrade the platform version on any single-node OpenShift cluster that has the same combination of hardware, Day 2 Operators, and cluster configuration as the seed cluster. Important The image-based upgrade uses custom images that are specific to the hardware platform that the clusters are running on. Each different hardware platform requires a separate seed image. The Lifecycle Agent uses two custom resources (CRs) on the participating clusters to orchestrate the upgrade: On the seed cluster, the SeedGenerator CR allows for the seed image generation. This CR specifies the repository to push the seed image to. On the target cluster, the ImageBasedUpgrade CR specifies the seed image for the upgrade of the target cluster and the backup configurations for your workloads. Example SeedGenerator CR apiVersion: lca.openshift.io/v1 kind: SeedGenerator metadata: name: seedimage spec: seedImage: <seed_image> Example ImageBasedUpgrade CR apiVersion: lca.openshift.io/v1 kind: ImageBasedUpgrade metadata: name: upgrade spec: stage: Idle 1 seedImageRef: 2 version: <target_version> image: <seed_container_image> pullSecretRef: name: <seed_pull_secret> autoRollbackOnFailure: {} # initMonitorTimeoutSeconds: 1800 3 extraManifests: 4 - name: example-extra-manifests namespace: openshift-lifecycle-agent oadpContent: 5 - name: oadp-cm-example namespace: openshift-adp 1 Stage of the ImageBasedUpgrade CR. The value can be Idle , Prep , Upgrade , or Rollback . 2 Target platform version, seed image to be used, and the secret required to access the image. 3 Optional: Time frame in seconds to roll back when the upgrade does not complete within that time frame after the first reboot. If not defined or set to 0 , the default value of 1800 seconds (30 minutes) is used. 4 Optional: List of ConfigMap resources that contain your custom catalog sources to retain after the upgrade, and your extra manifests to apply to the target cluster that are not part of the seed image. 5 List of ConfigMap resources that contain the OADP Backup and Restore CRs. 15.1.1. Stages of the image-based upgrade After generating the seed image on the seed cluster, you can move through the stages on the target cluster by setting the spec.stage field to one of the following values in the ImageBasedUpgrade CR: Idle Prep Upgrade Rollback (Optional) Figure 15.1. Stages of the image-based upgrade 15.1.1.1. Idle stage The Lifecycle Agent creates an ImageBasedUpgrade CR set to stage: Idle when the Operator is first deployed. This is the default stage. 
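In practice, a stage transition is just an update of the spec.stage field of the ImageBasedUpgrade CR. As a minimal sketch (the CR name upgrade and the namespace match the example above; set the stage value that is appropriate for where you are in the flow), moving from Idle to Prep can be done with a merge patch:

oc patch imagebasedupgrades.lca.openshift.io upgrade \
  -p='{"spec": {"stage": "Prep"}}' \
  --type=merge -n openshift-lifecycle-agent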
There is no ongoing upgrade and the cluster is ready to move to the Prep stage. Figure 15.2. Transition from Idle stage You also move to the Idle stage to do one of the following steps: Finalize a successful upgrade Finalize a rollback Cancel an ongoing upgrade until the pre-pivot phase in the Upgrade stage Moving to the Idle stage ensures that the Lifecycle Agent cleans up resources, so that the cluster is ready for upgrades again. Figure 15.3. Transitions to Idle stage Important If you are using RHACM and you cancel an upgrade, you must remove the import.open-cluster-management.io/disable-auto-import annotation from the target managed cluster to re-enable the automatic import of the cluster. 15.1.1.2. Prep stage Note You can complete this stage before a scheduled maintenance window. For the Prep stage, you specify the following upgrade details in the ImageBasedUpgrade CR: seed image to use resources to back up extra manifests to apply and custom catalog sources to retain after the upgrade, if any Then, based on what you specify, the Lifecycle Agent prepares for the upgrade without impacting the current running version. During this stage, the Lifecycle Agent ensures that the target cluster is ready to proceed to the Upgrade stage by checking if it meets certain conditions. The Operator pulls the seed image to the target cluster with additional container images specified in the seed image. The Lifecycle Agent checks if there is enough space on the container storage disk and, if necessary, the Operator deletes unpinned images until the disk usage is below the specified threshold. For more information about how to configure or disable the cleaning up of the container storage disk, see "Configuring the automatic image cleanup of the container storage disk". You also prepare backup resources with the OADP Operator's Backup and Restore CRs. These CRs are used in the Upgrade stage to reconfigure the cluster, register the cluster with RHACM, and restore application artifacts. In addition to the OADP Operator, the Lifecycle Agent uses the ostree versioning system to create a backup, which allows complete cluster reconfiguration after both upgrade and rollback. After the Prep stage finishes, you can cancel the upgrade process by moving to the Idle stage or you can start the upgrade by moving to the Upgrade stage in the ImageBasedUpgrade CR. If you cancel the upgrade, the Operator performs cleanup operations. Figure 15.4. Transition from Prep stage 15.1.1.3. Upgrade stage The Upgrade stage consists of two phases: pre-pivot Just before pivoting to the new stateroot, the Lifecycle Agent collects the required cluster-specific artifacts and stores them in the new stateroot. The backups of your cluster resources specified in the Prep stage are created on a compatible object storage solution. The Lifecycle Agent exports CRs specified in the extraManifests field in the ImageBasedUpgrade CR or the CRs described in the ZTP policies that are bound to the target cluster. After the pre-pivot phase has completed, the Lifecycle Agent sets the new stateroot deployment as the default boot entry and reboots the node. post-pivot After booting from the new stateroot, the Lifecycle Agent also regenerates the seed image's cluster cryptography. This ensures that each single-node OpenShift cluster upgraded with the same seed image has unique and valid cryptographic objects. The Operator then reconfigures the cluster by applying cluster-specific artifacts that were collected in the pre-pivot phase.
The Operator applies all saved CRs, and restores the backups. After the upgrade has completed and you are satisfied with the changes, you can finalize the upgrade by moving to the Idle stage. Important When you finalize the upgrade, you cannot roll back to the original release. Figure 15.5. Transitions from Upgrade stage If you want to cancel the upgrade, you can do so until the pre-pivot phase of the Upgrade stage. If you encounter issues after the upgrade, you can move to the Rollback stage for a manual rollback. 15.1.1.4. Rollback stage The Rollback stage can be initiated manually or automatically upon failure. During the Rollback stage, the Lifecycle Agent sets the original ostree stateroot deployment as default. Then, the node reboots with the release of OpenShift Container Platform and application configurations. Warning If you move to the Idle stage after a rollback, the Lifecycle Agent cleans up resources that can be used to troubleshoot a failed upgrade. The Lifecycle Agent initiates an automatic rollback if the upgrade does not complete within a specified time limit. For more information about the automatic rollback, see the "Moving to the Rollback stage with Lifecycle Agent" or "Moving to the Rollback stage with Lifecycle Agent and GitOps ZTP" sections. Figure 15.6. Transition from Rollback stage Additional resources Configuring the automatic image cleanup of the container storage disk Performing an image-based upgrade for single-node OpenShift clusters with Lifecycle Agent Performing an image-based upgrade for single-node OpenShift clusters using GitOps ZTP 15.1.2. Guidelines for the image-based upgrade For a successful image-based upgrade, your deployments must meet certain requirements. There are different deployment methods in which you can perform the image-based upgrade: GitOps ZTP You use the GitOps Zero Touch Provisioning (ZTP) to deploy and configure your clusters. Non-GitOps You manually deploy and configure your clusters. You can perform an image-based upgrade in disconnected environments. For more information about how to mirror images for a disconnected environment, see "Mirroring images for a disconnected installation". Additional resources Mirroring images for a disconnected installation 15.1.2.1. Minimum software version of components Depending on your deployment method, the image-based upgrade requires the following minimum software versions. Table 15.1. Minimum software version of components Component Software version Required Lifecycle Agent 4.16 Yes OADP Operator 1.4.1 Yes Managed cluster version 4.14.13 Yes Hub cluster version 4.16 No RHACM 2.10.2 No GitOps ZTP plugin 4.16 Only for GitOps ZTP deployment method Red Hat OpenShift GitOps 1.12 Only for GitOps ZTP deployment method Topology Aware Lifecycle Manager (TALM) 4.16 Only for GitOps ZTP deployment method Local Storage Operator [1] 4.14 Yes Logical Volume Manager (LVM) Storage [1] 4.14.2 Yes The persistent storage must be provided by either the LVM Storage or the Local Storage Operator, not both. 15.1.2.2. Hub cluster guidelines If you are using Red Hat Advanced Cluster Management (RHACM), your hub cluster needs to meet the following conditions: To avoid including any RHACM resources in your seed image, you need to disable all optional RHACM add-ons before generating the seed image. Your hub cluster must be upgraded to at least the target version before performing an image-based upgrade on a target single-node OpenShift cluster. 15.1.2.3. 
Seed image guidelines The seed image targets a set of single-node OpenShift clusters with the same hardware and similar configuration. This means that the seed cluster must match the configuration of the target clusters for the following items: CPU topology Number of CPU cores Tuned performance configuration, such as number of reserved CPUs MachineConfig resources for the target cluster IP version Note Dual-stack networking is not supported in this release. Set of Day 2 Operators, including the Lifecycle Agent and the OADP Operator Disconnected registry FIPS configuration The following configurations only have to partially match on the participating clusters: If the target cluster has a proxy configuration, the seed cluster must have a proxy configuration too but the configuration does not have to be the same. A dedicated partition on the primary disk for container storage is required on all participating clusters. However, the size and start of the partition does not have to be the same. Only the spec.config.storage.disks.partitions.label: varlibcontainers label in the MachineConfig CR must match on both the seed and target clusters. For more information about how to create the disk partition, see "Configuring a shared container partition between ostree stateroots" or "Configuring a shared container partition between ostree stateroots when using GitOps ZTP". For more information about what to include in the seed image, see "Seed image configuration" and "Seed image configuration using the RAN DU profile". Additional resources Configuring a shared container partition between ostree stateroots Configuring a shared container partition between ostree stateroots when using GitOps ZTP Seed image configuration 15.1.2.4. OADP backup and restore guidelines With the OADP Operator, you can back up and restore your applications on your target clusters by using Backup and Restore CRs wrapped in ConfigMap objects. The application must work on the current and the target OpenShift Container Platform versions so that they can be restored after the upgrade. The backups must include resources that were initially created. The following resources must be excluded from the backup: pods endpoints controllerrevision podmetrics packagemanifest replicaset localvolume , if using Local Storage Operator (LSO) There are two local storage implementations for single-node OpenShift: Local Storage Operator (LSO) The Lifecycle Agent automatically backs up and restores the required artifacts, including localvolume resources and their associated StorageClass resources. You must exclude the persistentvolumes resource in the application Backup CR. LVM Storage You must create the Backup and Restore CRs for LVM Storage artifacts. You must include the persistentVolumes resource in the application Backup CR. For the image-based upgrade, only one Operator is supported on a given target cluster. Important For both Operators, you must not apply the Operator CRs as extra manifests through the ImageBasedUpgrade CR. The persistent volume contents are preserved and used after the pivot. When you are configuring the DataProtectionApplication CR, you must ensure that the .spec.configuration.restic.enable is set to false for an image-based upgrade. This disables Container Storage Interface integration. 15.1.2.4.1. lca.openshift.io/apply-wave guidelines The lca.openshift.io/apply-wave annotation determines the apply order of Backup or Restore CRs. The value of the annotation must be a string number. 
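For illustration, a hypothetical platform Restore CR annotated with a low wave value might look like this (only the relevant fields are shown; the names are illustrative):

apiVersion: velero.io/v1
kind: Restore
metadata:
  name: example-platform-restore
  namespace: openshift-adp
  annotations:
    lca.openshift.io/apply-wave: "1" # quoted so that the value is a string number
spec:
  backupName: example-platform-backup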
If you define the lca.openshift.io/apply-wave annotation in the Backup or Restore CRs, they are applied in increasing order based on the annotation value. If you do not define the annotation, they are applied together. The lca.openshift.io/apply-wave annotation must be numerically lower in your platform Restore CRs, for example RHACM and LVM Storage artifacts, than that of the application. This way, the platform artifacts are restored before your applications. If your application includes cluster-scoped resources, you must create separate Backup and Restore CRs to scope the backup to the specific cluster-scoped resources created by the application. The Restore CR for the cluster-scoped resources must be restored before the remaining application Restore CR(s). 15.1.2.4.2. lca.openshift.io/apply-label guidelines You can back up specific resources exclusively with the lca.openshift.io/apply-label annotation. Based on which resources you define in the annotation, the Lifecycle Agent applies the lca.openshift.io/backup: <backup_name> label and adds the labelSelector.matchLabels.lca.openshift.io/backup: <backup_name> label selector to the specified resources when creating the Backup CRs. To use the lca.openshift.io/apply-label annotation for backing up specific resources, the resources listed in the annotation must also be included in the spec section. If the lca.openshift.io/apply-label annotation is used in the Backup CR, only the resources listed in the annotation are backed up, even if other resource types are specified in the spec section or not. Example CR apiVersion: velero.io/v1 kind: Backup metadata: name: acm-klusterlet namespace: openshift-adp annotations: lca.openshift.io/apply-label: rbac.authorization.k8s.io/v1/clusterroles/klusterlet,apps/v1/deployments/open-cluster-management-agent/klusterlet 1 labels: velero.io/storage-location: default spec: includedNamespaces: - open-cluster-management-agent includedClusterScopedResources: - clusterroles includedNamespaceScopedResources: - deployments 1 The value must be a list of comma-separated objects in group/version/resource/name format for cluster-scoped resources or group/version/resource/namespace/name format for namespace-scoped resources, and it must be attached to the related Backup CR. 15.1.2.5. Extra manifest guidelines The Lifecycle Agent uses extra manifests to restore your target clusters after rebooting with the new stateroot deployment and before restoring application artifacts. Different deployment methods require a different way to apply the extra manifests: GitOps ZTP You use the lca.openshift.io/target-ocp-version: <target_ocp_version> label to mark the extra manifests that the Lifecycle Agent must extract and apply after the pivot. You can specify the number of manifests labeled with lca.openshift.io/target-ocp-version by using the lca.openshift.io/target-ocp-version-manifest-count annotation in the ImageBasedUpgrade CR. If specified, the Lifecycle Agent verifies that the number of manifests extracted from policies matches the number provided in the annotation during the prep and upgrade stages. Example for the lca.openshift.io/target-ocp-version-manifest-count annotation apiVersion: lca.openshift.io/v1 kind: ImageBasedUpgrade metadata: annotations: lca.openshift.io/target-ocp-version-manifest-count: "5" name: upgrade Non-Gitops You mark your extra manifests with the lca.openshift.io/apply-wave annotation to determine the apply order. 
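For example, a hypothetical extra manifest marked for the non-GitOps flow might look like this (the namespace name and the wave value are illustrative):

apiVersion: v1
kind: Namespace
metadata:
  name: example-workload-ns
  annotations:
    lca.openshift.io/apply-wave: "1" # applied before manifests with higher wave values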
The labeled extra manifests are wrapped in ConfigMap objects and referenced in the ImageBasedUpgrade CR that the Lifecycle Agent uses after the pivot. If the target cluster uses custom catalog sources, you must include them as extra manifests that point to the correct release version. Important You cannot apply the following items as extra manifests: MachineConfig objects OLM Operator subscriptions Additional resources Performing an image-based upgrade for single-node OpenShift clusters with Lifecycle Agent Preparing the hub cluster for ZTP Creating ConfigMap objects for the image-based upgrade with Lifecycle Agent Creating ConfigMap objects for the image-based upgrade with GitOps ZTP About installing OADP 15.2. Preparing for an image-based upgrade for single-node OpenShift clusters 15.2.1. Configuring a shared container partition for the image-based upgrade Your single-node OpenShift clusters need to have a shared /var/lib/containers partition for the image-based upgrade. You can do this at install time. 15.2.1.1. Configuring a shared container partition between ostree stateroots Apply a MachineConfig to both the seed and the target clusters during installation time to create a separate partition and share the /var/lib/containers partition between the two ostree stateroots that will be used during the upgrade process. Important You must complete this procedure at installation time. Procedure Apply a MachineConfig to create a separate partition: apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfig metadata: labels: machineconfiguration.openshift.io/role: master name: 98-var-lib-containers-partitioned spec: config: ignition: version: 3.2.0 storage: disks: - device: /dev/disk/by-path/pci-<root_disk> 1 partitions: - label: var-lib-containers startMiB: <start_of_partition> 2 sizeMiB: <partition_size> 3 filesystems: - device: /dev/disk/by-partlabel/var-lib-containers format: xfs mountOptions: - defaults - prjquota path: /var/lib/containers wipeFilesystem: true systemd: units: - contents: |- # Generated by Butane [Unit] Before=local-fs.target Requires=systemd-fsck@dev-disk-by\x2dpartlabel-var\x2dlib\x2dcontainers.service After=systemd-fsck@dev-disk-by\x2dpartlabel-var\x2dlib\x2dcontainers.service [Mount] Where=/var/lib/containers What=/dev/disk/by-partlabel/var-lib-containers Type=xfs Options=defaults,prjquota [Install] RequiredBy=local-fs.target enabled: true name: var-lib-containers.mount 1 Specify the root disk. 2 Specify the start of the partition in MiB. If the value is too small, the installation will fail. 3 Specify a minimum size for the partition of 500 GB to ensure adequate disk space for precached images. If the value is too small, the deployments after installation will fail. 15.2.1.2. Configuring a shared container directory between ostree stateroots when using GitOps ZTP When you are using the GitOps Zero Touch Provisioning (ZTP) workflow, you do the following procedure to create a separate disk partition on both the seed and target cluster and to share the /var/lib/containers partition. Important You must complete this procedure at installation time. Prerequisites You have installed Butane. For more information, see "Installing Butane". 
Procedure Create the storage.bu file: variant: fcos version: 1.3.0 storage: disks: - device: /dev/disk/by-path/pci-<root_disk> 1 wipe_table: false partitions: - label: var-lib-containers start_mib: <start_of_partition> 2 size_mib: <partition_size> 3 filesystems: - path: /var/lib/containers device: /dev/disk/by-partlabel/var-lib-containers format: xfs wipe_filesystem: true with_mount_unit: true mount_options: - defaults - prjquota 1 Specify the root disk. 2 Specify the start of the partition in MiB. If the value is too small, the installation will fail. 3 Specify a minimum size for the partition of 500 GB to ensure adequate disk space for precached images. If the value is too small, the deployments after installation will fail. Convert the storage.bu to an Ignition file by running the following command: USD butane storage.bu Example output {"ignition":{"version":"3.2.0"},"storage":{"disks":[{"device":"/dev/disk/by-path/pci-0000:00:17.0-ata-1.0","partitions":[{"label":"var-lib-containers","sizeMiB":0,"startMiB":250000}],"wipeTable":false}],"filesystems":[{"device":"/dev/disk/by-partlabel/var-lib-containers","format":"xfs","mountOptions":["defaults","prjquota"],"path":"/var/lib/containers","wipeFilesystem":true}]},"systemd":{"units":[{"contents":"# Generated by Butane\n[Unit]\nRequires=systemd-fsck@dev-disk-by\\x2dpartlabel-var\\x2dlib\\x2dcontainers.service\nAfter=systemd-fsck@dev-disk-by\\x2dpartlabel-var\\x2dlib\\x2dcontainers.service\n\n[Mount]\nWhere=/var/lib/containers\nWhat=/dev/disk/by-partlabel/var-lib-containers\nType=xfs\nOptions=defaults,prjquota\n\n[Install]\nRequiredBy=local-fs.target","enabled":true,"name":"var-lib-containers.mount"}]}} Copy the output into the .spec.clusters.nodes.ignitionConfigOverride field in the SiteConfig CR: [...] spec: clusters: - nodes: - hostName: <name> ignitionConfigOverride: '{"ignition":{"version":"3.2.0"},"storage":{"disks":[{"device":"/dev/disk/by-path/pci-0000:00:17.0-ata-1.0","partitions":[{"label":"var-lib-containers","sizeMiB":0,"startMiB":250000}],"wipeTable":false}],"filesystems":[{"device":"/dev/disk/by-partlabel/var-lib-containers","format":"xfs","mountOptions":["defaults","prjquota"],"path":"/var/lib/containers","wipeFilesystem":true}]},"systemd":{"units":[{"contents":"# Generated by Butane\n[Unit]\nRequires=systemd-fsck@dev-disk-by\\x2dpartlabel-var\\x2dlib\\x2dcontainers.service\nAfter=systemd-fsck@dev-disk-by\\x2dpartlabel-var\\x2dlib\\x2dcontainers.service\n\n[Mount]\nWhere=/var/lib/containers\nWhat=/dev/disk/by-partlabel/var-lib-containers\nType=xfs\nOptions=defaults,prjquota\n\n[Install]\nRequiredBy=local-fs.target","enabled":true,"name":"var-lib-containers.mount"}]}}' [...] 
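If you edit the SiteConfig CR by hand, it can be worth confirming that the embedded override is still valid JSON before committing the change. A sketch, assuming the yq (v4) and jq CLIs are installed and using an illustrative file name:

yq '.spec.clusters[0].nodes[0].ignitionConfigOverride' example-sno.yaml | jq .

If jq prints the parsed Ignition configuration without errors, the override is well-formed.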
Verification During or after installation, verify on the hub cluster that the BareMetalHost object shows the annotation by running the following command: USD oc get bmh -n my-sno-ns my-sno -ojson | jq '.metadata.annotations["bmac.agent-install.openshift.io/ignition-config-overrides"]' Example output "{\"ignition\":{\"version\":\"3.2.0\"},\"storage\":{\"disks\":[{\"device\":\"/dev/disk/by-path/pci-0000:00:17.0-ata-1.0\",\"partitions\":[{\"label\":\"var-lib-containers\",\"sizeMiB\":0,\"startMiB\":250000}],\"wipeTable\":false}],\"filesystems\":[{\"device\":\"/dev/disk/by-partlabel/var-lib-containers\",\"format\":\"xfs\",\"mountOptions\":[\"defaults\",\"prjquota\"],\"path\":\"/var/lib/containers\",\"wipeFilesystem\":true}]},\"systemd\":{\"units\":[{\"contents\":\"# Generated by Butane\\n[Unit]\\nRequires=systemd-fsck@dev-disk-by\\\\x2dpartlabel-var\\\\x2dlib\\\\x2dcontainers.service\\nAfter=systemd-fsck@dev-disk-by\\\\x2dpartlabel-var\\\\x2dlib\\\\x2dcontainers.service\\n\\n[Mount]\\nWhere=/var/lib/containers\\nWhat=/dev/disk/by-partlabel/var-lib-containers\\nType=xfs\\nOptions=defaults,prjquota\\n\\n[Install]\\nRequiredBy=local-fs.target\",\"enabled\":true,\"name\":\"var-lib-containers.mount\"}]}}" After installation, check the single-node OpenShift disk status by running the following commands: # lsblk Example output NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINTS sda 8:0 0 446.6G 0 disk ├─sda1 8:1 0 1M 0 part ├─sda2 8:2 0 127M 0 part ├─sda3 8:3 0 384M 0 part /boot ├─sda4 8:4 0 243.6G 0 part /var │ /sysroot/ostree/deploy/rhcos/var │ /usr │ /etc │ / │ /sysroot └─sda5 8:5 0 202.5G 0 part /var/lib/containers # df -h Example output Filesystem Size Used Avail Use% Mounted on devtmpfs 4.0M 0 4.0M 0% /dev tmpfs 126G 84K 126G 1% /dev/shm tmpfs 51G 93M 51G 1% /run /dev/sda4 244G 5.2G 239G 3% /sysroot tmpfs 126G 4.0K 126G 1% /tmp /dev/sda5 203G 119G 85G 59% /var/lib/containers /dev/sda3 350M 110M 218M 34% /boot tmpfs 26G 0 26G 0% /run/user/1000 Additional resources Installing Butane 15.2.2. Installing Operators for the image-based upgrade Prepare your clusters for the upgrade by installing the Lifecycle Agent and the OADP Operator. To install the OADP Operator with the non-GitOps method, see "Installing the OADP Operator". Additional resources Installing the OADP Operator About backup and snapshot locations and their secrets Creating a Backup CR Creating a Restore CR 15.2.2.1. Installing the Lifecycle Agent by using the CLI You can use the OpenShift CLI ( oc ) to install the Lifecycle Agent. Prerequisites You have installed the OpenShift CLI ( oc ). You have logged in as a user with cluster-admin privileges. 
Procedure Create a Namespace object YAML file for the Lifecycle Agent: apiVersion: v1 kind: Namespace metadata: name: openshift-lifecycle-agent annotations: workload.openshift.io/allowed: management Create the Namespace CR by running the following command: USD oc create -f <namespace_filename>.yaml Create an OperatorGroup object YAML file for the Lifecycle Agent: apiVersion: operators.coreos.com/v1 kind: OperatorGroup metadata: name: openshift-lifecycle-agent namespace: openshift-lifecycle-agent spec: targetNamespaces: - openshift-lifecycle-agent Create the OperatorGroup CR by running the following command: USD oc create -f <operatorgroup_filename>.yaml Create a Subscription CR for the Lifecycle Agent: apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: openshift-lifecycle-agent-subscription namespace: openshift-lifecycle-agent spec: channel: "stable" name: lifecycle-agent source: redhat-operators sourceNamespace: openshift-marketplace Create the Subscription CR by running the following command: USD oc create -f <subscription_filename>.yaml Verification To verify that the installation succeeded, inspect the CSV resource by running the following command: USD oc get csv -n openshift-lifecycle-agent Example output NAME DISPLAY VERSION REPLACES PHASE lifecycle-agent.v4.17.0 Openshift Lifecycle Agent 4.17.0 Succeeded Verify that the Lifecycle Agent is up and running by running the following command: USD oc get deploy -n openshift-lifecycle-agent Example output NAME READY UP-TO-DATE AVAILABLE AGE lifecycle-agent-controller-manager 1/1 1 1 14s 15.2.2.2. Installing the Lifecycle Agent by using the web console You can use the OpenShift Container Platform web console to install the Lifecycle Agent. Prerequisites You have logged in as a user with cluster-admin privileges. Procedure In the OpenShift Container Platform web console, navigate to Operators OperatorHub . Search for the Lifecycle Agent from the list of available Operators, and then click Install . On the Install Operator page, under A specific namespace on the cluster select openshift-lifecycle-agent . Click Install . Verification To confirm that the installation is successful: Click Operators Installed Operators . Ensure that the Lifecycle Agent is listed in the openshift-lifecycle-agent project with a Status of InstallSucceeded . Note During installation an Operator might display a Failed status. If the installation later succeeds with an InstallSucceeded message, you can ignore the Failed message. If the Operator is not installed successfully: Click Operators Installed Operators , and inspect the Operator Subscriptions and Install Plans tabs for any failure or errors under Status . Click Workloads Pods , and check the logs for pods in the openshift-lifecycle-agent project. 15.2.2.3. Installing the Lifecycle Agent with GitOps ZTP Install the Lifecycle Agent with GitOps Zero Touch Provisioning (ZTP) to do an image-based upgrade. 
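The following procedures assume that the reference source CRs are available locally. One way to obtain them is to extract the contents of the ztp-site-generate container image into a local directory (a sketch; the image tag is an assumption and must match your GitOps ZTP version):

podman run --log-driver=none --rm registry.redhat.io/openshift4/ztp-site-generate-rhel8:v4.16 extract /home/ztp --tar | tar x -C ./out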
Procedure Extract the following CRs from the ztp-site-generate container image and push them to the source-cr directory: Example LcaSubscriptionNS.yaml file apiVersion: v1 kind: Namespace metadata: name: openshift-lifecycle-agent annotations: workload.openshift.io/allowed: management ran.openshift.io/ztp-deploy-wave: "2" labels: kubernetes.io/metadata.name: openshift-lifecycle-agent Example LcaSubscriptionOperGroup.yaml file apiVersion: operators.coreos.com/v1 kind: OperatorGroup metadata: name: lifecycle-agent-operatorgroup namespace: openshift-lifecycle-agent annotations: ran.openshift.io/ztp-deploy-wave: "2" spec: targetNamespaces: - openshift-lifecycle-agent Example LcaSubscription.yaml file apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: lifecycle-agent namespace: openshift-lifecycle-agent annotations: ran.openshift.io/ztp-deploy-wave: "2" spec: channel: "stable" name: lifecycle-agent source: redhat-operators sourceNamespace: openshift-marketplace installPlanApproval: Manual status: state: AtLatestKnown Example directory structure ├── kustomization.yaml ├── sno │ ├── example-cnf.yaml │ ├── common-ranGen.yaml │ ├── group-du-sno-ranGen.yaml │ ├── group-du-sno-validator-ranGen.yaml │ └── ns.yaml ├── source-crs │ ├── LcaSubscriptionNS.yaml │ ├── LcaSubscriptionOperGroup.yaml │ ├── LcaSubscription.yaml Add the CRs to your common PolicyGenTemplate : apiVersion: ran.openshift.io/v1 kind: PolicyGenTemplate metadata: name: "example-common-latest" namespace: "ztp-common" spec: bindingRules: common: "true" du-profile: "latest" sourceFiles: - fileName: LcaSubscriptionNS.yaml policyName: "subscriptions-policy" - fileName: LcaSubscriptionOperGroup.yaml policyName: "subscriptions-policy" - fileName: LcaSubscription.yaml policyName: "subscriptions-policy" [...] 15.2.2.4. Installing and configuring the OADP Operator with GitOps ZTP Install and configure the OADP Operator with GitOps ZTP before starting the upgrade. 
Procedure Extract the following CRs from the ztp-site-generate container image and push them to the source-cr directory: Example OadpSubscriptionNS.yaml file apiVersion: v1 kind: Namespace metadata: name: openshift-adp annotations: ran.openshift.io/ztp-deploy-wave: "2" labels: kubernetes.io/metadata.name: openshift-adp Example OadpSubscriptionOperGroup.yaml file apiVersion: operators.coreos.com/v1 kind: OperatorGroup metadata: name: redhat-oadp-operator namespace: openshift-adp annotations: ran.openshift.io/ztp-deploy-wave: "2" spec: targetNamespaces: - openshift-adp Example OadpSubscription.yaml file apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: redhat-oadp-operator namespace: openshift-adp annotations: ran.openshift.io/ztp-deploy-wave: "2" spec: channel: stable-1.4 name: redhat-oadp-operator source: redhat-operators sourceNamespace: openshift-marketplace installPlanApproval: Manual status: state: AtLatestKnown Example OadpOperatorStatus.yaml file apiVersion: operators.coreos.com/v1 kind: Operator metadata: name: redhat-oadp-operator.openshift-adp annotations: ran.openshift.io/ztp-deploy-wave: "2" status: components: refs: - kind: Subscription namespace: openshift-adp conditions: - type: CatalogSourcesUnhealthy status: "False" - kind: InstallPlan namespace: openshift-adp conditions: - type: Installed status: "True" - kind: ClusterServiceVersion namespace: openshift-adp conditions: - type: Succeeded status: "True" reason: InstallSucceeded Example directory structure ├── kustomization.yaml ├── sno │ ├── example-cnf.yaml │ ├── common-ranGen.yaml │ ├── group-du-sno-ranGen.yaml │ ├── group-du-sno-validator-ranGen.yaml │ └── ns.yaml ├── source-crs │ ├── OadpSubscriptionNS.yaml │ ├── OadpSubscriptionOperGroup.yaml │ ├── OadpSubscription.yaml │ ├── OadpOperatorStatus.yaml Add the CRs to your common PolicyGenTemplate : apiVersion: ran.openshift.io/v1 kind: PolicyGenTemplate metadata: name: "example-common-latest" namespace: "ztp-common" spec: bindingRules: common: "true" du-profile: "latest" sourceFiles: - fileName: OadpSubscriptionNS.yaml policyName: "subscriptions-policy" - fileName: OadpSubscriptionOperGroup.yaml policyName: "subscriptions-policy" - fileName: OadpSubscription.yaml policyName: "subscriptions-policy" - fileName: OadpOperatorStatus.yaml policyName: "subscriptions-policy" [...] Create the DataProtectionApplication CR and the S3 secret only for the target cluster: Extract the following CRs from the ztp-site-generate container image and push them to the source-cr directory: Example DataProtectionApplication.yaml file apiVersion: oadp.openshift.io/v1alpha1 kind: DataProtectionApplication metadata: name: dataprotectionapplication namespace: openshift-adp annotations: ran.openshift.io/ztp-deploy-wave: "100" spec: configuration: restic: enable: false 1 velero: defaultPlugins: - aws - openshift resourceTimeout: 10m backupLocations: - velero: config: profile: "default" region: minio s3Url: USDurl insecureSkipTLSVerify: "true" s3ForcePathStyle: "true" provider: aws default: true credential: key: cloud name: cloud-credentials objectStorage: bucket: USDbucketName 2 prefix: USDprefixName 3 status: conditions: - reason: Complete status: "True" type: Reconciled 1 The spec.configuration.restic.enable field must be set to false for an image-based upgrade because persistent volume contents are retained and reused after the upgrade. 2 3 The bucket defines the bucket name that is created in S3 backend. 
The prefix defines the name of the subdirectory that will be automatically created in the bucket. The combination of bucket and prefix must be unique for each target cluster to avoid interference between them. To ensure a unique storage directory for each target cluster, you can use the RHACM hub template function, for example, prefix: {{hub .ManagedClusterName hub}} . Example OadpSecret.yaml file apiVersion: v1 kind: Secret metadata: name: cloud-credentials namespace: openshift-adp annotations: ran.openshift.io/ztp-deploy-wave: "100" type: Opaque Example OadpBackupStorageLocationStatus.yaml file apiVersion: velero.io/v1 kind: BackupStorageLocation metadata: namespace: openshift-adp annotations: ran.openshift.io/ztp-deploy-wave: "100" status: phase: Available The OadpBackupStorageLocationStatus.yaml CR verifies the availability of backup storage locations created by OADP. Add the CRs to your site PolicyGenTemplate with overrides: apiVersion: ran.openshift.io/v1 kind: PolicyGenTemplate metadata: name: "example-cnf" namespace: "ztp-site" spec: bindingRules: sites: "example-cnf" du-profile: "latest" mcp: "master" sourceFiles: ... - fileName: OadpSecret.yaml policyName: "config-policy" data: cloud: <your_credentials> 1 - fileName: DataProtectionApplication.yaml policyName: "config-policy" spec: backupLocations: - velero: config: region: minio s3Url: <your_S3_URL> 2 profile: "default" insecureSkipTLSVerify: "true" s3ForcePathStyle: "true" provider: aws default: true credential: key: cloud name: cloud-credentials objectStorage: bucket: <your_bucket_name> 3 prefix: <cluster_name> 4 - fileName: OadpBackupStorageLocationStatus.yaml policyName: "config-policy" 1 Specify your credentials for your S3 storage backend. 2 Specify the URL for your S3-compatible bucket. 3 4 The bucket defines the bucket name that is created in S3 backend. The prefix defines the name of the subdirectory that will be automatically created in the bucket . The combination of bucket and prefix must be unique for each target cluster to avoid interference between them. To ensure a unique storage directory for each target cluster, you can use the RHACM hub template function, for example, prefix: {{hub .ManagedClusterName hub}} . 15.2.3. Generating a seed image for the image-based upgrade with the Lifecycle Agent Use the Lifecycle Agent to generate the seed image with the SeedGenerator custom resource (CR). 15.2.3.1. Seed image configuration The seed image targets a set of single-node OpenShift clusters with the same hardware and similar configuration. This means that the seed image must have all of the components and configuration that the seed cluster shares with the target clusters. Therefore, the seed image generated from the seed cluster cannot contain any cluster-specific configuration. The following table lists the components, resources, and configurations that you must and must not include in your seed image: Table 15.2. 
Seed image configuration Cluster configuration Include in seed image Performance profile Yes MachineConfig resources for the target cluster Yes IP version [1] Yes Set of Day 2 Operators, including the Lifecycle Agent and the OADP Operator Yes Disconnected registry configuration [2] Yes Valid proxy configuration [3] Yes FIPS configuration Yes Dedicated partition on the primary disk for container storage that matches the size of the target clusters Yes Local volumes StorageClass used in LocalVolume for LSO LocalVolume for LSO LVMCluster CR for LVMS No OADP DataProtectionApplication CR No Dual-stack networking is not supported in this release. If the seed cluster is installed in a disconnected environment, the target clusters must also be installed in a disconnected environment. The proxy configuration on the seed and target clusters does not have to match. 15.2.3.1.1. Seed image configuration using the RAN DU profile The following table lists the components, resources, and configurations that you must and must not include in the seed image when using the RAN DU profile: Table 15.3. Seed image configuration with RAN DU profile Resource Include in seed image All extra manifests that are applied as part of Day 0 installation Yes All Day 2 Operator subscriptions Yes DisableOLMPprof.yaml Yes TunedPerformancePatch.yaml Yes PerformanceProfile.yaml Yes SriovOperatorConfig.yaml Yes DisableSnoNetworkDiag.yaml Yes StorageClass.yaml No, if it is used in StorageLV.yaml StorageLV.yaml No StorageLVMCluster.yaml No Table 15.4. Seed image configuration with RAN DU profile for extra manifests Resource Apply as extra manifest ClusterLogForwarder.yaml Yes ReduceMonitoringFootprint.yaml Yes SriovFecClusterConfig.yaml Yes PtpOperatorConfigForEvent.yaml Yes DefaultCatsrc.yaml Yes PtpConfig.yaml If the interfaces of the target cluster are common with the seed cluster, you can include them in the seed image. Otherwise, apply it as extra manifests. SriovNetwork.yaml SriovNetworkNodePolicy.yaml If the configuration, including namespaces, is exactly the same on both the seed and target cluster, you can include them in the seed image. Otherwise, apply them as extra manifests. 15.2.3.2. Generating a seed image with the Lifecycle Agent Use the Lifecycle Agent to generate a seed image from a managed cluster. The Operator checks for required system configurations, performs any necessary system cleanup before generating the seed image, and launches the image generation. The seed image generation includes the following tasks: Stopping cluster Operators Preparing the seed image configuration Generating and pushing the seed image to the image repository specified in the SeedGenerator CR Restoring cluster Operators Expiring seed cluster certificates Generating new certificates for the seed cluster Restoring and updating the SeedGenerator CR on the seed cluster Prerequisites RHACM and multicluster engine for Kubernetes Operator are not installed on the seed cluster. You have configured a shared container directory on the seed cluster. You have installed the minimum version of the OADP Operator and the Lifecycle Agent on the seed cluster. Ensure that persistent volumes are not configured on the seed cluster. Ensure that the LocalVolume CR does not exist on the seed cluster if the Local Storage Operator is used. Ensure that the LVMCluster CR does not exist on the seed cluster if LVM Storage is used. Ensure that the DataProtectionApplication CR does not exist on the seed cluster if OADP is used. 
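You can spot-check several of these prerequisites from the CLI on the seed cluster before you start (a minimal sketch; each command is expected to return no resources, and a query simply fails if the corresponding CRD is not installed at all):

oc get pv
oc get localvolume -A
oc get lvmcluster -A
oc get dataprotectionapplication -A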
Procedure Detach the managed cluster from the hub to delete any RHACM-specific resources from the seed cluster that must not be in the seed image: Manually detach the seed cluster by running the following command: USD oc delete managedcluster sno-worker-example Wait until the managed cluster is removed. After the cluster is removed, create the proper SeedGenerator CR. The Lifecycle Agent cleans up the RHACM artifacts. If you are using GitOps ZTP, detach your cluster by removing the seed cluster's SiteConfig CR from the kustomization.yaml . If you have a kustomization.yaml file that references multiple SiteConfig CRs, remove your seed cluster's SiteConfig CR from the kustomization.yaml : apiVersion: kustomize.config.k8s.io/v1beta1 kind: Kustomization generators: #- example-seed-sno1.yaml - example-target-sno2.yaml - example-target-sno3.yaml If you have a kustomization.yaml that references one SiteConfig CR, remove your seed cluster's SiteConfig CR from the kustomization.yaml and add the generators: {} line: apiVersion: kustomize.config.k8s.io/v1beta1 kind: Kustomization generators: {} Commit the kustomization.yaml changes in your Git repository and push the changes to your repository. The ArgoCD pipeline detects the changes and removes the managed cluster. Create the Secret object so that you can push the seed image to your registry. Create the authentication file by running the following commands: USD MY_USER=myuserid USD AUTHFILE=/tmp/my-auth.json USD podman login --authfile USD{AUTHFILE} -u USD{MY_USER} quay.io/USD{MY_USER} USD base64 -w 0 USD{AUTHFILE} ; echo Copy the output into the seedAuth field in the Secret YAML file named seedgen in the openshift-lifecycle-agent namespace: apiVersion: v1 kind: Secret metadata: name: seedgen 1 namespace: openshift-lifecycle-agent type: Opaque data: seedAuth: <encoded_AUTHFILE> 2 1 The Secret resource must have the name: seedgen and namespace: openshift-lifecycle-agent fields. 2 Specifies a base64-encoded authfile for write-access to the registry for pushing the generated seed images. Apply the Secret by running the following command: USD oc apply -f secretseedgenerator.yaml Create the SeedGenerator CR: apiVersion: lca.openshift.io/v1 kind: SeedGenerator metadata: name: seedimage 1 spec: seedImage: <seed_container_image> 2 1 The SeedGenerator CR must be named seedimage . 2 Specify the container image URL, for example, quay.io/example/seed-container-image:<tag> . It is recommended to use the <seed_cluster_name>:<ocp_version> format. Generate the seed image by running the following command: USD oc apply -f seedgenerator.yaml Important The cluster reboots and loses API capabilities while the Lifecycle Agent generates the seed image. Applying the SeedGenerator CR stops the kubelet and the CRI-O operations, then it starts the image generation. If you want to generate more seed images, you must provision a new seed cluster with the version that you want to generate a seed image from. 
Verification After the cluster recovers and it is available, you can check the status of the SeedGenerator CR by running the following command: USD oc get seedgenerator -o yaml Example output status: conditions: - lastTransitionTime: "2024-02-13T21:24:26Z" message: Seed Generation completed observedGeneration: 1 reason: Completed status: "False" type: SeedGenInProgress - lastTransitionTime: "2024-02-13T21:24:26Z" message: Seed Generation completed observedGeneration: 1 reason: Completed status: "True" type: SeedGenCompleted 1 observedGeneration: 1 1 The seed image generation is complete. Additional resources Configuring a shared container partition between ostree stateroots Configuring a shared container partition between ostree stateroots when using GitOps ZTP 15.2.4. Creating ConfigMap objects for the image-based upgrade with the Lifecycle Agent The Lifecycle Agent needs all your OADP resources, extra manifests, and custom catalog sources wrapped in a ConfigMap object to process them for the image-based upgrade. 15.2.4.1. Creating OADP ConfigMap objects for the image-based upgrade with Lifecycle Agent Create your OADP resources that are used to back up and restore your resources during the upgrade. Prerequisites You have generated a seed image from a compatible seed cluster. You have created OADP backup and restore resources. You have created a separate partition on the target cluster for the container images that is shared between stateroots. For more information, see "Configuring a shared container partition for the image-based upgrade". You have deployed a version of Lifecycle Agent that is compatible with the version used with the seed image. You have installed the OADP Operator, the DataProtectionApplication CR, and its secret on the target cluster. You have created an S3-compatible storage solution and a ready-to-use bucket with proper credentials configured. For more information, see "About installing OADP". Procedure Create the OADP Backup and Restore CRs for platform artifacts in the same namespace where the OADP Operator is installed, which is openshift-adp . 
If the target cluster is managed by RHACM, add the following YAML file for backing up and restoring RHACM artifacts: PlatformBackupRestore.yaml for RHACM apiVersion: velero.io/v1 kind: Backup metadata: name: acm-klusterlet annotations: lca.openshift.io/apply-label: "apps/v1/deployments/open-cluster-management-agent/klusterlet,v1/secrets/open-cluster-management-agent/bootstrap-hub-kubeconfig,rbac.authorization.k8s.io/v1/clusterroles/klusterlet,v1/serviceaccounts/open-cluster-management-agent/klusterlet,scheduling.k8s.io/v1/priorityclasses/klusterlet-critical,rbac.authorization.k8s.io/v1/clusterroles/open-cluster-management:klusterlet-admin-aggregate-clusterrole,rbac.authorization.k8s.io/v1/clusterrolebindings/klusterlet,operator.open-cluster-management.io/v1/klusterlets/klusterlet,apiextensions.k8s.io/v1/customresourcedefinitions/klusterlets.operator.open-cluster-management.io,v1/secrets/open-cluster-management-agent/open-cluster-management-image-pull-credentials" 1 labels: velero.io/storage-location: default namespace: openshift-adp spec: includedNamespaces: - open-cluster-management-agent includedClusterScopedResources: - klusterlets.operator.open-cluster-management.io - clusterroles.rbac.authorization.k8s.io - clusterrolebindings.rbac.authorization.k8s.io - priorityclasses.scheduling.k8s.io includedNamespaceScopedResources: - deployments - serviceaccounts - secrets excludedNamespaceScopedResources: [] --- apiVersion: velero.io/v1 kind: Restore metadata: name: acm-klusterlet namespace: openshift-adp labels: velero.io/storage-location: default annotations: lca.openshift.io/apply-wave: "1" spec: backupName: acm-klusterlet 1 If your multiclusterHub CR does not have .spec.imagePullSecret defined and the secret does not exist on the open-cluster-management-agent namespace in your hub cluster, remove v1/secrets/open-cluster-management-agent/open-cluster-management-image-pull-credentials . If you created persistent volumes on your cluster through LVM Storage, add the following YAML file for LVM Storage artifacts: PlatformBackupRestoreLvms.yaml for LVM Storage apiVersion: velero.io/v1 kind: Backup metadata: labels: velero.io/storage-location: default name: lvmcluster namespace: openshift-adp spec: includedNamespaces: - openshift-storage includedNamespaceScopedResources: - lvmclusters - lvmvolumegroups - lvmvolumegroupnodestatuses --- apiVersion: velero.io/v1 kind: Restore metadata: name: lvmcluster namespace: openshift-adp labels: velero.io/storage-location: default annotations: lca.openshift.io/apply-wave: "2" 1 spec: backupName: lvmcluster 1 The lca.openshift.io/apply-wave value must be lower than the values specified in the application Restore CRs. If you need to restore applications after the upgrade, create the OADP Backup and Restore CRs for your application in the openshift-adp namespace. Create the OADP CRs for cluster-scoped application artifacts in the openshift-adp namespace. 
Example OADP CRs for cluster-scoped application artifacts for LSO and LVM Storage apiVersion: velero.io/v1 kind: Backup metadata: annotations: lca.openshift.io/apply-label: "apiextensions.k8s.io/v1/customresourcedefinitions/test.example.com,security.openshift.io/v1/securitycontextconstraints/test,rbac.authorization.k8s.io/v1/clusterroles/test-role,rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:scc:test" 1 name: backup-app-cluster-resources labels: velero.io/storage-location: default namespace: openshift-adp spec: includedClusterScopedResources: - customresourcedefinitions - securitycontextconstraints - clusterrolebindings - clusterroles excludedClusterScopedResources: - Namespace --- apiVersion: velero.io/v1 kind: Restore metadata: name: test-app-cluster-resources namespace: openshift-adp labels: velero.io/storage-location: default annotations: lca.openshift.io/apply-wave: "3" 2 spec: backupName: backup-app-cluster-resources 1 Replace the example resource name with your actual resources. 2 The lca.openshift.io/apply-wave value must be higher than the value in the platform Restore CRs and lower than the value in the application namespace-scoped Restore CR. Create the OADP CRs for your namespace-scoped application artifacts. Example OADP CRs namespace-scoped application artifacts when LSO is used apiVersion: velero.io/v1 kind: Backup metadata: labels: velero.io/storage-location: default name: backup-app namespace: openshift-adp spec: includedNamespaces: - test includedNamespaceScopedResources: - secrets - persistentvolumeclaims - deployments - statefulsets - configmaps - cronjobs - services - job - poddisruptionbudgets - <application_custom_resources> 1 excludedClusterScopedResources: - persistentVolumes --- apiVersion: velero.io/v1 kind: Restore metadata: name: test-app namespace: openshift-adp labels: velero.io/storage-location: default annotations: lca.openshift.io/apply-wave: "4" spec: backupName: backup-app 1 Define custom resources for your application. Example OADP CRs namespace-scoped application artifacts when LVM Storage is used apiVersion: velero.io/v1 kind: Backup metadata: labels: velero.io/storage-location: default name: backup-app namespace: openshift-adp spec: includedNamespaces: - test includedNamespaceScopedResources: - secrets - persistentvolumeclaims - deployments - statefulsets - configmaps - cronjobs - services - job - poddisruptionbudgets - <application_custom_resources> 1 includedClusterScopedResources: - persistentVolumes 2 - logicalvolumes.topolvm.io 3 - volumesnapshotcontents 4 --- apiVersion: velero.io/v1 kind: Restore metadata: name: test-app namespace: openshift-adp labels: velero.io/storage-location: default annotations: lca.openshift.io/apply-wave: "4" spec: backupName: backup-app restorePVs: true restoreStatus: includedResources: - logicalvolumes 5 1 Define custom resources for your application. 2 Required field. 3 Required field 4 Optional if you use LVM Storage volume snapshots. 5 Required field. Important The same version of the applications must function on both the current and the target release of OpenShift Container Platform. 
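The next step wraps all of the Backup and Restore CRs above into a single ConfigMap key, so they are usually concatenated into one multi-document YAML file first. A sketch with illustrative file names (the echo '---' keeps the documents separated):

for f in platform-backup-restore.yaml app-cluster-resources.yaml app-backup-restore.yaml; do
  cat "$f"
  echo '---'
done > example-oadp-resources.yaml

The resulting file is what you pass as <path_to_oadp_crs> in the following command.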
Create the ConfigMap object for your OADP CRs by running the following command: USD oc create configmap oadp-cm-example --from-file=example-oadp-resources.yaml=<path_to_oadp_crs> -n openshift-adp Patch the ImageBasedUpgrade CR by running the following command: USD oc patch imagebasedupgrades.lca.openshift.io upgrade \ -p='{"spec": {"oadpContent": [{"name": "oadp-cm-example", "namespace": "openshift-adp"}]}}' \ --type=merge -n openshift-lifecycle-agent Additional resources Configuring a shared container partition between ostree stateroots About installing OADP 15.2.4.2. Creating ConfigMap objects of extra manifests for the image-based upgrade with Lifecycle Agent Create additional manifests that you want to apply to the target cluster. Note If you add more than one extra manifest, and the manifests must be applied in a specific order, you must prefix the filenames of the manifests with numbers that represent the required order. For example, 00-namespace.yaml , 01-sriov-extra-manifest.yaml , and so on. Procedure Create a YAML file that contains your extra manifests, such as SR-IOV. Example SR-IOV resources apiVersion: sriovnetwork.openshift.io/v1 kind: SriovNetworkNodePolicy metadata: name: "example-sriov-node-policy" namespace: openshift-sriov-network-operator spec: deviceType: vfio-pci isRdma: false nicSelector: pfNames: [ens1f0] nodeSelector: node-role.kubernetes.io/master: "" mtu: 1500 numVfs: 8 priority: 99 resourceName: example-sriov-node-policy --- apiVersion: sriovnetwork.openshift.io/v1 kind: SriovNetwork metadata: name: "example-sriov-network" namespace: openshift-sriov-network-operator spec: ipam: |- { } linkState: auto networkNamespace: sriov-namespace resourceName: example-sriov-node-policy spoofChk: "on" trust: "off" Create the ConfigMap object by running the following command: USD oc create configmap example-extra-manifests-cm --from-file=example-extra-manifests.yaml=<path_to_extramanifest> -n openshift-lifecycle-agent Patch the ImageBasedUpgrade CR by running the following command: USD oc patch imagebasedupgrades.lca.openshift.io upgrade \ -p='{"spec": {"extraManifests": [{"name": "example-extra-manifests-cm", "namespace": "openshift-lifecycle-agent"}]}}' \ --type=merge -n openshift-lifecycle-agent 15.2.4.3. Creating ConfigMap objects of custom catalog sources for the image-based upgrade with Lifecycle Agent You can keep your custom catalog sources after the upgrade by generating a ConfigMap object for your catalog sources and adding them to the spec.extraManifest field in the ImageBasedUpgrade CR. For more information about catalog sources, see "Catalog source". Procedure Create a YAML file that contains the CatalogSource CR: apiVersion: operators.coreos.com/v1 kind: CatalogSource metadata: name: example-catalogsources namespace: openshift-marketplace spec: sourceType: grpc displayName: disconnected-redhat-operators image: quay.io/example-org/example-catalog:v1 Create the ConfigMap object by running the following command: USD oc create configmap example-catalogsources-cm --from-file=example-catalogsources.yaml=<path_to_catalogsource_cr> -n openshift-lifecycle-agent Patch the ImageBasedUpgrade CR by running the following command: USD oc patch imagebasedupgrades.lca.openshift.io upgrade \ -p='{"spec": {"extraManifests": [{"name": "example-catalogsources-cm", "namespace": "openshift-lifecycle-agent"}]}}' \ --type=merge -n openshift-lifecycle-agent Additional resources Catalog source Performing an image-based upgrade for single-node OpenShift with Lifecycle Agent 15.2.5. 
Creating ConfigMap objects for the image-based upgrade with the Lifecycle Agent using GitOps ZTP Create your OADP resources, extra manifests, and custom catalog sources wrapped in a ConfigMap object to prepare for the image-based upgrade. 15.2.5.1. Creating OADP resources for the image-based upgrade with GitOps ZTP Prepare your OADP resources to restore your application after an upgrade. Prerequisites You have provisioned one or more managed clusters with GitOps ZTP. You have logged in as a user with cluster-admin privileges. You have generated a seed image from a compatible seed cluster. You have created a separate partition on the target cluster for the container images that is shared between stateroots. For more information, see "Configuring a shared container partition between ostree stateroots when using GitOps ZTP". You have deployed a version of Lifecycle Agent that is compatible with the version used with the seed image. You have installed the OADP Operator, the DataProtectionApplication CR, and its secret on the target cluster. You have created an S3-compatible storage solution and a ready-to-use bucket with proper credentials configured. For more information, see "Installing and configuring the OADP Operator with GitOps ZTP". The openshift-adp namespace for the OADP ConfigMap object must exist on all managed clusters and the hub for the OADP ConfigMap to be generated and copied to the clusters. Procedure Ensure that your Git repository that you use with the ArgoCD policies application contains the following directory structure: ├── source-crs/ │ ├── ibu/ │ │ ├── ImageBasedUpgrade.yaml │ │ ├── PlatformBackupRestore.yaml │ │ ├── PlatformBackupRestoreLvms.yaml │ │ ├── PlatformBackupRestoreWithIBGU.yaml ├── ... ├── kustomization.yaml The source-crs/ibu/PlatformBackupRestoreWithIBGU.yaml file is provided in the ZTP container image. 
PlatformBackupRestoreWithIBGU.yaml apiVersion: velero.io/v1 kind: Backup metadata: name: acm-klusterlet annotations: lca.openshift.io/apply-label: "apps/v1/deployments/open-cluster-management-agent/klusterlet,v1/secrets/open-cluster-management-agent/bootstrap-hub-kubeconfig,rbac.authorization.k8s.io/v1/clusterroles/klusterlet,v1/serviceaccounts/open-cluster-management-agent/klusterlet,scheduling.k8s.io/v1/priorityclasses/klusterlet-critical,rbac.authorization.k8s.io/v1/clusterroles/open-cluster-management:klusterlet-work:ibu-role,rbac.authorization.k8s.io/v1/clusterroles/open-cluster-management:klusterlet-admin-aggregate-clusterrole,rbac.authorization.k8s.io/v1/clusterrolebindings/klusterlet,operator.open-cluster-management.io/v1/klusterlets/klusterlet,apiextensions.k8s.io/v1/customresourcedefinitions/klusterlets.operator.open-cluster-management.io,v1/secrets/open-cluster-management-agent/open-cluster-management-image-pull-credentials" 1 labels: velero.io/storage-location: default namespace: openshift-adp spec: includedNamespaces: - open-cluster-management-agent includedClusterScopedResources: - klusterlets.operator.open-cluster-management.io - clusterroles.rbac.authorization.k8s.io - clusterrolebindings.rbac.authorization.k8s.io - priorityclasses.scheduling.k8s.io includedNamespaceScopedResources: - deployments - serviceaccounts - secrets excludedNamespaceScopedResources: [] --- apiVersion: velero.io/v1 kind: Restore metadata: name: acm-klusterlet namespace: openshift-adp labels: velero.io/storage-location: default annotations: lca.openshift.io/apply-wave: "1" spec: backupName: acm-klusterlet 1 If your multiclusterHub CR does not have .spec.imagePullSecret defined and the secret does not exist on the open-cluster-management-agent namespace in your hub cluster, remove v1/secrets/open-cluster-management-agent/open-cluster-management-image-pull-credentials . Note If you perform the image-based upgrade directly on managed clusters, use the PlatformBackupRestore.yaml file. If you use LVM Storage to create persistent volumes, you can use the source-crs/ibu/PlatformBackupRestoreLvms.yaml provided in the ZTP container image to back up your LVM Storage resources. PlatformBackupRestoreLvms.yaml apiVersion: velero.io/v1 kind: Backup metadata: labels: velero.io/storage-location: default name: lvmcluster namespace: openshift-adp spec: includedNamespaces: - openshift-storage includedNamespaceScopedResources: - lvmclusters - lvmvolumegroups - lvmvolumegroupnodestatuses --- apiVersion: velero.io/v1 kind: Restore metadata: name: lvmcluster namespace: openshift-adp labels: velero.io/storage-location: default annotations: lca.openshift.io/apply-wave: "2" 1 spec: backupName: lvmcluster 1 The lca.openshift.io/apply-wave value must be lower than the values specified in the application Restore CRs. 
If you need to restore applications after the upgrade, create the OADP Backup and Restore CRs for your application in the openshift-adp namespace: Create the OADP CRs for cluster-scoped application artifacts in the openshift-adp namespace: Example OADP CRs for cluster-scoped application artifacts for LSO and LVM Storage apiVersion: velero.io/v1 kind: Backup metadata: annotations: lca.openshift.io/apply-label: "apiextensions.k8s.io/v1/customresourcedefinitions/test.example.com,security.openshift.io/v1/securitycontextconstraints/test,rbac.authorization.k8s.io/v1/clusterroles/test-role,rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:scc:test" 1 name: backup-app-cluster-resources labels: velero.io/storage-location: default namespace: openshift-adp spec: includedClusterScopedResources: - customresourcedefinitions - securitycontextconstraints - clusterrolebindings - clusterroles excludedClusterScopedResources: - Namespace --- apiVersion: velero.io/v1 kind: Restore metadata: name: test-app-cluster-resources namespace: openshift-adp labels: velero.io/storage-location: default annotations: lca.openshift.io/apply-wave: "3" 2 spec: backupName: backup-app-cluster-resources 1 Replace the example resource name with your actual resources. 2 The lca.openshift.io/apply-wave value must be higher than the value in the platform Restore CRs and lower than the value in the application namespace-scoped Restore CR. Create the OADP CRs for your namespace-scoped application artifacts in the source-crs/custom-crs directory: Example OADP CRs namespace-scoped application artifacts when LSO is used apiVersion: velero.io/v1 kind: Backup metadata: labels: velero.io/storage-location: default name: backup-app namespace: openshift-adp spec: includedNamespaces: - test includedNamespaceScopedResources: - secrets - persistentvolumeclaims - deployments - statefulsets - configmaps - cronjobs - services - job - poddisruptionbudgets - <application_custom_resources> 1 excludedClusterScopedResources: - persistentVolumes --- apiVersion: velero.io/v1 kind: Restore metadata: name: test-app namespace: openshift-adp labels: velero.io/storage-location: default annotations: lca.openshift.io/apply-wave: "4" spec: backupName: backup-app 1 Define custom resources for your application. Example OADP CRs namespace-scoped application artifacts when LVM Storage is used apiVersion: velero.io/v1 kind: Backup metadata: labels: velero.io/storage-location: default name: backup-app namespace: openshift-adp spec: includedNamespaces: - test includedNamespaceScopedResources: - secrets - persistentvolumeclaims - deployments - statefulsets - configmaps - cronjobs - services - job - poddisruptionbudgets - <application_custom_resources> 1 includedClusterScopedResources: - persistentVolumes 2 - logicalvolumes.topolvm.io 3 - volumesnapshotcontents 4 --- apiVersion: velero.io/v1 kind: Restore metadata: name: test-app namespace: openshift-adp labels: velero.io/storage-location: default annotations: lca.openshift.io/apply-wave: "4" spec: backupName: backup-app restorePVs: true restoreStatus: includedResources: - logicalvolumes 5 1 Define custom resources for your application. 2 Required field. 3 Required field 4 Optional if you use LVM Storage volume snapshots. 5 Required field. Important The same version of the applications must function on both the current and the target release of OpenShift Container Platform. 
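If you save the application Backup and Restore CRs as files under the source-crs/custom-crs/ directory, the repository layout extends the tree shown earlier. For example (the filenames are illustrative and must match the entries you reference from the kustomization.yaml in the next step): ├── source-crs/ │ ├── ibu/ │ │ ├── PlatformBackupRestoreWithIBGU.yaml │ ├── custom-crs/ │ │ ├── ApplicationClusterScopedBackupRestore.yaml │ │ ├── ApplicationApplicationBackupRestoreLso.yaml ├── ... ├── kustomization.yaml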
Create a kustomization.yaml with the following content: apiVersion: kustomize.config.k8s.io/v1beta1 kind: Kustomization configMapGenerator: 1 - files: - source-crs/ibu/PlatformBackupRestoreWithIBGU.yaml #- source-crs/custom-crs/ApplicationClusterScopedBackupRestore.yaml #- source-crs/custom-crs/ApplicationApplicationBackupRestoreLso.yaml name: oadp-cm namespace: openshift-adp 2 generatorOptions: disableNameSuffixHash: true 1 Creates the oadp-cm ConfigMap object on the hub cluster with Backup and Restore CRs. 2 The namespace must exist on all managed clusters and the hub for the OADP ConfigMap to be generated and copied to the clusters. Push the changes to your Git repository. Additional resources Configuring a shared container partition between ostree stateroots when using GitOps ZTP Installing and configuring the OADP Operator with GitOps ZTP 15.2.5.2. Labeling extra manifests for the image-based upgrade with GitOps ZTP Label your extra manifests so that the Lifecycle Agent can extract resources that are labeled with the lca.openshift.io/target-ocp-version: <target_version> label. Prerequisites You have provisioned one or more managed clusters with GitOps ZTP. You have logged in as a user with cluster-admin privileges. You have generated a seed image from a compatible seed cluster. You have created a separate partition on the target cluster for the container images that is shared between stateroots. For more information, see "Configuring a shared container directory between ostree stateroots when using GitOps ZTP". You have deployed a version of Lifecycle Agent that is compatible with the version used with the seed image. Procedure Label your required extra manifests with the lca.openshift.io/target-ocp-version: <target_version> label in your existing site PolicyGenTemplate CR: apiVersion: ran.openshift.io/v1 kind: PolicyGenTemplate metadata: name: example-sno spec: bindingRules: sites: "example-sno" du-profile: "4.15" mcp: "master" sourceFiles: - fileName: SriovNetwork.yaml policyName: "config-policy" metadata: name: "sriov-nw-du-fh" labels: lca.openshift.io/target-ocp-version: "4.15" 1 spec: resourceName: du_fh vlan: 140 - fileName: SriovNetworkNodePolicy.yaml policyName: "config-policy" metadata: name: "sriov-nnp-du-fh" labels: lca.openshift.io/target-ocp-version: "4.15" spec: deviceType: netdevice isRdma: false nicSelector: pfNames: ["ens5f0"] numVfs: 8 priority: 10 resourceName: du_fh - fileName: SriovNetwork.yaml policyName: "config-policy" metadata: name: "sriov-nw-du-mh" labels: lca.openshift.io/target-ocp-version: "4.15" spec: resourceName: du_mh vlan: 150 - fileName: SriovNetworkNodePolicy.yaml policyName: "config-policy" metadata: name: "sriov-nnp-du-mh" labels: lca.openshift.io/target-ocp-version: "4.15" spec: deviceType: vfio-pci isRdma: false nicSelector: pfNames: ["ens7f0"] numVfs: 8 priority: 10 resourceName: du_mh - fileName: DefaultCatsrc.yaml 2 policyName: "config-policy" metadata: name: default-cat-source namespace: openshift-marketplace labels: lca.openshift.io/target-ocp-version: "4.15" spec: displayName: default-cat-source image: quay.io/example-org/example-catalog:v1 1 Ensure that the lca.openshift.io/target-ocp-version label matches either the y-stream or the z-stream of the target OpenShift Container Platform version that is specified in the spec.seedImageRef.version field of the ImageBasedUpgrade CR. The Lifecycle Agent only applies the CRs that match the specified version. 2 If you do not want to use custom catalog sources, remove this entry. 
Push the changes to your Git repository. Additional resources Configuring a shared container partition between ostree stateroots when using GitOps ZTP Performing an image-based upgrade for single-node OpenShift clusters using GitOps ZTP 15.2.6. Configuring the automatic image cleanup of the container storage disk Configure when the Lifecycle Agent cleans up unpinned images in the Prep stage by setting a minimum threshold for available storage space through annotations. The default container storage disk usage threshold is 50%. The Lifecycle Agent does not delete images that are pinned in CRI-O or are currently used. The Operator selects the images for deletion by starting with dangling images and then sorting the images from oldest to newest that is determined by the image Created timestamp. 15.2.6.1. Configuring the automatic image cleanup of the container storage disk Configure the minimum threshold for available storage space through annotations. Prerequisites You have created an ImageBasedUpgrade CR. Procedure Increase the threshold to 65% by running the following command: USD oc -n openshift-lifecycle-agent annotate ibu upgrade image-cleanup.lca.openshift.io/disk-usage-threshold-percent='65' (Optional) Remove the threshold override by running the following command: USD oc -n openshift-lifecycle-agent annotate ibu upgrade image-cleanup.lca.openshift.io/disk-usage-threshold-percent- 15.2.6.2. Disable the automatic image cleanup of the container storage disk Disable the automatic image cleanup threshold. Procedure Disable the automatic image cleanup by running the following command: USD oc -n openshift-lifecycle-agent annotate ibu upgrade image-cleanup.lca.openshift.io/on-prep='Disabled' (Optional) Enable automatic image cleanup again by running the following command: USD oc -n openshift-lifecycle-agent annotate ibu upgrade image-cleanup.lca.openshift.io/on-prep- 15.3. Performing an image-based upgrade for single-node OpenShift clusters with the Lifecycle Agent You can use the Lifecycle Agent to do a manual image-based upgrade of a single-node OpenShift cluster. When you deploy the Lifecycle Agent on a cluster, an ImageBasedUpgrade CR is automatically created. You update this CR to specify the image repository of the seed image and to move through the different stages. 15.3.1. Moving to the Prep stage of the image-based upgrade with Lifecycle Agent When you deploy the Lifecycle Agent on a cluster, an ImageBasedUpgrade custom resource (CR) is automatically created. After you created all the resources that you need during the upgrade, you can move on to the Prep stage. For more information, see the "Creating ConfigMap objects for the image-based upgrade with Lifecycle Agent" section. Note In a disconnected environment, if the seed cluster's release image registry is different from the target cluster's release image registry, you must create an ImageDigestMirrorSet (IDMS) resource to configure alternative mirrored repository locations. For more information, see "Configuring image registry repository mirroring". You can retrieve the release registry used in the seed image by running the following command: USD skopeo inspect docker://<imagename> | jq -r '.Labels."com.openshift.lifecycle-agent.seed_cluster_info" | fromjson | .release_registry' Prerequisites You have created resources to back up and restore your clusters. 
Procedure Check that you have patched your ImageBasedUpgrade CR: apiVersion: lca.openshift.io/v1 kind: ImageBasedUpgrade metadata: name: upgrade spec: stage: Idle seedImageRef: version: 4.15.2 1 image: <seed_container_image> 2 pullSecretRef: <seed_pull_secret> 3 autoRollbackOnFailure: {} # initMonitorTimeoutSeconds: 1800 4 extraManifests: 5 - name: example-extra-manifests-cm namespace: openshift-lifecycle-agent - name: example-catalogsources-cm namespace: openshift-lifecycle-agent oadpContent: 6 - name: oadp-cm-example namespace: openshift-adp 1 Target platform version. The value must match the version of the seed image. 2 Repository where the target cluster can pull the seed image from. 3 Reference to a secret with credentials to pull container images if the images are in a private registry. 4 Optional: Time frame in seconds to roll back if the upgrade does not complete within that time frame after the first reboot. If not defined or set to 0 , the default value of 1800 seconds (30 minutes) is used. 5 Optional: List of ConfigMap resources that contain your custom catalog sources to retain after the upgrade and your extra manifests to apply to the target cluster that are not part of the seed image. 6 List of ConfigMap resources that contain the OADP Backup and Restore CRs. To start the Prep stage, change the value of the stage field to Prep in the ImageBasedUpgrade CR by running the following command: USD oc patch imagebasedupgrades.lca.openshift.io upgrade -p='{"spec": {"stage": "Prep"}}' --type=merge -n openshift-lifecycle-agent If you provide ConfigMap objects for OADP resources and extra manifests, Lifecycle Agent validates the specified ConfigMap objects during the Prep stage. You might encounter the following issues: Validation warnings or errors if the Lifecycle Agent detects any issues with the extraManifests parameters. Validation errors if the Lifecycle Agent detects any issues with the oadpContent parameters. Validation warnings do not block the Upgrade stage but you must decide if it is safe to proceed with the upgrade. These warnings, for example missing CRDs, namespaces, or dry run failures, update the status.conditions for the Prep stage and annotation fields in the ImageBasedUpgrade CR with details about the warning. Example validation warning # ... metadata: annotations: extra-manifest.lca.openshift.io/validation-warning: '...' # ... However, validation errors, such as adding MachineConfig or Operator manifests to extra manifests, cause the Prep stage to fail and block the Upgrade stage. When the validations pass, the cluster creates a new ostree stateroot, which involves pulling and unpacking the seed image, and running host-level commands. Finally, all the required images are precached on the target cluster. 
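The Prep stage can take several minutes because the seed image is pulled and unpacked into the new stateroot and the required images are precached. If you want more detail than the CR conditions provide, you can follow the Lifecycle Agent logs while the stage runs. This is a sketch that reuses the deployment and container names shown in the must-gather command in the troubleshooting section: USD oc logs -f -n openshift-lifecycle-agent deployment/lifecycle-agent-controller-manager -c manager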
Verification Check the status of the ImageBasedUpgrade CR by running the following command: USD oc get ibu -o yaml Example output conditions: - lastTransitionTime: "2024-01-01T09:00:00Z" message: In progress observedGeneration: 13 reason: InProgress status: "False" type: Idle - lastTransitionTime: "2024-01-01T09:00:00Z" message: Prep completed observedGeneration: 13 reason: Completed status: "False" type: PrepInProgress - lastTransitionTime: "2024-01-01T09:00:00Z" message: Prep stage completed successfully observedGeneration: 13 reason: Completed status: "True" type: PrepCompleted observedGeneration: 13 validNextStages: - Idle - Upgrade Additional resources Creating ConfigMap objects for the image-based upgrade with Lifecycle Agent Configuring image registry repository mirroring 15.3.2. Moving to the Upgrade stage of the image-based upgrade with Lifecycle Agent After you generate the seed image and complete the Prep stage, you can upgrade the target cluster. During the upgrade process, the OADP Operator creates a backup of the artifacts specified in the OADP custom resources (CRs), then the Lifecycle Agent upgrades the cluster. If the upgrade fails or stops, an automatic rollback is initiated. If you have an issue after the upgrade, you can initiate a manual rollback. For more information about manual rollback, see "Moving to the Rollback stage of the image-based upgrade with Lifecycle Agent". Prerequisites You have completed the Prep stage. Procedure To move to the Upgrade stage, change the value of the stage field to Upgrade in the ImageBasedUpgrade CR by running the following command: USD oc patch imagebasedupgrades.lca.openshift.io upgrade -p='{"spec": {"stage": "Upgrade"}}' --type=merge Check the status of the ImageBasedUpgrade CR by running the following command: USD oc get ibu -o yaml Example output status: conditions: - lastTransitionTime: "2024-01-01T09:00:00Z" message: In progress observedGeneration: 5 reason: InProgress status: "False" type: Idle - lastTransitionTime: "2024-01-01T09:00:00Z" message: Prep completed observedGeneration: 5 reason: Completed status: "False" type: PrepInProgress - lastTransitionTime: "2024-01-01T09:00:00Z" message: Prep completed successfully observedGeneration: 5 reason: Completed status: "True" type: PrepCompleted - lastTransitionTime: "2024-01-01T09:00:00Z" message: |- Waiting for system to stabilize: one or more health checks failed - one or more ClusterOperators not yet ready: authentication - one or more MachineConfigPools not yet ready: master - one or more ClusterServiceVersions not yet ready: sriov-fec.v2.8.0 observedGeneration: 1 reason: InProgress status: "True" type: UpgradeInProgress observedGeneration: 1 rollbackAvailabilityExpiration: "2024-05-19T14:01:52Z" validNextStages: - Rollback The OADP Operator creates a backup of the data specified in the OADP Backup and Restore CRs and the target cluster reboots. Monitor the status of the CR by running the following command: USD oc get ibu -o yaml If you are satisfied with the upgrade, finalize the changes by patching the value of the stage field to Idle in the ImageBasedUpgrade CR by running the following command: USD oc patch imagebasedupgrades.lca.openshift.io upgrade -p='{"spec": {"stage": "Idle"}}' --type=merge Important You cannot roll back the changes once you move to the Idle stage after an upgrade. The Lifecycle Agent deletes all resources created during the upgrade process. You can remove the OADP Operator and its configuration files after a successful upgrade. 
For more information, see "Deleting Operators from a cluster". Verification Check the status of the ImageBasedUpgrade CR by running the following command: USD oc get ibu -o yaml Example output status: conditions: - lastTransitionTime: "2024-01-01T09:00:00Z" message: In progress observedGeneration: 5 reason: InProgress status: "False" type: Idle - lastTransitionTime: "2024-01-01T09:00:00Z" message: Prep completed observedGeneration: 5 reason: Completed status: "False" type: PrepInProgress - lastTransitionTime: "2024-01-01T09:00:00Z" message: Prep completed successfully observedGeneration: 5 reason: Completed status: "True" type: PrepCompleted - lastTransitionTime: "2024-01-01T09:00:00Z" message: Upgrade completed observedGeneration: 1 reason: Completed status: "False" type: UpgradeInProgress - lastTransitionTime: "2024-01-01T09:00:00Z" message: Upgrade completed observedGeneration: 1 reason: Completed status: "True" type: UpgradeCompleted observedGeneration: 1 rollbackAvailabilityExpiration: "2024-01-01T09:00:00Z" validNextStages: - Idle - Rollback Check the status of the cluster restoration by running the following command: USD oc get restores -n openshift-adp -o custom-columns=NAME:.metadata.name,Status:.status.phase,Reason:.status.failureReason Example output NAME Status Reason acm-klusterlet Completed <none> 1 apache-app Completed <none> localvolume Completed <none> 1 The acm-klusterlet is specific to RHACM environments only. Additional resources Moving to the Rollback stage of the image-based upgrade with Lifecycle Agent Deleting Operators from a cluster 15.3.3. Moving to the Rollback stage of the image-based upgrade with Lifecycle Agent An automatic rollback is initiated if the upgrade does not complete within the time frame specified in the initMonitorTimeoutSeconds field after rebooting. Example ImageBasedUpgrade CR apiVersion: lca.openshift.io/v1 kind: ImageBasedUpgrade metadata: name: upgrade spec: stage: Idle seedImageRef: version: 4.15.2 image: <seed_container_image> autoRollbackOnFailure: {} # initMonitorTimeoutSeconds: 1800 1 # ... 1 Optional: The time frame in seconds to roll back if the upgrade does not complete within that time frame after the first reboot. If not defined or set to 0 , the default value of 1800 seconds (30 minutes) is used. You can manually roll back the changes if you encounter unresolvable issues after an upgrade. Prerequisites You have logged into the hub cluster as a user with cluster-admin privileges. You ensured that the control plane certificates on the original stateroot are valid. If the certificates expired, see "Recovering from expired control plane certificates". Procedure To move to the rollback stage, patch the value of the stage field to Rollback in the ImageBasedUpgrade CR by running the following command: USD oc patch imagebasedupgrades.lca.openshift.io upgrade -p='{"spec": {"stage": "Rollback"}}' --type=merge The Lifecycle Agent reboots the cluster with the previously installed version of OpenShift Container Platform and restores the applications. If you are satisfied with the changes, finalize the rollback by patching the value of the stage field to Idle in the ImageBasedUpgrade CR by running the following command: USD oc patch imagebasedupgrades.lca.openshift.io upgrade -p='{"spec": {"stage": "Idle"}}' --type=merge -n openshift-lifecycle-agent Warning If you move to the Idle stage after a rollback, the Lifecycle Agent cleans up resources that can be used to troubleshoot a failed upgrade. 
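Before finalizing the rollback, you can confirm its progress from the same status conditions used for the other stages by running USD oc get ibu -o yaml ; a completed rollback is reported with a rollback completion condition analogous to the PrepCompleted and UpgradeCompleted conditions shown earlier. The exact condition name can vary by Lifecycle Agent version, so treat it as an assumption and check the output of your CR.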
Additional resources Recovering from expired control plane certificates 15.3.4. Troubleshooting image-based upgrades with Lifecycle Agent Perform troubleshooting steps on the managed clusters that are affected by an issue. Important If you are using the ImageBasedGroupUpgrade CR to upgrade your clusters, ensure that the lcm.openshift.io/ibgu-<stage>-completed or lcm.openshift.io/ibgu-<stage>-failed cluster labels are updated properly after performing troubleshooting or recovery steps on the managed clusters. This ensures that the TALM continues to manage the image-based upgrade for the cluster. 15.3.4.1. Collecting logs You can use the oc adm must-gather CLI to collect information for debugging and troubleshooting. Procedure Collect data about the Operators by running the following command: USD oc adm must-gather \ --dest-dir=must-gather/tmp \ --image=USD(oc -n openshift-lifecycle-agent get deployment.apps/lifecycle-agent-controller-manager -o jsonpath='{.spec.template.spec.containers[?(@.name == "manager")].image}') \ --image=quay.io/konveyor/oadp-must-gather:latest \ 1 --image=quay.io/openshift/origin-must-gather:latest 2 1 Optional: Add this option if you need to gather more information from the OADP Operator. 2 Optional: Add this option if you need to gather more information from the SR-IOV Operator. 15.3.4.2. AbortFailed or FinalizeFailed error Issue During the finalize stage or when you stop the process at the Prep stage, Lifecycle Agent cleans up the following resources: Stateroot that is no longer required Precaching resources OADP CRs ImageBasedUpgrade CR If the Lifecycle Agent fails to perform the above steps, it transitions to the AbortFailed or FinalizeFailed states. The condition message and log show which steps failed. Example error message message: failed to delete all the backup CRs. Perform cleanup manually then add 'lca.openshift.io/manual-cleanup-done' annotation to ibu CR to transition back to Idle observedGeneration: 5 reason: AbortFailed status: "False" type: Idle Resolution Inspect the logs to determine why the failure occurred. To prompt Lifecycle Agent to retry the cleanup, add the lca.openshift.io/manual-cleanup-done annotation to the ImageBasedUpgrade CR. After observing this annotation, Lifecycle Agent retries the cleanup and, if it is successful, the ImageBasedUpgrade stage transitions to Idle . If the cleanup fails again, you can manually clean up the resources. 15.3.4.2.1. Cleaning up stateroot manually Issue Stopping at the Prep stage, Lifecycle Agent cleans up the new stateroot. When finalizing after a successful upgrade or a rollback, Lifecycle Agent cleans up the old stateroot. If this step fails, it is recommended that you inspect the logs to determine why the failure occurred. Resolution Check if there are any existing deployments in the stateroot by running the following command: USD ostree admin status If there are any, clean up the existing deployment by running the following command: USD ostree admin undeploy <index_of_deployment> After cleaning up all the deployments of the stateroot, wipe the stateroot directory by running the following commands: Warning Ensure that the booted deployment is not in this stateroot. USD stateroot="<stateroot_to_delete>" USD unshare -m /bin/sh -c "mount -o remount,rw /sysroot && rm -rf /sysroot/ostree/deploy/USD{stateroot}" 15.3.4.2.2. Cleaning up OADP resources manually Issue Automatic cleanup of OADP resources can fail due to connection issues between Lifecycle Agent and the S3 backend.
By restoring the connection and adding the lca.openshift.io/manual-cleanup-done annotation, the Lifecycle Agent can successfully clean up backup resources. Resolution Check the backend connectivity by running the following command: USD oc get backupstoragelocations.velero.io -n openshift-adp Example output NAME PHASE LAST VALIDATED AGE DEFAULT dataprotectionapplication-1 Available 33s 8d true Remove all backup resources and then add the lca.openshift.io/manual-cleanup-done annotation to the ImageBasedUpgrade CR. 15.3.4.3. LVM Storage volume contents not restored When LVM Storage is used to provide dynamic persistent volume storage, LVM Storage might not restore the persistent volume contents if it is configured incorrectly. 15.3.4.3.1. Missing LVM Storage-related fields in Backup CR Issue Your Backup CRs might be missing fields that are needed to restore your persistent volumes. You can check for events in your application pod to determine if you have this issue by running the following command: USD oc describe pod <your_app_name> Example output showing missing LVM Storage-related fields in Backup CR Events: Type Reason Age From Message ---- ------ ---- ---- ------- Warning FailedScheduling 58s (x2 over 66s) default-scheduler 0/1 nodes are available: pod has unbound immediate PersistentVolumeClaims. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.. Normal Scheduled 56s default-scheduler Successfully assigned default/db-1234 to sno1.example.lab Warning FailedMount 24s (x7 over 55s) kubelet MountVolume.SetUp failed for volume "pvc-1234" : rpc error: code = Unknown desc = VolumeID is not found Resolution You must include logicalvolumes.topolvm.io in the application Backup CR. Without this resource, the application restores its persistent volume claims and persistent volume manifests correctly, however, the logicalvolume associated with this persistent volume is not restored properly after pivot. Example Backup CR apiVersion: velero.io/v1 kind: Backup metadata: labels: velero.io/storage-location: default name: small-app namespace: openshift-adp spec: includedNamespaces: - test includedNamespaceScopedResources: - secrets - persistentvolumeclaims - deployments - statefulsets includedClusterScopedResources: 1 - persistentVolumes - volumesnapshotcontents - logicalvolumes.topolvm.io 1 To restore the persistent volumes for your application, you must configure this section as shown. 15.3.4.3.2. Missing LVM Storage-related fields in Restore CR Issue The expected resources for the applications are restored but the persistent volume contents are not preserved after upgrading.
List the persistent volumes for your applications by running the following command before pivot: USD oc get pv,pvc,logicalvolumes.topolvm.io -A Example output before pivot NAME CAPACITY ACCESS MODES RECLAIM POLICY STATUS CLAIM STORAGECLASS REASON AGE persistentvolume/pvc-1234 1Gi RWO Retain Bound default/pvc-db lvms-vg1 4h45m NAMESPACE NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE default persistentvolumeclaim/pvc-db Bound pvc-1234 1Gi RWO lvms-vg1 4h45m NAMESPACE NAME AGE logicalvolume.topolvm.io/pvc-1234 4h45m List the persistent volumes for your applications by running the following command after pivot: USD oc get pv,pvc,logicalvolumes.topolvm.io -A Example output after pivot NAME CAPACITY ACCESS MODES RECLAIM POLICY STATUS CLAIM STORAGECLASS REASON AGE persistentvolume/pvc-1234 1Gi RWO Delete Bound default/pvc-db lvms-vg1 19s NAMESPACE NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE default persistentvolumeclaim/pvc-db Bound pvc-1234 1Gi RWO lvms-vg1 19s NAMESPACE NAME AGE logicalvolume.topolvm.io/pvc-1234 18s Resolution The reason for this issue is that the logicalvolume status is not preserved in the Restore CR. This status is important because it is required for Velero to reference the volumes that must be preserved after pivoting. You must include the following fields in the application Restore CR: Example Restore CR apiVersion: velero.io/v1 kind: Restore metadata: name: sample-vote-app namespace: openshift-adp labels: velero.io/storage-location: default annotations: lca.openshift.io/apply-wave: "3" spec: backupName: sample-vote-app restorePVs: true 1 restoreStatus: 2 includedResources: - logicalvolumes 1 To preserve the persistent volumes for your application, you must set restorePVs to true . 2 To preserve the persistent volumes for your application, you must configure this section as shown. 15.3.4.4. Debugging failed Backup and Restore CRs Issue The backup or restoration of artifacts failed. Resolution You can debug Backup and Restore CRs and retrieve logs with the Velero CLI tool. The Velero CLI tool provides more detailed information than the OpenShift CLI tool. Describe the Backup CR that contains errors by running the following command: USD oc exec -n openshift-adp velero-7c87d58c7b-sw6fc -c velero -- ./velero describe backup -n openshift-adp backup-acm-klusterlet --details Describe the Restore CR that contains errors by running the following command: USD oc exec -n openshift-adp velero-7c87d58c7b-sw6fc -c velero -- ./velero describe restore -n openshift-adp restore-acm-klusterlet --details Download the backed up resources to a local directory by running the following command: USD oc exec -n openshift-adp velero-7c87d58c7b-sw6fc -c velero -- ./velero backup download -n openshift-adp backup-acm-klusterlet -o ~/backup-acm-klusterlet.tar.gz 15.4. Performing an image-based upgrade for single-node OpenShift clusters using GitOps ZTP You can use a single resource on the hub cluster, the ImageBasedGroupUpgrade custom resource (CR), to manage an image-based upgrade on a selected group of managed clusters through all stages. Topology Aware Lifecycle Manager (TALM) reconciles the ImageBasedGroupUpgrade CR and creates the underlying resources to complete the defined stage transitions, either in a manually controlled or a fully automated upgrade flow. For more information about the image-based upgrade, see "Understanding the image-based upgrade for single-node OpenShift clusters".
Additional resources Understanding the image-based upgrade for single-node OpenShift clusters 15.4.1. Managing the image-based upgrade at scale using the ImageBasedGroupUpgrade CR on the hub The ImageBasedGroupUpgrade CR combines the ImageBasedUpgrade and ClusterGroupUpgrade APIs. For example, you can define the cluster selection and rollout strategy with the ImageBasedGroupUpgrade API in the same way as the ClusterGroupUpgrade API. The stage transitions are different from the ImageBasedUpgrade API. The ImageBasedGroupUpgrade API allows you to combine several stage transitions, also called actions, into one step so that they share one rollout strategy. Example ImageBasedGroupUpgrade.yaml apiVersion: lcm.openshift.io/v1alpha1 kind: ImageBasedGroupUpgrade metadata: name: <filename> namespace: default spec: clusterLabelSelectors: 1 - matchExpressions: - key: name operator: In values: - spoke1 - spoke4 - spoke6 ibuSpec: seedImageRef: 2 image: quay.io/seed/image:4.17.0-rc.1 version: 4.17.0-rc.1 pullSecretRef: name: "<seed_pull_secret>" extraManifests: 3 - name: example-extra-manifests namespace: openshift-lifecycle-agent oadpContent: 4 - name: oadp-cm namespace: openshift-adp plan: 5 - actions: ["Prep", "Upgrade", "FinalizeUpgrade"] rolloutStrategy: maxConcurrency: 200 6 timeout: 2400 7 1 Clusters to upgrade. 2 Target platform version, the seed image to be used, and the secret required to access the image. Note If you add the seed image pull secret in the hub cluster, in the same namespace as the ImageBasedGroupUpgrade resource, the secret is added to the manifest list for the Prep stage. The secret is recreated in each spoke cluster in the openshift-lifecycle-agent namespace. 3 Optional: Applies additional manifests, which are not in the seed image, to the target cluster. Also applies ConfigMap objects for custom catalog sources. 4 ConfigMap resources that contain the OADP Backup and Restore CRs. 5 Upgrade plan details. 6 Number of clusters to update in a batch. 7 Timeout limit to complete the action in minutes. 15.4.1.1. Supported action combinations Actions are the list of stage transitions that TALM completes in the steps of an upgrade plan for the selected group of clusters. Each action entry in the ImageBasedGroupUpgrade CR is a separate step and a step contains one or several actions that share the same rollout strategy. You can achieve more control over the rollout strategy for each action by separating actions into steps. These actions can be combined differently in your upgrade plan and you can add subsequent steps later. Wait until the steps either complete or fail before adding a step to your plan. The first action of an added step for clusters that failed a step must be either Abort or Rollback . Important You cannot remove actions or steps from an ongoing plan. The following table shows example plans for different levels of control over the rollout strategy: Table 15.5.
Example upgrade plans Example plan Description plan: - actions: ["Prep", "Upgrade", "FinalizeUpgrade"] rolloutStrategy: maxConcurrency: 200 timeout: 60 All actions share the same strategy plan: - actions: ["Prep", "Upgrade"] rolloutStrategy: maxConcurrency: 200 timeout: 60 - actions: ["FinalizeUpgrade"] rolloutStrategy: maxConcurrency: 500 timeout: 10 Some actions share the same strategy plan: - actions: ["Prep"] rolloutStrategy: maxConcurrency: 200 timeout: 60 - actions: ["Upgrade"] rolloutStrategy: maxConcurrency: 200 timeout: 20 - actions: ["FinalizeUpgrade"] rolloutStrategy: maxConcurrency: 500 timeout: 10 All actions have different strategies Important Clusters that fail one of the actions will skip the remaining actions in the same step. The ImageBasedGroupUpgrade API accepts the following actions: Prep Start preparing the upgrade resources by moving to the Prep stage. Upgrade Start the upgrade by moving to the Upgrade stage. FinalizeUpgrade Finalize the upgrade on selected clusters that completed the Upgrade action by moving to the Idle stage. Rollback Start a rollback only on successfully upgraded clusters by moving to the Rollback stage. FinalizeRollback Finalize the rollback by moving to the Idle stage. AbortOnFailure Cancel the upgrade on selected clusters that failed the Prep or Upgrade actions by moving to the Idle stage. Abort Cancel an ongoing upgrade only on clusters that are not yet upgraded by moving to the Idle stage. The following action combinations are supported. A pair of brackets signifies one step in the plan section: ["Prep"] , ["Abort"] ["Prep", "Upgrade", "FinalizeUpgrade"] ["Prep"] , ["AbortOnFailure"] , ["Upgrade"] , ["AbortOnFailure"] , ["FinalizeUpgrade"] ["Rollback", "FinalizeRollback"] Use one of the following combinations when you need to resume or cancel an ongoing upgrade from a completely new ImageBasedGroupUpgrade CR: ["Upgrade","FinalizeUpgrade"] ["FinalizeUpgrade"] ["FinalizeRollback"] ["Abort"] ["AbortOnFailure"] 15.4.1.2. Labeling for cluster selection Use the spec.clusterLabelSelectors field for initial cluster selection. In addition, TALM labels the managed clusters according to the results of their last stage transition. When a stage completes or fails, TALM marks the relevant clusters with the following labels: lcm.openshift.io/ibgu-<stage>-completed lcm.openshift.io/ibgu-<stage>-failed Use these cluster labels to cancel or roll back an upgrade on a group of clusters after troubleshooting issues that you might encounter. Important If you are using the ImageBasedGroupUpgrade CR to upgrade your clusters, ensure that the lcm.openshift.io/ibgu-<stage>-completed or lcm.openshift.io/ibgu-<stage>-failed cluster labels are updated properly after performing troubleshooting or recovery steps on the managed clusters. This ensures that the TALM continues to manage the image-based upgrade for the cluster. For example, if you want to cancel the upgrade for all managed clusters except for clusters that successfully completed the upgrade, you can add an Abort action to your plan. The Abort action moves back the ImageBasedUpgrade CR to the Idle stage, which cancels the upgrade on clusters that are not yet upgraded. Adding a separate Abort action ensures that the TALM does not perform the Abort action on clusters that have the lcm.openshift.io/ibgu-upgrade-completed label. The cluster labels are removed after successfully canceling or finalizing the upgrade. 15.4.1.3. 
Status monitoring The ImageBasedGroupUpgrade CR provides a better monitoring experience, with comprehensive status reporting for all clusters aggregated in one place. You can monitor the following actions: status.clusters.completedActions Shows all completed actions defined in the plan section. status.clusters.currentAction Shows all actions that are currently in progress. status.clusters.failedActions Shows all failed actions along with a detailed error message. 15.4.2. Performing an image-based upgrade on managed clusters at scale in several steps For use cases when you need better control of when the upgrade interrupts your service, you can upgrade a set of your managed clusters by using the ImageBasedGroupUpgrade CR and adding actions to the plan after each step is complete. After evaluating the results of the steps, you can move to the next upgrade stage or troubleshoot any failed steps throughout the procedure. Important Only certain action combinations are supported and listed in Supported action combinations . Prerequisites You have logged in to the hub cluster as a user with cluster-admin privileges. You have created policies and ConfigMap objects for resources used in the image-based upgrade. You have installed the Lifecycle Agent and OADP Operators on all managed clusters through the hub cluster. Procedure Create a YAML file on the hub cluster that contains the ImageBasedGroupUpgrade CR: apiVersion: lcm.openshift.io/v1alpha1 kind: ImageBasedGroupUpgrade metadata: name: <filename> namespace: default spec: clusterLabelSelectors: 1 - matchExpressions: - key: name operator: In values: - spoke1 - spoke4 - spoke6 ibuSpec: seedImageRef: 2 image: quay.io/seed/image:4.16.0-rc.1 version: 4.16.0-rc.1 pullSecretRef: name: "<seed_pull_secret>" extraManifests: 3 - name: example-extra-manifests namespace: openshift-lifecycle-agent oadpContent: 4 - name: oadp-cm namespace: openshift-adp plan: 5 - actions: ["Prep"] rolloutStrategy: maxConcurrency: 2 timeout: 2400 1 Clusters to upgrade. 2 Target platform version, the seed image to be used, and the secret required to access the image. Note If you add the seed image pull secret in the hub cluster, in the same namespace as the ImageBasedGroupUpgrade resource, the secret is added to the manifest list for the Prep stage. The secret is recreated in each spoke cluster in the openshift-lifecycle-agent namespace. 3 Optional: Applies additional manifests, which are not in the seed image, to the target cluster. Also applies ConfigMap objects for custom catalog sources. 4 List of ConfigMap resources that contain the OADP Backup and Restore CRs. 5 Upgrade plan details. Apply the created file by running the following command on the hub cluster: USD oc apply -f <filename>.yaml Monitor the status updates by running the following command on the hub cluster: USD oc get ibgu -o yaml Example output # ... status: clusters: - completedActions: - action: Prep name: spoke1 - completedActions: - action: Prep name: spoke4 - failedActions: - action: Prep name: spoke6 # ... The output of an example plan starts with the Prep stage only, and you add actions to the plan based on the results of each step. TALM adds a label to the clusters to mark if the upgrade succeeded or failed. For example, the lcm.openshift.io/ibgu-prep-failed is applied to clusters that failed the Prep stage. After investigating the failure, you can add the AbortOnFailure step to your upgrade plan. It moves the clusters labeled with lcm.openshift.io/ibgu-<action>-failed back to the Idle stage.
Any resources that are related to the upgrade on the selected clusters are deleted. Optional: Add the AbortOnFailure action to your existing ImageBasedGroupUpgrade CR by running the following command: USD oc patch ibgu <filename> --type=json -p \ '[{"op": "add", "path": "/spec/plan/-", "value": {"actions": ["AbortOnFailure"], "rolloutStrategy": {"maxConcurrency": 5, "timeout": 10}}}]' Continue monitoring the status updates by running the following command: USD oc get ibgu -o yaml Add the action to your existing ImageBasedGroupUpgrade CR by running the following command: USD oc patch ibgu <filename> --type=json -p \ '[{"op": "add", "path": "/spec/plan/-", "value": {"actions": ["Upgrade"], "rolloutStrategy": {"maxConcurrency": 2, "timeout": 30}}}]' Optional: Add the AbortOnFailure action to your existing ImageBasedGroupUpgrade CR by running the following command: USD oc patch ibgu <filename> --type=json -p \ '[{"op": "add", "path": "/spec/plan/-", "value": {"actions": ["AbortOnFailure"], "rolloutStrategy": {"maxConcurrency": 5, "timeout": 10}}}]' Continue monitoring the status updates by running the following command: USD oc get ibgu -o yaml Add the action to your existing ImageBasedGroupUpgrade CR by running the following command: USD oc patch ibgu <filename> --type=json -p \ '[{"op": "add", "path": "/spec/plan/-", "value": {"actions": ["FinalizeUpgrade"], "rolloutStrategy": {"maxConcurrency": 10, "timeout": 3}}}]' Verification Monitor the status updates by running the following command: USD oc get ibgu -o yaml Example output # ... status: clusters: - completedActions: - action: Prep - action: AbortOnFailure failedActions: - action: Upgrade name: spoke1 - completedActions: - action: Prep - action: Upgrade - action: FinalizeUpgrade name: spoke4 - completedActions: - action: AbortOnFailure failedActions: - action: Prep name: spoke6 # ... Additional resources Configuring a shared container partition between ostree stateroots when using GitOps ZTP Creating ConfigMap objects for the image-based upgrade with Lifecycle Agent using GitOps ZTP About backup and snapshot locations and their secrets Creating a Backup CR Creating a Restore CR Supported action combinations 15.4.3. Performing an image-based upgrade on managed clusters at scale in one step For use cases when service interruption is not a concern, you can upgrade a set of your managed clusters by using the ImageBasedGroupUpgrade CR with several actions combined in one step with one rollout strategy. With one rollout strategy, the upgrade time can be reduced but you can only troubleshoot failed clusters after the upgrade plan is complete. Prerequisites You have logged in to the hub cluster as a user with cluster-admin privileges. You have created policies and ConfigMap objects for resources used in the image-based upgrade. You have installed the Lifecycle Agent and OADP Operators on all managed clusters through the hub cluster. 
Procedure Create a YAML file on the hub cluster that contains the ImageBasedGroupUpgrade CR: apiVersion: lcm.openshift.io/v1alpha1 kind: ImageBasedGroupUpgrade metadata: name: <filename> namespace: default spec: clusterLabelSelectors: 1 - matchExpressions: - key: name operator: In values: - spoke1 - spoke4 - spoke6 ibuSpec: seedImageRef: 2 image: quay.io/seed/image:4.17.0-rc.1 version: 4.17.0-rc.1 pullSecretRef: name: "<seed_pull_secret>" extraManifests: 3 - name: example-extra-manifests namespace: openshift-lifecycle-agent oadpContent: 4 - name: oadp-cm namespace: openshift-adp plan: 5 - actions: ["Prep", "Upgrade", "FinalizeUpgrade"] rolloutStrategy: maxConcurrency: 200 6 timeout: 2400 7 1 Clusters to upgrade. 2 Target platform version, the seed image to be used, and the secret required to access the image. Note If you add the seed image pull secret in the hub cluster, in the same namespace as the ImageBasedGroupUpgrade resource, the secret is added to the manifest list for the Prep stage. The secret is recreated in each spoke cluster in the openshift-lifecycle-agent namespace. 3 Optional: Applies additional manifests, which are not in the seed image, to the target cluster. Also applies ConfigMap objects for custom catalog sources. 4 ConfigMap resources that contain the OADP Backup and Restore CRs. 5 Upgrade plan details. 6 Number of clusters to update in a batch. 7 Timeout limit to complete the action in minutes. Apply the created file by running the following command on the hub cluster: USD oc apply -f <filename>.yaml Verification Monitor the status updates by running the following command: USD oc get ibgu -o yaml Example output # ... status: clusters: - completedActions: - action: Prep failedActions: - action: Upgrade name: spoke1 - completedActions: - action: Prep - action: Upgrade - action: FinalizeUpgrade name: spoke4 - failedActions: - action: Prep name: spoke6 # ... 15.4.4. Canceling an image-based upgrade on managed clusters at scale You can cancel the upgrade on a set of managed clusters that completed the Prep stage. Important Only certain action combinations are supported and listed in Supported action combinations . Prerequisites You have logged in to the hub cluster as a user with cluster-admin privileges. Procedure Create a separate YAML file on the hub cluster that contains the ImageBasedGroupUpgrade CR: apiVersion: lcm.openshift.io/v1alpha1 kind: ImageBasedGroupUpgrade metadata: name: <filename> namespace: default spec: clusterLabelSelectors: - matchExpressions: - key: name operator: In values: - spoke4 ibuSpec: seedImageRef: image: quay.io/seed/image:4.16.0-rc.1 version: 4.16.0-rc.1 pullSecretRef: name: "<seed_pull_secret>" extraManifests: - name: example-extra-manifests namespace: openshift-lifecycle-agent oadpContent: - name: oadp-cm namespace: openshift-adp plan: - actions: ["Abort"] rolloutStrategy: maxConcurrency: 5 timeout: 10 All managed clusters that completed the Prep stage are moved back to the Idle stage. Apply the created file by running the following command on the hub cluster: USD oc apply -f <filename>.yaml Verification Monitor the status updates by running the following command: USD oc get ibgu -o yaml Example output # ... status: clusters: - completedActions: - action: Prep currentActions: - action: Abort name: spoke4 # ... Additional resources Supported action combinations 15.4.5. 
Rolling back an image-based upgrade on managed clusters at scale Roll back the changes on a set of managed clusters if you encounter unresolvable issues after a successful upgrade. You need to create a separate ImageBasedGroupUpgrade CR and define the set of managed clusters that you want to roll back. Important Only certain action combinations are supported and listed in Supported action combinations . Prerequisites You have logged in to the hub cluster as a user with cluster-admin privileges. Procedure Create a separate YAML file on the hub cluster that contains the ImageBasedGroupUpgrade CR: apiVersion: lcm.openshift.io/v1alpha1 kind: ImageBasedGroupUpgrade metadata: name: <filename> namespace: default spec: clusterLabelSelectors: - matchExpressions: - key: name operator: In values: - spoke4 ibuSpec: seedImageRef: image: quay.io/seed/image:4.17.0-rc.1 version: 4.17.0-rc.1 pullSecretRef: name: "<seed_pull_secret>" extraManifests: - name: example-extra-manifests namespace: openshift-lifecycle-agent oadpContent: - name: oadp-cm namespace: openshift-adp plan: - actions: ["Rollback", "FinalizeRollback"] rolloutStrategy: maxConcurrency: 200 timeout: 2400 Apply the created file by running the following command on the hub cluster: USD oc apply -f <filename>.yaml All managed clusters that match the defined labels are moved back to the Rollback and then the Idle stages to finalize the rollback. Verification Monitor the status updates by running the following command: USD oc get ibgu -o yaml Example output # ... status: clusters: - completedActions: - action: Rollback - action: FinalizeRollback name: spoke4 # ... Additional resources Supported action combinations Recovering from expired control plane certificates 15.4.6. Troubleshooting image-based upgrades with Lifecycle Agent Perform troubleshooting steps on the managed clusters that are affected by an issue. Important If you are using the ImageBasedGroupUpgrade CR to upgrade your clusters, ensure that the lcm.openshift.io/ibgu-<stage>-completed or lcm.openshift.io/ibgu-<stage>-failed cluster labels are updated properly after performing troubleshooting or recovery steps on the managed clusters. This ensures that the TALM continues to manage the image-based upgrade for the cluster. 15.4.6.1. Collecting logs You can use the oc adm must-gather CLI to collect information for debugging and troubleshooting. Procedure Collect data about the Operators by running the following command: USD oc adm must-gather \ --dest-dir=must-gather/tmp \ --image=USD(oc -n openshift-lifecycle-agent get deployment.apps/lifecycle-agent-controller-manager -o jsonpath='{.spec.template.spec.containers[?(@.name == "manager")].image}') \ --image=quay.io/konveyor/oadp-must-gather:latest \ 1 --image=quay.io/openshift/origin-must-gather:latest 2 1 Optional: Add this option if you need to gather more information from the OADP Operator. 2 Optional: Add this option if you need to gather more information from the SR-IOV Operator. 15.4.6.2. AbortFailed or FinalizeFailed error Issue During the finalize stage or when you stop the process at the Prep stage, Lifecycle Agent cleans up the following resources: Stateroot that is no longer required Precaching resources OADP CRs ImageBasedUpgrade CR If the Lifecycle Agent fails to perform the above steps, it transitions to the AbortFailed or FinalizeFailed states. The condition message and log show which steps failed. Example error message message: failed to delete all the backup CRs.
Perform cleanup manually then add 'lca.openshift.io/manual-cleanup-done' annotation to ibu CR to transition back to Idle observedGeneration: 5 reason: AbortFailed status: "False" type: Idle Resolution Inspect the logs to determine why the failure occurred. To prompt Lifecycle Agent to retry the cleanup, add the lca.openshift.io/manual-cleanup-done annotation to the ImageBasedUpgrade CR. After observing this annotation, Lifecycle Agent retries the cleanup and, if it is successful, the ImageBasedUpgrade stage transitions to Idle . If the cleanup fails again, you can manually clean up the resources. 15.4.6.2.1. Cleaning up stateroot manually Issue Stopping at the Prep stage, Lifecycle Agent cleans up the new stateroot. When finalizing after a successful upgrade or a rollback, Lifecycle Agent cleans up the old stateroot. If this step fails, it is recommended that you inspect the logs to determine why the failure occurred. Resolution Check if there are any existing deployments in the stateroot by running the following command: USD ostree admin status If there are any, clean up the existing deployment by running the following command: USD ostree admin undeploy <index_of_deployment> After cleaning up all the deployments of the stateroot, wipe the stateroot directory by running the following commands: Warning Ensure that the booted deployment is not in this stateroot. USD stateroot="<stateroot_to_delete>" USD unshare -m /bin/sh -c "mount -o remount,rw /sysroot && rm -rf /sysroot/ostree/deploy/USD{stateroot}" 15.4.6.2.2. Cleaning up OADP resources manually Issue Automatic cleanup of OADP resources can fail due to connection issues between Lifecycle Agent and the S3 backend. By restoring the connection and adding the lca.openshift.io/manual-cleanup-done annotation, the Lifecycle Agent can successfully clean up backup resources. Resolution Check the backend connectivity by running the following command: USD oc get backupstoragelocations.velero.io -n openshift-adp Example output NAME PHASE LAST VALIDATED AGE DEFAULT dataprotectionapplication-1 Available 33s 8d true Remove all backup resources and then add the lca.openshift.io/manual-cleanup-done annotation to the ImageBasedUpgrade CR. 15.4.6.3. LVM Storage volume contents not restored When LVM Storage is used to provide dynamic persistent volume storage, LVM Storage might not restore the persistent volume contents if it is configured incorrectly. 15.4.6.3.1. Missing LVM Storage-related fields in Backup CR Issue Your Backup CRs might be missing fields that are needed to restore your persistent volumes. You can check for events in your application pod to determine if you have this issue by running the following command: USD oc describe pod <your_app_name> Example output showing missing LVM Storage-related fields in Backup CR Events: Type Reason Age From Message ---- ------ ---- ---- ------- Warning FailedScheduling 58s (x2 over 66s) default-scheduler 0/1 nodes are available: pod has unbound immediate PersistentVolumeClaims. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.. Normal Scheduled 56s default-scheduler Successfully assigned default/db-1234 to sno1.example.lab Warning FailedMount 24s (x7 over 55s) kubelet MountVolume.SetUp failed for volume "pvc-1234" : rpc error: code = Unknown desc = VolumeID is not found Resolution You must include logicalvolumes.topolvm.io in the application Backup CR.
Without this resource, the application restores its persistent volume claims and persistent volume manifests correctly, however, the logicalvolume associated with this persistent volume is not restored properly after pivot. Example Backup CR apiVersion: velero.io/v1 kind: Backup metadata: labels: velero.io/storage-location: default name: small-app namespace: openshift-adp spec: includedNamespaces: - test includedNamespaceScopedResources: - secrets - persistentvolumeclaims - deployments - statefulsets includedClusterScopedResources: 1 - persistentVolumes - volumesnapshotcontents - logicalvolumes.topolvm.io 1 To restore the persistent volumes for your application, you must configure this section as shown. 15.4.6.3.2. Missing LVM Storage-related fields in Restore CR Issue The expected resources for the applications are restored but the persistent volume contents are not preserved after upgrading. List the persistent volumes for you applications by running the following command before pivot: USD oc get pv,pvc,logicalvolumes.topolvm.io -A Example output before pivot NAME CAPACITY ACCESS MODES RECLAIM POLICY STATUS CLAIM STORAGECLASS REASON AGE persistentvolume/pvc-1234 1Gi RWO Retain Bound default/pvc-db lvms-vg1 4h45m NAMESPACE NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE default persistentvolumeclaim/pvc-db Bound pvc-1234 1Gi RWO lvms-vg1 4h45m NAMESPACE NAME AGE logicalvolume.topolvm.io/pvc-1234 4h45m List the persistent volumes for you applications by running the following command after pivot: USD oc get pv,pvc,logicalvolumes.topolvm.io -A Example output after pivot NAME CAPACITY ACCESS MODES RECLAIM POLICY STATUS CLAIM STORAGECLASS REASON AGE persistentvolume/pvc-1234 1Gi RWO Delete Bound default/pvc-db lvms-vg1 19s NAMESPACE NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE default persistentvolumeclaim/pvc-db Bound pvc-1234 1Gi RWO lvms-vg1 19s NAMESPACE NAME AGE logicalvolume.topolvm.io/pvc-1234 18s Resolution The reason for this issue is that the logicalvolume status is not preserved in the Restore CR. This status is important because it is required for Velero to reference the volumes that must be preserved after pivoting. You must include the following fields in the application Restore CR: Example Restore CR apiVersion: velero.io/v1 kind: Restore metadata: name: sample-vote-app namespace: openshift-adp labels: velero.io/storage-location: default annotations: lca.openshift.io/apply-wave: "3" spec: backupName: sample-vote-app restorePVs: true 1 restoreStatus: 2 includedResources: - logicalvolumes 1 To preserve the persistent volumes for your application, you must set restorePVs to true . 2 To preserve the persistent volumes for your application, you must configure this section as shown. 15.4.6.4. Debugging failed Backup and Restore CRs Issue The backup or restoration of artifacts failed. Resolution You can debug Backup and Restore CRs and retrieve logs with the Velero CLI tool. The Velero CLI tool provides more detailed information than the OpenShift CLI tool. 
Describe the Backup CR that contains errors by running the following command: USD oc exec -n openshift-adp velero-7c87d58c7b-sw6fc -c velero -- ./velero describe backup -n openshift-adp backup-acm-klusterlet --details Describe the Restore CR that contains errors by running the following command: USD oc exec -n openshift-adp velero-7c87d58c7b-sw6fc -c velero -- ./velero describe restore -n openshift-adp restore-acm-klusterlet --details Download the backed up resources to a local directory by running the following command: USD oc exec -n openshift-adp velero-7c87d58c7b-sw6fc -c velero -- ./velero backup download -n openshift-adp backup-acm-klusterlet -o ~/backup-acm-klusterlet.tar.gz | [
"apiVersion: lca.openshift.io/v1 kind: SeedGenerator metadata: name: seedimage spec: seedImage: <seed_image>",
"apiVersion: lca.openshift.io/v1 kind: ImageBasedUpgrade metadata: name: upgrade spec: stage: Idle 1 seedImageRef: 2 version: <target_version> image: <seed_container_image> pullSecretRef: name: <seed_pull_secret> autoRollbackOnFailure: {} initMonitorTimeoutSeconds: 1800 3 extraManifests: 4 - name: example-extra-manifests namespace: openshift-lifecycle-agent oadpContent: 5 - name: oadp-cm-example namespace: openshift-adp",
"apiVersion: velero.io/v1 kind: Backup metadata: name: acm-klusterlet namespace: openshift-adp annotations: lca.openshift.io/apply-label: rbac.authorization.k8s.io/v1/clusterroles/klusterlet,apps/v1/deployments/open-cluster-management-agent/klusterlet 1 labels: velero.io/storage-location: default spec: includedNamespaces: - open-cluster-management-agent includedClusterScopedResources: - clusterroles includedNamespaceScopedResources: - deployments",
"apiVersion: lca.openshift.io/v1 kind: ImageBasedUpgrade metadata: annotations: lca.openshift.io/target-ocp-version-manifest-count: \"5\" name: upgrade",
"apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfig metadata: labels: machineconfiguration.openshift.io/role: master name: 98-var-lib-containers-partitioned spec: config: ignition: version: 3.2.0 storage: disks: - device: /dev/disk/by-path/pci-<root_disk> 1 partitions: - label: var-lib-containers startMiB: <start_of_partition> 2 sizeMiB: <partition_size> 3 filesystems: - device: /dev/disk/by-partlabel/var-lib-containers format: xfs mountOptions: - defaults - prjquota path: /var/lib/containers wipeFilesystem: true systemd: units: - contents: |- # Generated by Butane [Unit] Before=local-fs.target Requires=systemd-fsck@dev-disk-by\\x2dpartlabel-var\\x2dlib\\x2dcontainers.service After=systemd-fsck@dev-disk-by\\x2dpartlabel-var\\x2dlib\\x2dcontainers.service [Mount] Where=/var/lib/containers What=/dev/disk/by-partlabel/var-lib-containers Type=xfs Options=defaults,prjquota [Install] RequiredBy=local-fs.target enabled: true name: var-lib-containers.mount",
"variant: fcos version: 1.3.0 storage: disks: - device: /dev/disk/by-path/pci-<root_disk> 1 wipe_table: false partitions: - label: var-lib-containers start_mib: <start_of_partition> 2 size_mib: <partition_size> 3 filesystems: - path: /var/lib/containers device: /dev/disk/by-partlabel/var-lib-containers format: xfs wipe_filesystem: true with_mount_unit: true mount_options: - defaults - prjquota",
"butane storage.bu",
"{\"ignition\":{\"version\":\"3.2.0\"},\"storage\":{\"disks\":[{\"device\":\"/dev/disk/by-path/pci-0000:00:17.0-ata-1.0\",\"partitions\":[{\"label\":\"var-lib-containers\",\"sizeMiB\":0,\"startMiB\":250000}],\"wipeTable\":false}],\"filesystems\":[{\"device\":\"/dev/disk/by-partlabel/var-lib-containers\",\"format\":\"xfs\",\"mountOptions\":[\"defaults\",\"prjquota\"],\"path\":\"/var/lib/containers\",\"wipeFilesystem\":true}]},\"systemd\":{\"units\":[{\"contents\":\"# Generated by Butane\\n[Unit]\\nRequires=systemd-fsck@dev-disk-by\\\\x2dpartlabel-var\\\\x2dlib\\\\x2dcontainers.service\\nAfter=systemd-fsck@dev-disk-by\\\\x2dpartlabel-var\\\\x2dlib\\\\x2dcontainers.service\\n\\n[Mount]\\nWhere=/var/lib/containers\\nWhat=/dev/disk/by-partlabel/var-lib-containers\\nType=xfs\\nOptions=defaults,prjquota\\n\\n[Install]\\nRequiredBy=local-fs.target\",\"enabled\":true,\"name\":\"var-lib-containers.mount\"}]}}",
"[...] spec: clusters: - nodes: - hostName: <name> ignitionConfigOverride: '{\"ignition\":{\"version\":\"3.2.0\"},\"storage\":{\"disks\":[{\"device\":\"/dev/disk/by-path/pci-0000:00:17.0-ata-1.0\",\"partitions\":[{\"label\":\"var-lib-containers\",\"sizeMiB\":0,\"startMiB\":250000}],\"wipeTable\":false}],\"filesystems\":[{\"device\":\"/dev/disk/by-partlabel/var-lib-containers\",\"format\":\"xfs\",\"mountOptions\":[\"defaults\",\"prjquota\"],\"path\":\"/var/lib/containers\",\"wipeFilesystem\":true}]},\"systemd\":{\"units\":[{\"contents\":\"# Generated by Butane\\n[Unit]\\nRequires=systemd-fsck@dev-disk-by\\\\x2dpartlabel-var\\\\x2dlib\\\\x2dcontainers.service\\nAfter=systemd-fsck@dev-disk-by\\\\x2dpartlabel-var\\\\x2dlib\\\\x2dcontainers.service\\n\\n[Mount]\\nWhere=/var/lib/containers\\nWhat=/dev/disk/by-partlabel/var-lib-containers\\nType=xfs\\nOptions=defaults,prjquota\\n\\n[Install]\\nRequiredBy=local-fs.target\",\"enabled\":true,\"name\":\"var-lib-containers.mount\"}]}}' [...]",
"oc get bmh -n my-sno-ns my-sno -ojson | jq '.metadata.annotations[\"bmac.agent-install.openshift.io/ignition-config-overrides\"]'",
"\"{\\\"ignition\\\":{\\\"version\\\":\\\"3.2.0\\\"},\\\"storage\\\":{\\\"disks\\\":[{\\\"device\\\":\\\"/dev/disk/by-path/pci-0000:00:17.0-ata-1.0\\\",\\\"partitions\\\":[{\\\"label\\\":\\\"var-lib-containers\\\",\\\"sizeMiB\\\":0,\\\"startMiB\\\":250000}],\\\"wipeTable\\\":false}],\\\"filesystems\\\":[{\\\"device\\\":\\\"/dev/disk/by-partlabel/var-lib-containers\\\",\\\"format\\\":\\\"xfs\\\",\\\"mountOptions\\\":[\\\"defaults\\\",\\\"prjquota\\\"],\\\"path\\\":\\\"/var/lib/containers\\\",\\\"wipeFilesystem\\\":true}]},\\\"systemd\\\":{\\\"units\\\":[{\\\"contents\\\":\\\"# Generated by Butane\\\\n[Unit]\\\\nRequires=systemd-fsck@dev-disk-by\\\\\\\\x2dpartlabel-var\\\\\\\\x2dlib\\\\\\\\x2dcontainers.service\\\\nAfter=systemd-fsck@dev-disk-by\\\\\\\\x2dpartlabel-var\\\\\\\\x2dlib\\\\\\\\x2dcontainers.service\\\\n\\\\n[Mount]\\\\nWhere=/var/lib/containers\\\\nWhat=/dev/disk/by-partlabel/var-lib-containers\\\\nType=xfs\\\\nOptions=defaults,prjquota\\\\n\\\\n[Install]\\\\nRequiredBy=local-fs.target\\\",\\\"enabled\\\":true,\\\"name\\\":\\\"var-lib-containers.mount\\\"}]}}\"",
"lsblk",
"NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINTS sda 8:0 0 446.6G 0 disk ├─sda1 8:1 0 1M 0 part ├─sda2 8:2 0 127M 0 part ├─sda3 8:3 0 384M 0 part /boot ├─sda4 8:4 0 243.6G 0 part /var │ /sysroot/ostree/deploy/rhcos/var │ /usr │ /etc │ / │ /sysroot └─sda5 8:5 0 202.5G 0 part /var/lib/containers",
"df -h",
"Filesystem Size Used Avail Use% Mounted on devtmpfs 4.0M 0 4.0M 0% /dev tmpfs 126G 84K 126G 1% /dev/shm tmpfs 51G 93M 51G 1% /run /dev/sda4 244G 5.2G 239G 3% /sysroot tmpfs 126G 4.0K 126G 1% /tmp /dev/sda5 203G 119G 85G 59% /var/lib/containers /dev/sda3 350M 110M 218M 34% /boot tmpfs 26G 0 26G 0% /run/user/1000",
"apiVersion: v1 kind: Namespace metadata: name: openshift-lifecycle-agent annotations: workload.openshift.io/allowed: management",
"oc create -f <namespace_filename>.yaml",
"apiVersion: operators.coreos.com/v1 kind: OperatorGroup metadata: name: openshift-lifecycle-agent namespace: openshift-lifecycle-agent spec: targetNamespaces: - openshift-lifecycle-agent",
"oc create -f <operatorgroup_filename>.yaml",
"apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: openshift-lifecycle-agent-subscription namespace: openshift-lifecycle-agent spec: channel: \"stable\" name: lifecycle-agent source: redhat-operators sourceNamespace: openshift-marketplace",
"oc create -f <subscription_filename>.yaml",
"oc get csv -n openshift-lifecycle-agent",
"NAME DISPLAY VERSION REPLACES PHASE lifecycle-agent.v4.17.0 Openshift Lifecycle Agent 4.17.0 Succeeded",
"oc get deploy -n openshift-lifecycle-agent",
"NAME READY UP-TO-DATE AVAILABLE AGE lifecycle-agent-controller-manager 1/1 1 1 14s",
"apiVersion: v1 kind: Namespace metadata: name: openshift-lifecycle-agent annotations: workload.openshift.io/allowed: management ran.openshift.io/ztp-deploy-wave: \"2\" labels: kubernetes.io/metadata.name: openshift-lifecycle-agent",
"apiVersion: operators.coreos.com/v1 kind: OperatorGroup metadata: name: lifecycle-agent-operatorgroup namespace: openshift-lifecycle-agent annotations: ran.openshift.io/ztp-deploy-wave: \"2\" spec: targetNamespaces: - openshift-lifecycle-agent",
"apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: lifecycle-agent namespace: openshift-lifecycle-agent annotations: ran.openshift.io/ztp-deploy-wave: \"2\" spec: channel: \"stable\" name: lifecycle-agent source: redhat-operators sourceNamespace: openshift-marketplace installPlanApproval: Manual status: state: AtLatestKnown",
"├── kustomization.yaml ├── sno │ ├── example-cnf.yaml │ ├── common-ranGen.yaml │ ├── group-du-sno-ranGen.yaml │ ├── group-du-sno-validator-ranGen.yaml │ └── ns.yaml ├── source-crs │ ├── LcaSubscriptionNS.yaml │ ├── LcaSubscriptionOperGroup.yaml │ ├── LcaSubscription.yaml",
"apiVersion: ran.openshift.io/v1 kind: PolicyGenTemplate metadata: name: \"example-common-latest\" namespace: \"ztp-common\" spec: bindingRules: common: \"true\" du-profile: \"latest\" sourceFiles: - fileName: LcaSubscriptionNS.yaml policyName: \"subscriptions-policy\" - fileName: LcaSubscriptionOperGroup.yaml policyName: \"subscriptions-policy\" - fileName: LcaSubscription.yaml policyName: \"subscriptions-policy\" [...]",
"apiVersion: v1 kind: Namespace metadata: name: openshift-adp annotations: ran.openshift.io/ztp-deploy-wave: \"2\" labels: kubernetes.io/metadata.name: openshift-adp",
"apiVersion: operators.coreos.com/v1 kind: OperatorGroup metadata: name: redhat-oadp-operator namespace: openshift-adp annotations: ran.openshift.io/ztp-deploy-wave: \"2\" spec: targetNamespaces: - openshift-adp",
"apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: redhat-oadp-operator namespace: openshift-adp annotations: ran.openshift.io/ztp-deploy-wave: \"2\" spec: channel: stable-1.4 name: redhat-oadp-operator source: redhat-operators sourceNamespace: openshift-marketplace installPlanApproval: Manual status: state: AtLatestKnown",
"apiVersion: operators.coreos.com/v1 kind: Operator metadata: name: redhat-oadp-operator.openshift-adp annotations: ran.openshift.io/ztp-deploy-wave: \"2\" status: components: refs: - kind: Subscription namespace: openshift-adp conditions: - type: CatalogSourcesUnhealthy status: \"False\" - kind: InstallPlan namespace: openshift-adp conditions: - type: Installed status: \"True\" - kind: ClusterServiceVersion namespace: openshift-adp conditions: - type: Succeeded status: \"True\" reason: InstallSucceeded",
"├── kustomization.yaml ├── sno │ ├── example-cnf.yaml │ ├── common-ranGen.yaml │ ├── group-du-sno-ranGen.yaml │ ├── group-du-sno-validator-ranGen.yaml │ └── ns.yaml ├── source-crs │ ├── OadpSubscriptionNS.yaml │ ├── OadpSubscriptionOperGroup.yaml │ ├── OadpSubscription.yaml │ ├── OadpOperatorStatus.yaml",
"apiVersion: ran.openshift.io/v1 kind: PolicyGenTemplate metadata: name: \"example-common-latest\" namespace: \"ztp-common\" spec: bindingRules: common: \"true\" du-profile: \"latest\" sourceFiles: - fileName: OadpSubscriptionNS.yaml policyName: \"subscriptions-policy\" - fileName: OadpSubscriptionOperGroup.yaml policyName: \"subscriptions-policy\" - fileName: OadpSubscription.yaml policyName: \"subscriptions-policy\" - fileName: OadpOperatorStatus.yaml policyName: \"subscriptions-policy\" [...]",
"apiVersion: oadp.openshift.io/v1alpha1 kind: DataProtectionApplication metadata: name: dataprotectionapplication namespace: openshift-adp annotations: ran.openshift.io/ztp-deploy-wave: \"100\" spec: configuration: restic: enable: false 1 velero: defaultPlugins: - aws - openshift resourceTimeout: 10m backupLocations: - velero: config: profile: \"default\" region: minio s3Url: USDurl insecureSkipTLSVerify: \"true\" s3ForcePathStyle: \"true\" provider: aws default: true credential: key: cloud name: cloud-credentials objectStorage: bucket: USDbucketName 2 prefix: USDprefixName 3 status: conditions: - reason: Complete status: \"True\" type: Reconciled",
"apiVersion: v1 kind: Secret metadata: name: cloud-credentials namespace: openshift-adp annotations: ran.openshift.io/ztp-deploy-wave: \"100\" type: Opaque",
"apiVersion: velero.io/v1 kind: BackupStorageLocation metadata: namespace: openshift-adp annotations: ran.openshift.io/ztp-deploy-wave: \"100\" status: phase: Available",
"apiVersion: ran.openshift.io/v1 kind: PolicyGenTemplate metadata: name: \"example-cnf\" namespace: \"ztp-site\" spec: bindingRules: sites: \"example-cnf\" du-profile: \"latest\" mcp: \"master\" sourceFiles: - fileName: OadpSecret.yaml policyName: \"config-policy\" data: cloud: <your_credentials> 1 - fileName: DataProtectionApplication.yaml policyName: \"config-policy\" spec: backupLocations: - velero: config: region: minio s3Url: <your_S3_URL> 2 profile: \"default\" insecureSkipTLSVerify: \"true\" s3ForcePathStyle: \"true\" provider: aws default: true credential: key: cloud name: cloud-credentials objectStorage: bucket: <your_bucket_name> 3 prefix: <cluster_name> 4 - fileName: OadpBackupStorageLocationStatus.yaml policyName: \"config-policy\"",
"oc delete managedcluster sno-worker-example",
"apiVersion: kustomize.config.k8s.io/v1beta1 kind: Kustomization generators: #- example-seed-sno1.yaml - example-target-sno2.yaml - example-target-sno3.yaml",
"apiVersion: kustomize.config.k8s.io/v1beta1 kind: Kustomization generators: {}",
"MY_USER=myuserid AUTHFILE=/tmp/my-auth.json podman login --authfile USD{AUTHFILE} -u USD{MY_USER} quay.io/USD{MY_USER}",
"base64 -w 0 USD{AUTHFILE} ; echo",
"apiVersion: v1 kind: Secret metadata: name: seedgen 1 namespace: openshift-lifecycle-agent type: Opaque data: seedAuth: <encoded_AUTHFILE> 2",
"oc apply -f secretseedgenerator.yaml",
"apiVersion: lca.openshift.io/v1 kind: SeedGenerator metadata: name: seedimage 1 spec: seedImage: <seed_container_image> 2",
"oc apply -f seedgenerator.yaml",
"oc get seedgenerator -o yaml",
"status: conditions: - lastTransitionTime: \"2024-02-13T21:24:26Z\" message: Seed Generation completed observedGeneration: 1 reason: Completed status: \"False\" type: SeedGenInProgress - lastTransitionTime: \"2024-02-13T21:24:26Z\" message: Seed Generation completed observedGeneration: 1 reason: Completed status: \"True\" type: SeedGenCompleted 1 observedGeneration: 1",
"apiVersion: velero.io/v1 kind: Backup metadata: name: acm-klusterlet annotations: lca.openshift.io/apply-label: \"apps/v1/deployments/open-cluster-management-agent/klusterlet,v1/secrets/open-cluster-management-agent/bootstrap-hub-kubeconfig,rbac.authorization.k8s.io/v1/clusterroles/klusterlet,v1/serviceaccounts/open-cluster-management-agent/klusterlet,scheduling.k8s.io/v1/priorityclasses/klusterlet-critical,rbac.authorization.k8s.io/v1/clusterroles/open-cluster-management:klusterlet-admin-aggregate-clusterrole,rbac.authorization.k8s.io/v1/clusterrolebindings/klusterlet,operator.open-cluster-management.io/v1/klusterlets/klusterlet,apiextensions.k8s.io/v1/customresourcedefinitions/klusterlets.operator.open-cluster-management.io,v1/secrets/open-cluster-management-agent/open-cluster-management-image-pull-credentials\" 1 labels: velero.io/storage-location: default namespace: openshift-adp spec: includedNamespaces: - open-cluster-management-agent includedClusterScopedResources: - klusterlets.operator.open-cluster-management.io - clusterroles.rbac.authorization.k8s.io - clusterrolebindings.rbac.authorization.k8s.io - priorityclasses.scheduling.k8s.io includedNamespaceScopedResources: - deployments - serviceaccounts - secrets excludedNamespaceScopedResources: [] --- apiVersion: velero.io/v1 kind: Restore metadata: name: acm-klusterlet namespace: openshift-adp labels: velero.io/storage-location: default annotations: lca.openshift.io/apply-wave: \"1\" spec: backupName: acm-klusterlet",
"apiVersion: velero.io/v1 kind: Backup metadata: labels: velero.io/storage-location: default name: lvmcluster namespace: openshift-adp spec: includedNamespaces: - openshift-storage includedNamespaceScopedResources: - lvmclusters - lvmvolumegroups - lvmvolumegroupnodestatuses --- apiVersion: velero.io/v1 kind: Restore metadata: name: lvmcluster namespace: openshift-adp labels: velero.io/storage-location: default annotations: lca.openshift.io/apply-wave: \"2\" 1 spec: backupName: lvmcluster",
"apiVersion: velero.io/v1 kind: Backup metadata: annotations: lca.openshift.io/apply-label: \"apiextensions.k8s.io/v1/customresourcedefinitions/test.example.com,security.openshift.io/v1/securitycontextconstraints/test,rbac.authorization.k8s.io/v1/clusterroles/test-role,rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:scc:test\" 1 name: backup-app-cluster-resources labels: velero.io/storage-location: default namespace: openshift-adp spec: includedClusterScopedResources: - customresourcedefinitions - securitycontextconstraints - clusterrolebindings - clusterroles excludedClusterScopedResources: - Namespace --- apiVersion: velero.io/v1 kind: Restore metadata: name: test-app-cluster-resources namespace: openshift-adp labels: velero.io/storage-location: default annotations: lca.openshift.io/apply-wave: \"3\" 2 spec: backupName: backup-app-cluster-resources",
"apiVersion: velero.io/v1 kind: Backup metadata: labels: velero.io/storage-location: default name: backup-app namespace: openshift-adp spec: includedNamespaces: - test includedNamespaceScopedResources: - secrets - persistentvolumeclaims - deployments - statefulsets - configmaps - cronjobs - services - job - poddisruptionbudgets - <application_custom_resources> 1 excludedClusterScopedResources: - persistentVolumes --- apiVersion: velero.io/v1 kind: Restore metadata: name: test-app namespace: openshift-adp labels: velero.io/storage-location: default annotations: lca.openshift.io/apply-wave: \"4\" spec: backupName: backup-app",
"apiVersion: velero.io/v1 kind: Backup metadata: labels: velero.io/storage-location: default name: backup-app namespace: openshift-adp spec: includedNamespaces: - test includedNamespaceScopedResources: - secrets - persistentvolumeclaims - deployments - statefulsets - configmaps - cronjobs - services - job - poddisruptionbudgets - <application_custom_resources> 1 includedClusterScopedResources: - persistentVolumes 2 - logicalvolumes.topolvm.io 3 - volumesnapshotcontents 4 --- apiVersion: velero.io/v1 kind: Restore metadata: name: test-app namespace: openshift-adp labels: velero.io/storage-location: default annotations: lca.openshift.io/apply-wave: \"4\" spec: backupName: backup-app restorePVs: true restoreStatus: includedResources: - logicalvolumes 5",
"oc create configmap oadp-cm-example --from-file=example-oadp-resources.yaml=<path_to_oadp_crs> -n openshift-adp",
"oc patch imagebasedupgrades.lca.openshift.io upgrade -p='{\"spec\": {\"oadpContent\": [{\"name\": \"oadp-cm-example\", \"namespace\": \"openshift-adp\"}]}}' --type=merge -n openshift-lifecycle-agent",
"apiVersion: sriovnetwork.openshift.io/v1 kind: SriovNetworkNodePolicy metadata: name: \"example-sriov-node-policy\" namespace: openshift-sriov-network-operator spec: deviceType: vfio-pci isRdma: false nicSelector: pfNames: [ens1f0] nodeSelector: node-role.kubernetes.io/master: \"\" mtu: 1500 numVfs: 8 priority: 99 resourceName: example-sriov-node-policy --- apiVersion: sriovnetwork.openshift.io/v1 kind: SriovNetwork metadata: name: \"example-sriov-network\" namespace: openshift-sriov-network-operator spec: ipam: |- { } linkState: auto networkNamespace: sriov-namespace resourceName: example-sriov-node-policy spoofChk: \"on\" trust: \"off\"",
"oc create configmap example-extra-manifests-cm --from-file=example-extra-manifests.yaml=<path_to_extramanifest> -n openshift-lifecycle-agent",
"oc patch imagebasedupgrades.lca.openshift.io upgrade -p='{\"spec\": {\"extraManifests\": [{\"name\": \"example-extra-manifests-cm\", \"namespace\": \"openshift-lifecycle-agent\"}]}}' --type=merge -n openshift-lifecycle-agent",
"apiVersion: operators.coreos.com/v1 kind: CatalogSource metadata: name: example-catalogsources namespace: openshift-marketplace spec: sourceType: grpc displayName: disconnected-redhat-operators image: quay.io/example-org/example-catalog:v1",
"oc create configmap example-catalogsources-cm --from-file=example-catalogsources.yaml=<path_to_catalogsource_cr> -n openshift-lifecycle-agent",
"oc patch imagebasedupgrades.lca.openshift.io upgrade -p='{\"spec\": {\"extraManifests\": [{\"name\": \"example-catalogsources-cm\", \"namespace\": \"openshift-lifecycle-agent\"}]}}' --type=merge -n openshift-lifecycle-agent",
"├── source-crs/ │ ├── ibu/ │ │ ├── ImageBasedUpgrade.yaml │ │ ├── PlatformBackupRestore.yaml │ │ ├── PlatformBackupRestoreLvms.yaml │ │ ├── PlatformBackupRestoreWithIBGU.yaml ├── ├── kustomization.yaml",
"apiVersion: velero.io/v1 kind: Backup metadata: name: acm-klusterlet annotations: lca.openshift.io/apply-label: \"apps/v1/deployments/open-cluster-management-agent/klusterlet,v1/secrets/open-cluster-management-agent/bootstrap-hub-kubeconfig,rbac.authorization.k8s.io/v1/clusterroles/klusterlet,v1/serviceaccounts/open-cluster-management-agent/klusterlet,scheduling.k8s.io/v1/priorityclasses/klusterlet-critical,rbac.authorization.k8s.io/v1/clusterroles/open-cluster-management:klusterlet-work:ibu-role,rbac.authorization.k8s.io/v1/clusterroles/open-cluster-management:klusterlet-admin-aggregate-clusterrole,rbac.authorization.k8s.io/v1/clusterrolebindings/klusterlet,operator.open-cluster-management.io/v1/klusterlets/klusterlet,apiextensions.k8s.io/v1/customresourcedefinitions/klusterlets.operator.open-cluster-management.io,v1/secrets/open-cluster-management-agent/open-cluster-management-image-pull-credentials\" 1 labels: velero.io/storage-location: default namespace: openshift-adp spec: includedNamespaces: - open-cluster-management-agent includedClusterScopedResources: - klusterlets.operator.open-cluster-management.io - clusterroles.rbac.authorization.k8s.io - clusterrolebindings.rbac.authorization.k8s.io - priorityclasses.scheduling.k8s.io includedNamespaceScopedResources: - deployments - serviceaccounts - secrets excludedNamespaceScopedResources: [] --- apiVersion: velero.io/v1 kind: Restore metadata: name: acm-klusterlet namespace: openshift-adp labels: velero.io/storage-location: default annotations: lca.openshift.io/apply-wave: \"1\" spec: backupName: acm-klusterlet",
"apiVersion: velero.io/v1 kind: Backup metadata: labels: velero.io/storage-location: default name: lvmcluster namespace: openshift-adp spec: includedNamespaces: - openshift-storage includedNamespaceScopedResources: - lvmclusters - lvmvolumegroups - lvmvolumegroupnodestatuses --- apiVersion: velero.io/v1 kind: Restore metadata: name: lvmcluster namespace: openshift-adp labels: velero.io/storage-location: default annotations: lca.openshift.io/apply-wave: \"2\" 1 spec: backupName: lvmcluster",
"apiVersion: velero.io/v1 kind: Backup metadata: annotations: lca.openshift.io/apply-label: \"apiextensions.k8s.io/v1/customresourcedefinitions/test.example.com,security.openshift.io/v1/securitycontextconstraints/test,rbac.authorization.k8s.io/v1/clusterroles/test-role,rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:scc:test\" 1 name: backup-app-cluster-resources labels: velero.io/storage-location: default namespace: openshift-adp spec: includedClusterScopedResources: - customresourcedefinitions - securitycontextconstraints - clusterrolebindings - clusterroles excludedClusterScopedResources: - Namespace --- apiVersion: velero.io/v1 kind: Restore metadata: name: test-app-cluster-resources namespace: openshift-adp labels: velero.io/storage-location: default annotations: lca.openshift.io/apply-wave: \"3\" 2 spec: backupName: backup-app-cluster-resources",
"apiVersion: velero.io/v1 kind: Backup metadata: labels: velero.io/storage-location: default name: backup-app namespace: openshift-adp spec: includedNamespaces: - test includedNamespaceScopedResources: - secrets - persistentvolumeclaims - deployments - statefulsets - configmaps - cronjobs - services - job - poddisruptionbudgets - <application_custom_resources> 1 excludedClusterScopedResources: - persistentVolumes --- apiVersion: velero.io/v1 kind: Restore metadata: name: test-app namespace: openshift-adp labels: velero.io/storage-location: default annotations: lca.openshift.io/apply-wave: \"4\" spec: backupName: backup-app",
"apiVersion: velero.io/v1 kind: Backup metadata: labels: velero.io/storage-location: default name: backup-app namespace: openshift-adp spec: includedNamespaces: - test includedNamespaceScopedResources: - secrets - persistentvolumeclaims - deployments - statefulsets - configmaps - cronjobs - services - job - poddisruptionbudgets - <application_custom_resources> 1 includedClusterScopedResources: - persistentVolumes 2 - logicalvolumes.topolvm.io 3 - volumesnapshotcontents 4 --- apiVersion: velero.io/v1 kind: Restore metadata: name: test-app namespace: openshift-adp labels: velero.io/storage-location: default annotations: lca.openshift.io/apply-wave: \"4\" spec: backupName: backup-app restorePVs: true restoreStatus: includedResources: - logicalvolumes 5",
"apiVersion: kustomize.config.k8s.io/v1beta1 kind: Kustomization configMapGenerator: 1 - files: - source-crs/ibu/PlatformBackupRestoreWithIBGU.yaml #- source-crs/custom-crs/ApplicationClusterScopedBackupRestore.yaml #- source-crs/custom-crs/ApplicationApplicationBackupRestoreLso.yaml name: oadp-cm namespace: openshift-adp 2 generatorOptions: disableNameSuffixHash: true",
"apiVersion: ran.openshift.io/v1 kind: PolicyGenTemplate metadata: name: example-sno spec: bindingRules: sites: \"example-sno\" du-profile: \"4.15\" mcp: \"master\" sourceFiles: - fileName: SriovNetwork.yaml policyName: \"config-policy\" metadata: name: \"sriov-nw-du-fh\" labels: lca.openshift.io/target-ocp-version: \"4.15\" 1 spec: resourceName: du_fh vlan: 140 - fileName: SriovNetworkNodePolicy.yaml policyName: \"config-policy\" metadata: name: \"sriov-nnp-du-fh\" labels: lca.openshift.io/target-ocp-version: \"4.15\" spec: deviceType: netdevice isRdma: false nicSelector: pfNames: [\"ens5f0\"] numVfs: 8 priority: 10 resourceName: du_fh - fileName: SriovNetwork.yaml policyName: \"config-policy\" metadata: name: \"sriov-nw-du-mh\" labels: lca.openshift.io/target-ocp-version: \"4.15\" spec: resourceName: du_mh vlan: 150 - fileName: SriovNetworkNodePolicy.yaml policyName: \"config-policy\" metadata: name: \"sriov-nnp-du-mh\" labels: lca.openshift.io/target-ocp-version: \"4.15\" spec: deviceType: vfio-pci isRdma: false nicSelector: pfNames: [\"ens7f0\"] numVfs: 8 priority: 10 resourceName: du_mh - fileName: DefaultCatsrc.yaml 2 policyName: \"config-policy\" metadata: name: default-cat-source namespace: openshift-marketplace labels: lca.openshift.io/target-ocp-version: \"4.15\" spec: displayName: default-cat-source image: quay.io/example-org/example-catalog:v1",
"oc -n openshift-lifecycle-agent annotate ibu upgrade image-cleanup.lca.openshift.io/disk-usage-threshold-percent='65'",
"oc -n openshift-lifecycle-agent annotate ibu upgrade image-cleanup.lca.openshift.io/disk-usage-threshold-percent-",
"oc -n openshift-lifecycle-agent annotate ibu upgrade image-cleanup.lca.openshift.io/on-prep='Disabled'",
"oc -n openshift-lifecycle-agent annotate ibu upgrade image-cleanup.lca.openshift.io/on-prep-",
"skopeo inspect docker://<imagename> | jq -r '.Labels.\"com.openshift.lifecycle-agent.seed_cluster_info\" | fromjson | .release_registry'",
"apiVersion: lca.openshift.io/v1 kind: ImageBasedUpgrade metadata: name: upgrade spec: stage: Idle seedImageRef: version: 4.15.2 1 image: <seed_container_image> 2 pullSecretRef: <seed_pull_secret> 3 autoRollbackOnFailure: {} initMonitorTimeoutSeconds: 1800 4 extraManifests: 5 - name: example-extra-manifests-cm namespace: openshift-lifecycle-agent - name: example-catalogsources-cm namespace: openshift-lifecycle-agent oadpContent: 6 - name: oadp-cm-example namespace: openshift-adp",
"oc patch imagebasedupgrades.lca.openshift.io upgrade -p='{\"spec\": {\"stage\": \"Prep\"}}' --type=merge -n openshift-lifecycle-agent",
"metadata: annotations: extra-manifest.lca.openshift.io/validation-warning: '...'",
"oc get ibu -o yaml",
"conditions: - lastTransitionTime: \"2024-01-01T09:00:00Z\" message: In progress observedGeneration: 13 reason: InProgress status: \"False\" type: Idle - lastTransitionTime: \"2024-01-01T09:00:00Z\" message: Prep completed observedGeneration: 13 reason: Completed status: \"False\" type: PrepInProgress - lastTransitionTime: \"2024-01-01T09:00:00Z\" message: Prep stage completed successfully observedGeneration: 13 reason: Completed status: \"True\" type: PrepCompleted observedGeneration: 13 validNextStages: - Idle - Upgrade",
"oc patch imagebasedupgrades.lca.openshift.io upgrade -p='{\"spec\": {\"stage\": \"Upgrade\"}}' --type=merge",
"oc get ibu -o yaml",
"status: conditions: - lastTransitionTime: \"2024-01-01T09:00:00Z\" message: In progress observedGeneration: 5 reason: InProgress status: \"False\" type: Idle - lastTransitionTime: \"2024-01-01T09:00:00Z\" message: Prep completed observedGeneration: 5 reason: Completed status: \"False\" type: PrepInProgress - lastTransitionTime: \"2024-01-01T09:00:00Z\" message: Prep completed successfully observedGeneration: 5 reason: Completed status: \"True\" type: PrepCompleted - lastTransitionTime: \"2024-01-01T09:00:00Z\" message: |- Waiting for system to stabilize: one or more health checks failed - one or more ClusterOperators not yet ready: authentication - one or more MachineConfigPools not yet ready: master - one or more ClusterServiceVersions not yet ready: sriov-fec.v2.8.0 observedGeneration: 1 reason: InProgress status: \"True\" type: UpgradeInProgress observedGeneration: 1 rollbackAvailabilityExpiration: \"2024-05-19T14:01:52Z\" validNextStages: - Rollback",
"oc get ibu -o yaml",
"oc patch imagebasedupgrades.lca.openshift.io upgrade -p='{\"spec\": {\"stage\": \"Idle\"}}' --type=merge",
"oc get ibu -o yaml",
"status: conditions: - lastTransitionTime: \"2024-01-01T09:00:00Z\" message: In progress observedGeneration: 5 reason: InProgress status: \"False\" type: Idle - lastTransitionTime: \"2024-01-01T09:00:00Z\" message: Prep completed observedGeneration: 5 reason: Completed status: \"False\" type: PrepInProgress - lastTransitionTime: \"2024-01-01T09:00:00Z\" message: Prep completed successfully observedGeneration: 5 reason: Completed status: \"True\" type: PrepCompleted - lastTransitionTime: \"2024-01-01T09:00:00Z\" message: Upgrade completed observedGeneration: 1 reason: Completed status: \"False\" type: UpgradeInProgress - lastTransitionTime: \"2024-01-01T09:00:00Z\" message: Upgrade completed observedGeneration: 1 reason: Completed status: \"True\" type: UpgradeCompleted observedGeneration: 1 rollbackAvailabilityExpiration: \"2024-01-01T09:00:00Z\" validNextStages: - Idle - Rollback",
"oc get restores -n openshift-adp -o custom-columns=NAME:.metadata.name,Status:.status.phase,Reason:.status.failureReason",
"NAME Status Reason acm-klusterlet Completed <none> 1 apache-app Completed <none> localvolume Completed <none>",
"apiVersion: lca.openshift.io/v1 kind: ImageBasedUpgrade metadata: name: upgrade spec: stage: Idle seedImageRef: version: 4.15.2 image: <seed_container_image> autoRollbackOnFailure: {} initMonitorTimeoutSeconds: 1800 1",
"oc patch imagebasedupgrades.lca.openshift.io upgrade -p='{\"spec\": {\"stage\": \"Rollback\"}}' --type=merge",
"oc patch imagebasedupgrades.lca.openshift.io upgrade -p='{\"spec\": {\"stage\": \"Idle\"}}' --type=merge -n openshift-lifecycle-agent",
"oc adm must-gather --dest-dir=must-gather/tmp --image=USD(oc -n openshift-lifecycle-agent get deployment.apps/lifecycle-agent-controller-manager -o jsonpath='{.spec.template.spec.containers[?(@.name == \"manager\")].image}') --image=quay.io/konveyor/oadp-must-gather:latest \\// 1 --image=quay.io/openshift/origin-must-gather:latest 2",
"message: failed to delete all the backup CRs. Perform cleanup manually then add 'lca.openshift.io/manual-cleanup-done' annotation to ibu CR to transition back to Idle observedGeneration: 5 reason: AbortFailed status: \"False\" type: Idle",
"ostree admin status",
"ostree admin undeploy <index_of_deployment>",
"stateroot=\"<stateroot_to_delete>\"",
"unshare -m /bin/sh -c \"mount -o remount,rw /sysroot && rm -rf /sysroot/ostree/deploy/USD{stateroot}\"",
"oc get backupstoragelocations.velero.io -n openshift-adp",
"NAME PHASE LAST VALIDATED AGE DEFAULT dataprotectionapplication-1 Available 33s 8d true",
"oc describe pod <your_app_name>",
"Events: Type Reason Age From Message ---- ------ ---- ---- ------- Warning FailedScheduling 58s (x2 over 66s) default-scheduler 0/1 nodes are available: pod has unbound immediate PersistentVolumeClaims. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.. Normal Scheduled 56s default-scheduler Successfully assigned default/db-1234 to sno1.example.lab Warning FailedMount 24s (x7 over 55s) kubelet MountVolume.SetUp failed for volume \"pvc-1234\" : rpc error: code = Unknown desc = VolumeID is not found",
"apiVersion: velero.io/v1 kind: Backup metadata: labels: velero.io/storage-location: default name: small-app namespace: openshift-adp spec: includedNamespaces: - test includedNamespaceScopedResources: - secrets - persistentvolumeclaims - deployments - statefulsets includedClusterScopedResources: 1 - persistentVolumes - volumesnapshotcontents - logicalvolumes.topolvm.io",
"oc get pv,pvc,logicalvolumes.topolvm.io -A",
"NAME CAPACITY ACCESS MODES RECLAIM POLICY STATUS CLAIM STORAGECLASS REASON AGE persistentvolume/pvc-1234 1Gi RWO Retain Bound default/pvc-db lvms-vg1 4h45m NAMESPACE NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE default persistentvolumeclaim/pvc-db Bound pvc-1234 1Gi RWO lvms-vg1 4h45m NAMESPACE NAME AGE logicalvolume.topolvm.io/pvc-1234 4h45m",
"oc get pv,pvc,logicalvolumes.topolvm.io -A",
"NAME CAPACITY ACCESS MODES RECLAIM POLICY STATUS CLAIM STORAGECLASS REASON AGE persistentvolume/pvc-1234 1Gi RWO Delete Bound default/pvc-db lvms-vg1 19s NAMESPACE NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE default persistentvolumeclaim/pvc-db Bound pvc-1234 1Gi RWO lvms-vg1 19s NAMESPACE NAME AGE logicalvolume.topolvm.io/pvc-1234 18s",
"apiVersion: velero.io/v1 kind: Restore metadata: name: sample-vote-app namespace: openshift-adp labels: velero.io/storage-location: default annotations: lca.openshift.io/apply-wave: \"3\" spec: backupName: sample-vote-app restorePVs: true 1 restoreStatus: 2 includedResources: - logicalvolumes",
"oc exec -n openshift-adp velero-7c87d58c7b-sw6fc -c velero -- ./velero describe backup -n openshift-adp backup-acm-klusterlet --details",
"oc exec -n openshift-adp velero-7c87d58c7b-sw6fc -c velero -- ./velero describe restore -n openshift-adp restore-acm-klusterlet --details",
"oc exec -n openshift-adp velero-7c87d58c7b-sw6fc -c velero -- ./velero backup download -n openshift-adp backup-acm-klusterlet -o ~/backup-acm-klusterlet.tar.gz",
"apiVersion: lcm.openshift.io/v1alpha1 kind: ImageBasedGroupUpgrade metadata: name: <filename> namespace: default spec: clusterLabelSelectors: 1 - matchExpressions: - key: name operator: In values: - spoke1 - spoke4 - spoke6 ibuSpec: seedImageRef: 2 image: quay.io/seed/image:4.17.0-rc.1 version: 4.17.0-rc.1 pullSecretRef: name: \"<seed_pull_secret>\" extraManifests: 3 - name: example-extra-manifests namespace: openshift-lifecycle-agent oadpContent: 4 - name: oadp-cm namespace: openshift-adp plan: 5 - actions: [\"Prep\", \"Upgrade\", \"FinalizeUpgrade\"] rolloutStrategy: maxConcurrency: 200 6 timeout: 2400 7",
"plan: - actions: [\"Prep\", \"Upgrade\", \"FinalizeUpgrade\"] rolloutStrategy: maxConcurrency: 200 timeout: 60",
"plan: - actions: [\"Prep\", \"Upgrade\"] rolloutStrategy: maxConcurrency: 200 timeout: 60 - actions: [\"FinalizeUpgrade\"] rolloutStrategy: maxConcurrency: 500 timeout: 10",
"plan: - actions: [\"Prep\"] rolloutStrategy: maxConcurrency: 200 timeout: 60 - actions: [\"Upgrade\"] rolloutStrategy: maxConcurrency: 200 timeout: 20 - actions: [\"FinalizeUpgrade\"] rolloutStrategy: maxConcurrency: 500 timeout: 10",
"apiVersion: lcm.openshift.io/v1alpha1 kind: ImageBasedGroupUpgrade metadata: name: <filename> namespace: default spec: clusterLabelSelectors: 1 - matchExpressions: - key: name operator: In values: - spoke1 - spoke4 - spoke6 ibuSpec: seedImageRef: 2 image: quay.io/seed/image:4.16.0-rc.1 version: 4.16.0-rc.1 pullSecretRef: name: \"<seed_pull_secret>\" extraManifests: 3 - name: example-extra-manifests namespace: openshift-lifecycle-agent oadpContent: 4 - name: oadp-cm namespace: openshift-adp plan: 5 - actions: [\"Prep\"] rolloutStrategy: maxConcurrency: 2 timeout: 2400",
"oc apply -f <filename>.yaml",
"oc get ibgu -o yaml",
"status: clusters: - completedActions: - action: Prep name: spoke1 - completedActions: - action: Prep name: spoke4 - failedActions: - action: Prep name: spoke6",
"oc patch ibgu <filename> --type=json -p '[{\"op\": \"add\", \"path\": \"/spec/plan/-\", \"value\": {\"actions\": [\"AbortOnFailure\"], \"rolloutStrategy\": {\"maxConcurrency\": 5, \"timeout\": 10}}}]'",
"oc get ibgu -o yaml",
"oc patch ibgu <filename> --type=json -p '[{\"op\": \"add\", \"path\": \"/spec/plan/-\", \"value\": {\"actions\": [\"Upgrade\"], \"rolloutStrategy\": {\"maxConcurrency\": 2, \"timeout\": 30}}}]'",
"oc patch ibgu <filename> --type=json -p '[{\"op\": \"add\", \"path\": \"/spec/plan/-\", \"value\": {\"actions\": [\"AbortOnFailure\"], \"rolloutStrategy\": {\"maxConcurrency\": 5, \"timeout\": 10}}}]'",
"oc get ibgu -o yaml",
"oc patch ibgu <filename> --type=json -p '[{\"op\": \"add\", \"path\": \"/spec/plan/-\", \"value\": {\"actions\": [\"FinalizeUpgrade\"], \"rolloutStrategy\": {\"maxConcurrency\": 10, \"timeout\": 3}}}]'",
"oc get ibgu -o yaml",
"status: clusters: - completedActions: - action: Prep - action: AbortOnFailure failedActions: - action: Upgrade name: spoke1 - completedActions: - action: Prep - action: Upgrade - action: FinalizeUpgrade name: spoke4 - completedActions: - action: AbortOnFailure failedActions: - action: Prep name: spoke6",
"apiVersion: lcm.openshift.io/v1alpha1 kind: ImageBasedGroupUpgrade metadata: name: <filename> namespace: default spec: clusterLabelSelectors: 1 - matchExpressions: - key: name operator: In values: - spoke1 - spoke4 - spoke6 ibuSpec: seedImageRef: 2 image: quay.io/seed/image:4.17.0-rc.1 version: 4.17.0-rc.1 pullSecretRef: name: \"<seed_pull_secret>\" extraManifests: 3 - name: example-extra-manifests namespace: openshift-lifecycle-agent oadpContent: 4 - name: oadp-cm namespace: openshift-adp plan: 5 - actions: [\"Prep\", \"Upgrade\", \"FinalizeUpgrade\"] rolloutStrategy: maxConcurrency: 200 6 timeout: 2400 7",
"oc apply -f <filename>.yaml",
"oc get ibgu -o yaml",
"status: clusters: - completedActions: - action: Prep failedActions: - action: Upgrade name: spoke1 - completedActions: - action: Prep - action: Upgrade - action: FinalizeUpgrade name: spoke4 - failedActions: - action: Prep name: spoke6",
"apiVersion: lcm.openshift.io/v1alpha1 kind: ImageBasedGroupUpgrade metadata: name: <filename> namespace: default spec: clusterLabelSelectors: - matchExpressions: - key: name operator: In values: - spoke4 ibuSpec: seedImageRef: image: quay.io/seed/image:4.16.0-rc.1 version: 4.16.0-rc.1 pullSecretRef: name: \"<seed_pull_secret>\" extraManifests: - name: example-extra-manifests namespace: openshift-lifecycle-agent oadpContent: - name: oadp-cm namespace: openshift-adp plan: - actions: [\"Abort\"] rolloutStrategy: maxConcurrency: 5 timeout: 10",
"oc apply -f <filename>.yaml",
"oc get ibgu -o yaml",
"status: clusters: - completedActions: - action: Prep currentActions: - action: Abort name: spoke4",
"apiVersion: lcm.openshift.io/v1alpha1 kind: ImageBasedGroupUpgrade metadata: name: <filename> namespace: default spec: clusterLabelSelectors: - matchExpressions: - key: name operator: In values: - spoke4 ibuSpec: seedImageRef: image: quay.io/seed/image:4.17.0-rc.1 version: 4.17.0-rc.1 pullSecretRef: name: \"<seed_pull_secret>\" extraManifests: - name: example-extra-manifests namespace: openshift-lifecycle-agent oadpContent: - name: oadp-cm namespace: openshift-adp plan: - actions: [\"Rollback\", \"FinalizeRollback\"] rolloutStrategy: maxConcurrency: 200 timeout: 2400",
"oc apply -f <filename>.yaml",
"oc get ibgu -o yaml",
"status: clusters: - completedActions: - action: Rollback - action: FinalizeRollback name: spoke4",
"oc adm must-gather --dest-dir=must-gather/tmp --image=USD(oc -n openshift-lifecycle-agent get deployment.apps/lifecycle-agent-controller-manager -o jsonpath='{.spec.template.spec.containers[?(@.name == \"manager\")].image}') --image=quay.io/konveyor/oadp-must-gather:latest \\// 1 --image=quay.io/openshift/origin-must-gather:latest 2",
"message: failed to delete all the backup CRs. Perform cleanup manually then add 'lca.openshift.io/manual-cleanup-done' annotation to ibu CR to transition back to Idle observedGeneration: 5 reason: AbortFailed status: \"False\" type: Idle",
"ostree admin status",
"ostree admin undeploy <index_of_deployment>",
"stateroot=\"<stateroot_to_delete>\"",
"unshare -m /bin/sh -c \"mount -o remount,rw /sysroot && rm -rf /sysroot/ostree/deploy/USD{stateroot}\"",
"oc get backupstoragelocations.velero.io -n openshift-adp",
"NAME PHASE LAST VALIDATED AGE DEFAULT dataprotectionapplication-1 Available 33s 8d true",
"oc describe pod <your_app_name>",
"Events: Type Reason Age From Message ---- ------ ---- ---- ------- Warning FailedScheduling 58s (x2 over 66s) default-scheduler 0/1 nodes are available: pod has unbound immediate PersistentVolumeClaims. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.. Normal Scheduled 56s default-scheduler Successfully assigned default/db-1234 to sno1.example.lab Warning FailedMount 24s (x7 over 55s) kubelet MountVolume.SetUp failed for volume \"pvc-1234\" : rpc error: code = Unknown desc = VolumeID is not found",
"apiVersion: velero.io/v1 kind: Backup metadata: labels: velero.io/storage-location: default name: small-app namespace: openshift-adp spec: includedNamespaces: - test includedNamespaceScopedResources: - secrets - persistentvolumeclaims - deployments - statefulsets includedClusterScopedResources: 1 - persistentVolumes - volumesnapshotcontents - logicalvolumes.topolvm.io",
"oc get pv,pvc,logicalvolumes.topolvm.io -A",
"NAME CAPACITY ACCESS MODES RECLAIM POLICY STATUS CLAIM STORAGECLASS REASON AGE persistentvolume/pvc-1234 1Gi RWO Retain Bound default/pvc-db lvms-vg1 4h45m NAMESPACE NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE default persistentvolumeclaim/pvc-db Bound pvc-1234 1Gi RWO lvms-vg1 4h45m NAMESPACE NAME AGE logicalvolume.topolvm.io/pvc-1234 4h45m",
"oc get pv,pvc,logicalvolumes.topolvm.io -A",
"NAME CAPACITY ACCESS MODES RECLAIM POLICY STATUS CLAIM STORAGECLASS REASON AGE persistentvolume/pvc-1234 1Gi RWO Delete Bound default/pvc-db lvms-vg1 19s NAMESPACE NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE default persistentvolumeclaim/pvc-db Bound pvc-1234 1Gi RWO lvms-vg1 19s NAMESPACE NAME AGE logicalvolume.topolvm.io/pvc-1234 18s",
"apiVersion: velero.io/v1 kind: Restore metadata: name: sample-vote-app namespace: openshift-adp labels: velero.io/storage-location: default annotations: lca.openshift.io/apply-wave: \"3\" spec: backupName: sample-vote-app restorePVs: true 1 restoreStatus: 2 includedResources: - logicalvolumes",
"oc exec -n openshift-adp velero-7c87d58c7b-sw6fc -c velero -- ./velero describe backup -n openshift-adp backup-acm-klusterlet --details",
"oc exec -n openshift-adp velero-7c87d58c7b-sw6fc -c velero -- ./velero describe restore -n openshift-adp restore-acm-klusterlet --details",
"oc exec -n openshift-adp velero-7c87d58c7b-sw6fc -c velero -- ./velero backup download -n openshift-adp backup-acm-klusterlet -o ~/backup-acm-klusterlet.tar.gz"
]
| https://docs.redhat.com/en/documentation/openshift_container_platform/4.17/html/edge_computing/image-based-upgrade-for-single-node-openshift-clusters |
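When an ImageBasedGroupUpgrade CR selects many clusters, the YAML status shown above becomes long. The following shell sketch condenses it into one line per cluster; it is only an illustration and assumes the jq tool is available on the hub, that the CR is named <filename> in the default namespace, and that the status fields match the example output shown above.

# List each managed cluster with its completed and failed actions from the IBGU status.
oc get ibgu <filename> -n default -o json \
  | jq -r '.status.clusters[] | "\(.name) completed=\([.completedActions[]?.action] | join(",")) failed=\([.failedActions[]?.action] | join(","))"'

Clusters that show an empty failed list completed every action in the plan.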
1.9.2. Cluster Administration GUI | 1.9.2. Cluster Administration GUI This section provides an overview of the system-config-cluster cluster administration graphical user interface (GUI) available with Red Hat Cluster Suite. The GUI is for use with the cluster infrastructure and the high-availability service management components (refer to Section 1.3, "Cluster Infrastructure" and Section 1.4, "High-availability Service Management" ). The GUI consists of two major functions: the Cluster Configuration Tool and the Cluster Status Tool . The Cluster Configuration Tool provides the capability to create, edit, and propagate the cluster configuration file ( /etc/cluster/cluster.conf ). The Cluster Status Tool provides the capability to manage high-availability services. The following sections summarize those functions. Section 1.9.2.1, " Cluster Configuration Tool " Section 1.9.2.2, " Cluster Status Tool " 1.9.2.1. Cluster Configuration Tool You can access the Cluster Configuration Tool ( Figure 1.28, " Cluster Configuration Tool " ) through the Cluster Configuration tab in the Cluster Administration GUI. Figure 1.28. Cluster Configuration Tool The Cluster Configuration Tool represents cluster configuration components in the configuration file ( /etc/cluster/cluster.conf ) with a hierarchical graphical display in the left panel. A triangle icon to the left of a component name indicates that the component has one or more subordinate components assigned to it. Clicking the triangle icon expands and collapses the portion of the tree below a component. The components displayed in the GUI are summarized as follows: Cluster Nodes - Displays cluster nodes. Nodes are represented by name as subordinate elements under Cluster Nodes . Using configuration buttons at the bottom of the right frame (below Properties ), you can add nodes, delete nodes, edit node properties, and configure fencing methods for each node. Fence Devices - Displays fence devices. Fence devices are represented as subordinate elements under Fence Devices . Using configuration buttons at the bottom of the right frame (below Properties ), you can add fence devices, delete fence devices, and edit fence-device properties. Fence devices must be defined before you can configure fencing (with the Manage Fencing For This Node button) for each node. Managed Resources - Displays failover domains, resources, and services. Failover Domains - For configuring one or more subsets of cluster nodes used to run a high-availability service in the event of a node failure. Failover domains are represented as subordinate elements under Failover Domains . Using configuration buttons at the bottom of the right frame (below Properties ), you can create failover domains (when Failover Domains is selected) or edit failover domain properties (when a failover domain is selected). Resources - For configuring shared resources to be used by high-availability services. Shared resources consist of file systems, IP addresses, NFS mounts and exports, and user-created scripts that are available to any high-availability service in the cluster. Resources are represented as subordinate elements under Resources . Using configuration buttons at the bottom of the right frame (below Properties ), you can create resources (when Resources is selected) or edit resource properties (when a resource is selected). Note The Cluster Configuration Tool provides the capability to configure private resources, also. 
A private resource is a resource that is configured for use with only one service. You can configure a private resource within a Service component in the GUI. Services - For creating and configuring high-availability services. A service is configured by assigning resources (shared or private), assigning a failover domain, and defining a recovery policy for the service. Services are represented as subordinate elements under Services . Using configuration buttons at the bottom of the right frame (below Properties ), you can create services (when Services is selected) or edit service properties (when a service is selected). | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/4/html/cluster_suite_overview/s2-clumgmttools-overview-cso |
9.4. Changing Passwords | 9.4. Changing Passwords Password policies ( Chapter 19, Policy: Defining Password Policies ) and minimal access restrictions can be applied to a password change operation: Regular, non-administrative users can change only their personal passwords, and all passwords are constrained by the IdM password policies. This allows administrators to create intro passwords or to reset passwords easily, while still keeping the final password confidential. Since any password sent by an administrator to the user is temporary, there is little security risk. Changing a password as the IdM admin user overrides any IdM password policies, but the password expires immediately. This requires the user to change the password at the login. Similarly, any user who has password change rights can change a password and no password policies are applied, but the other user must reset the password at the login. Changing a password as the LDAP Directory Manager user, using LDAP tools , overrides any IdM password policies. 9.4.1. From the Web UI Open the Identity tab, and select the Users subtab. Click the name of the user for whom to reset the password. All users can change their own password; only administrators or users with delegated permissions can change other user's passwords. Scroll to the Account Settings area. Click the Reset Password link. In the pop-up box, enter and confirm the new password. 9.4.2. From the Command Line Changing a password - your own or another user's - is done using the user-mod command, as with other user account changes. | [
"[bjensen@ipaserver ~]USD kinit admin [bjensen@ipaserver ~]USD ipa user-mod jsmith --password"
]
| https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/identity_management_guide/changing-pwds |
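The immediate-expiry behavior described above is easy to observe from a shell. The following sketch is illustrative only: it reuses the jsmith example from the command above, and the prompt to change an expired password at the next login is standard Kerberos kinit behavior rather than anything specific to this procedure.

# As admin, reset another user's password; IdM marks it expired immediately.
kinit admin
ipa user-mod jsmith --password        # prompts for the temporary password
# At the next login the user must replace the temporary password.
kinit jsmith                          # reports the password is expired and prompts for a new one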
Chapter 9. Creating VMDK images for RHEL for Edge | Chapter 9. Creating VMDK images for RHEL for Edge You can create a .vmdk image for RHEL for Edge by using the RHEL image builder. You can create an edge-vsphere image type with Ignition support, to inject the user configuration into the image at an early stage of the boot process. Then, you can load the image on vSphere and boot the image in a vSphere VM. The image is compatible with ESXi 7.0 U2, ESXi 8.0 and later. The vSphere VM is compatible with version 19 and 20. 9.1. Creating a blueprint with the Ignition configuration Create a blueprint for the .vmdk image and customize it with the customizations.ignition section. With that, you can create your image and, at boot time, the operating system will inject the user configuration to the image. Prerequisites You have created an Ignition configuration file. For example: Procedure Create a blueprint in the Tom's Obvious, Minimal Language (TOML) format, with the following content: Where: The name is the name and description is the description for your blueprint. The version is the version number according to the Semantic Versioning scheme. The modules and packages describe the package name and matching version glob to be installed into the image. For example, the package name = "open-vm-tools" . Notice that currently there are no differences between packages and modules. The groups are packages groups to be installed into the image. For example groups = "anaconda-tools" group package. If you do not know the modules and groups, leave them empty. The customizations.user creates a username and password to log in to the VM. The customizations.ignition.firstboot contains the URL where the Ignition configuration file is being served. Note By default, the open-vm-tools package is not included in the edge-vsphere image. If you need this package, you must include it in the blueprint customization. Import the blueprint to the image builder server: List the existing blueprints to check whether the created blueprint is successfully pushed and exists: Check whether the components and versions listed in the blueprint and their dependencies are valid: steps Use the blueprint you created to build your .vmdk image. 9.2. Creating a VMDK image for RHEL for Edge To create a RHEL for Edge .vmdk image, use the 'edge-vsphere' image type in the RHEL image builder command-line interface. Prerequisites You created a blueprint for the .vmdk image. You served an OSTree repository of the commit to embed it in the image. For example, http://10.0.2.2:8080/repo . For more details, see Setting up a web server to install RHEL for Edge image . Procedure Start the compose of a .vmdk image: The -- <url> is the URL of your repo, for example: http://10.88.0.1:8080/repo . A confirmation that the composer process has been added to the queue appears. It also shows a Universally Unique Identifier (UUID) number for the image created. Use the UUID number to track your build. Also, keep the UUID number handy for further tasks. Check the image compose status: The output displays the status in the following format: After the compose process finishes, download the resulting image file: steps Upload the .vmdk image to vSphere. 9.3. Uploading VMDK images and creating a RHEL virtual machine in vSphere Upload the .vmdk image to VMware vSphere by using the govc import.vmdk CLI tool and boot the image in a VM. Prerequisites You created an .vmdk image by using RHEL image builder and downloaded it to your host system. 
You installed the govc import.vmdk CLI tool. You configured the govc import.vmdk CLI tool client. You must set the following values in the environment: Procedure Navigate to the directory where you downloaded your .vmdk image. Launch the image on vSphere by executing the following steps: Import the .vmdk image in to vSphere: Create the VM in vSphere without powering it on: Power-on the VM: Retrieve the VM IP address: Use SSH to log in to the VM, using the username and password you specified in your blueprint: | [
"{ \"ignition\":{ \"version\":\"3.3.0\" }, \"passwd\":{ \"users\":[ { \"groups\":[ \"wheel\" ], \"name\":\"core\", \"passwordHash\":\"USD6USDjfuNnO9t1Bv7N\" } ] } }",
"name = \"vmdk-image\" description = \"Blueprint with Ignition for the vmdk image\" version = \"0.0.1\" packages = [\"open-vm-tools\"] modules = [] groups = [] distro = \"\" [[customizations.user]] name = \"admin\" password = \"admin\" groups = [\"wheel\"] [customizations.ignition.firstboot] url = http:// <IP_address> :8080/ config.ig",
"composer-cli blueprints push <blueprint-name> .toml",
"composer-cli blueprints show <blueprint-name>",
"composer-cli blueprints depsolve <blueprint-name>",
"composer-cli compose start start-ostree <blueprint-name> edge-vsphere -- <url>",
"composer-cli compose status",
"<UUID> RUNNING date <blueprint-name> <blueprint-version> edge-vsphere",
"composer-cli compose image <UUID>",
"GOVC_URL GOVC_DATACENTER GOVC_FOLDER GOVC_DATASTORE GOVC_RESOURCE_POOL GOVC_NETWORK",
"govc import.vmdk ./composer-api.vmdk foldername",
"govc vm.create -net=\"VM Network\" -net.adapter=vmxnet3 -disk.controller=pvscsi -on=false -m=4096 -c=2 -g=rhel9_64Guest -firmware=efi vm_name govc vm.disk.attach -disk=\" foldername /composer-api.vmdk\" govc vm.power -on -vm vm_name -link=false vm_name",
"govc vm.power -on vmname",
"HOST=USD(govc vm.ip vmname )",
"ssh admin@HOST"
]
| https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/9/html/composing_installing_and_managing_rhel_for_edge_images/creating-vmdk-images-for-rhel-for-edge_composing-installing-managing-rhel-for-edge-images |
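The status and download commands above can be chained into a small wrapper. The following is a minimal sketch, not taken from the original procedure: it assumes the compose UUID reported by the start command is passed as the first argument, that composer-cli is already configured on the build host, and that the FINISHED and FAILED state names match your composer-cli version.

#!/usr/bin/env bash
# Poll an image build until it completes, then download the resulting .vmdk.
set -euo pipefail

COMPOSE_UUID="$1"    # UUID printed by "composer-cli compose start-ostree ..."

while true; do
    # Status lines look like: <UUID> RUNNING <date> <blueprint-name> <version> edge-vsphere
    state=$(composer-cli compose status | awk -v id="$COMPOSE_UUID" '$1 == id { print $2 }')
    echo "Compose ${COMPOSE_UUID}: ${state:-unknown}"
    case "$state" in
        FINISHED)
            # Downloads the image file into the current directory.
            composer-cli compose image "$COMPOSE_UUID"
            break ;;
        FAILED)
            echo "Compose failed" >&2
            exit 1 ;;
        *)
            sleep 30 ;;
    esac
done

The exact name of the downloaded file depends on the image type; list the current directory after the download completes to find it.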
Chapter 4. Converting virtual machines to run on Red Hat Enterprise Virtualization | Chapter 4. Converting virtual machines to run on Red Hat Enterprise Virtualization Warning The Red Hat Enterprise Linux 6 version of the virt-v2v utility has been deprecated. Users of Red Hat Enterprise Linux 6 are advised to create a Red Hat Enterprise Linux 7 virtual machine, and install virt-v2v in that virtual machine. The Red Hat Enterprise Linux 7 version is fully supported and documented in virt-v2v Knowledgebase articles. virt-v2v can convert virtual machines to run on Red Hat Enterprise Virtualization. Virtual machines can be converted from Xen, KVM and VMware ESX / ESX(i) environments. Before converting virtual machines to run on Red Hat Enterprise Virtualization, you must attach an export storage domain to the Red Hat Enterprise Virtualization data center being used. Section 4.2, "Attaching an export storage domain" explains the process of attaching an export storage domain. For more information on export storage domains, see the Red Hat Enterprise Virtualization Administration Guide. 4.1. Acceptable converted storage output formats It is important to note that when converting a guest virtual machine to run on Red Hat Enterprise Virtualization, not all combinations of storage format and allocation policy are supported. The supported combinations differ according to whether the Red Hat Enterprise Virtualization data center the guest will be imported into uses block (FC or iSCSI) or file (NFS) for its data storage domain. Note that virt-v2v writes to an export storage domain, and this is always required to be NFS. Note The important element for a successful virtual machine import into Red Hat Enterprise Virtualization is the type of the data domain. virt-v2v is unable to detect the data center type, so this check must be applied manually by the user. Table 4.1. Allocation Policy: Preallocated Data Domain Type Storage Format Supported NFS raw Yes qcow2 No FC/iSCSI raw Yes qcow2 No Table 4.2. Allocation Policy: Sparse Data Domain Type Storage Format Supported NFS raw Yes qcow2 Yes FC/iSCSI raw No qcow2 Yes Data format and allocation policy of the virtual machine being converted by virt-v2v will be preserved unless the output data format and allocation policy are specified using the -of and -oa parameters respectively. To import virtual machines using sparse allocation into an FC or iSCSI data center, the storage format must be converted to qcow2. This is achieved by passing the parameters -of qcow2 -oa sparse to virt-v2v. Note that converting between raw and qcow2 formats is a resource intensive operation, and roughly doubles the length of time taken for the conversion process. Important Preallocated qcow2 storage is never supported in Red Hat Enterprise Virtualization, although virt-v2v is able to write it. Import to Red Hat Enterprise Virtualization will fail. | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/v2v_guide/chap-v2v-vms_to_run_on_rhev
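As a concrete, hedged illustration of the -of and -oa parameters described above, the sketch below converts a local libvirt guest and writes it to an NFS export storage domain as sparse qcow2, the combination required for import into an FC or iSCSI data center. The guest name, the export path, and the -o rhev / -os output options are assumptions and should be checked against the virt-v2v version and environment in use.

# Convert the libvirt guest "guest1" for import into Red Hat Enterprise Virtualization,
# forcing sparse qcow2 output so an FC/iSCSI data center can accept it.
# storage.example.com:/exports/export_domain is a placeholder for the NFS export storage domain.
virt-v2v -i libvirt -ic qemu:///system \
    -o rhev -os storage.example.com:/exports/export_domain \
    -of qcow2 -oa sparse \
    guest1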
Chapter 349. Tika Component | Chapter 349. Tika Component Available as of Camel version 2.19 The Tika : components provides the ability to detect and parse documents with Apache Tika. This component uses Apache Tika as underlying library to work with documents. In order to use the Tika component, Maven users will need to add the following dependency to their pom.xml : pom.xml <dependency> <groupId>org.apache.camel</groupId> <artifactId>camel-tika</artifactId> <version>x.x.x</version> <!-- use the same version as your Camel core version --> </dependency> The TIKA component only supports producer endpoints. 349.1. Options The Tika component has no options. The Tika endpoint is configured using URI syntax: with the following path and query parameters: 349.1.1. Path Parameters (1 parameters): Name Description Default Type operation Required Tika Operation. parse or detect TikaOperation 349.1.2. Query Parameters (5 parameters): Name Description Default Type tikaConfig (producer) Tika Config TikaConfig tikaConfigUri (producer) Tika Config Uri: The URI of tika-config.xml String tikaParseOutputEncoding (producer) Tika Parse Output Encoding - Used to specify the character encoding of the parsed output. Defaults to Charset.defaultCharset() . String tikaParseOutputFormat (producer) Tika Output Format. Supported output formats. xml: Returns Parsed Content as XML. html: Returns Parsed Content as HTML. text: Returns Parsed Content as Text. textMain: Uses the boilerpipe library to automatically extract the main content from a web page. xml TikaParseOutputFormat synchronous (advanced) Sets whether synchronous processing should be strictly used, or Camel is allowed to use asynchronous processing (if supported). false boolean 349.2. Spring Boot Auto-Configuration The component supports 2 options, which are listed below. Name Description Default Type camel.component.tika.enabled Enable tika component true Boolean camel.component.tika.resolve-property-placeholders Whether the component should resolve property placeholders on itself when starting. Only properties which are of String type can use property placeholders. true Boolean 349.3. To Detect a file's MIME Type The file should be placed in the Body. from("direct:start") .to("tika:detect"); 349.4. To Parse a File The file should be placed in the Body. from("direct:start") .to("tika:parse"); | [
"<dependency> <groupId>org.apache.camel</groupId> <artifactId>camel-tika</artifactId> <version>x.x.x</version> <!-- use the same version as your Camel core version --> </dependency>",
"tika:operation",
"from(\"direct:start\") .to(\"tika:detect\");",
"from(\"direct:start\") .to(\"tika:parse\");"
]
| https://docs.redhat.com/en/documentation/red_hat_fuse/7.13/html/apache_camel_component_reference/tika-component |
Chapter 2. Introduction to content creator workflows and automation execution environments | Chapter 2. Introduction to content creator workflows and automation execution environments 2.1. About content workflows Before Red Hat Ansible Automation Platform 2.0, an automation content developer may have needed so many Python virtual environments that they required their own automation in order to manage them. To reduce this level of complexity, Ansible Automation Platform 2.0 is moving away from virtual environments and using containers, referred to as automation execution environments, instead, as they are straightforward to build and manage and are more shareable across teams and orgs. As automation controller shifts to using automation execution environments, tools like automation content navigator and Ansible Builder ensure that you can take advantage of those automation execution environments locally within your own development system. Additional resources See the Automation Content Navigator Creator Guide for more on using automation content navigator. For more information on Ansible Builder, see Creating and Consuming Execution Environments . 2.2. Architecture overview The following list shows the arrangements and uses of tools available on Ansible Automation Platform 2.0, along with how they can be utilized: automation content navigator only - can be used today in Ansible Automation Platform 1.2 automation content navigator + downloaded automation execution environments - used directly on laptop/workstation automation content navigator + downloaded automation execution environments + automation controller - for pushing/executing locally remotely automation content navigator + automation controller + Ansible Builder + Layered custom EE - provides even more control over utilized content for how to execute automation jobs | null | https://docs.redhat.com/en/documentation/red_hat_ansible_automation_platform/2.4/html/red_hat_ansible_automation_platform_creator_guide/assembly-introduction |
4.9. Mounting File Systems | 4.9. Mounting File Systems By default, when a file system that supports extended attributes is mounted, the security context for each file is obtained from the security.selinux extended attribute of the file. Files in file systems that do not support extended attributes are assigned a single, default security context from the policy configuration, based on file system type. Use the mount -o context command to override existing extended attributes, or to specify a different, default context for file systems that do not support extended attributes. This is useful if you do not trust a file system to supply the correct attributes, for example, removable media used in multiple systems. The mount -o context command can also be used to support labeling for file systems that do not support extended attributes, such as File Allocation Table (FAT) or NFS volumes. The context specified with the context option is not written to disk: the original contexts are preserved, and are seen when mounting without context if the file system had extended attributes in the first place. For further information about file system labeling, see James Morris's "Filesystem Labeling in SELinux" article: http://www.linuxjournal.com/article/7426 . 4.9.1. Context Mounts To mount a file system with the specified context, overriding existing contexts if they exist, or to specify a different, default context for a file system that does not support extended attributes, as the root user, use the mount -o context= SELinux_user:role:type:level command when mounting the required file system. Context changes are not written to disk. By default, NFS mounts on the client side are labeled with a default context defined by policy for NFS volumes. In common policies, this default context uses the nfs_t type. Without additional mount options, this may prevent sharing NFS volumes using other services, such as the Apache HTTP Server. The following example mounts an NFS volume so that it can be shared using the Apache HTTP Server: Newly-created files and directories on this file system appear to have the SELinux context specified with -o context . However, since these changes are not written to disk, the context specified with this option does not persist between mounts. Therefore, this option must be used with the same context specified during every mount to retain the required context. For information about making context mount persistent, see Section 4.9.5, "Making Context Mounts Persistent" . Type Enforcement is the main permission control used in SELinux targeted policy. For the most part, SELinux users and roles can be ignored, so, when overriding the SELinux context with -o context , use the SELinux system_u user and object_r role, and concentrate on the type. If you are not using the MLS policy or multi-category security, use the s0 level. Note When a file system is mounted with a context option, context changes by users and processes are prohibited. For example, running the chcon command on a file system mounted with a context option results in a Operation not supported error. 4.9.2. Changing the Default Context As mentioned in Section 4.8, "The file_t and default_t Types" , on file systems that support extended attributes, when a file that lacks an SELinux context on disk is accessed, it is treated as if it had a default context as defined by SELinux policy. In common policies, this default context uses the file_t type. 
If it is desirable to use a different default context, mount the file system with the defcontext option. The following example mounts a newly-created file system on /dev/sda2 to the newly-created test/ directory. It assumes that there are no rules in /etc/selinux/targeted/contexts/files/ that define a context for the test/ directory: In this example: the defcontext option defines that system_u:object_r:samba_share_t:s0 is "the default security context for unlabeled files" [5] . when mounted, the root directory ( test/ ) of the file system is treated as if it is labeled with the context specified by defcontext (this label is not stored on disk). This affects the labeling for files created under test/ : new files inherit the samba_share_t type, and these labels are stored on disk. files created under test/ while the file system was mounted with a defcontext option retain their labels. 4.9.3. Mounting an NFS Volume By default, NFS mounts on the client side are labeled with a default context defined by policy for NFS volumes. In common policies, this default context uses the nfs_t type. Depending on policy configuration, services, such as Apache HTTP Server and MariaDB, may not be able to read files labeled with the nfs_t type. This may prevent file systems labeled with this type from being mounted and then read or exported by other services. If you would like to mount an NFS volume and read or export that file system with another service, use the context option when mounting to override the nfs_t type. Use the following context option to mount NFS volumes so that they can be shared using the Apache HTTP Server: Since these changes are not written to disk, the context specified with this option does not persist between mounts. Therefore, this option must be used with the same context specified during every mount to retain the required context. For information about making context mount persistent, see Section 4.9.5, "Making Context Mounts Persistent" . As an alternative to mounting file systems with context options, Booleans can be enabled to allow services access to file systems labeled with the nfs_t type. See Part II, "Managing Confined Services" for instructions on configuring Booleans to allow services access to the nfs_t type. 4.9.4. Multiple NFS Mounts When mounting multiple mounts from the same NFS export, attempting to override the SELinux context of each mount with a different context, results in subsequent mount commands failing. In the following example, the NFS server has a single export, export/ , which has two subdirectories, web/ and database/ . The following commands attempt two mounts from a single NFS export, and try to override the context for each one: The second mount command fails, and the following is logged to /var/log/messages : To mount multiple mounts from a single NFS export, with each mount having a different context, use the -o nosharecache,context options. The following example mounts multiple mounts from a single NFS export, with a different context for each mount (allowing a single service access to each one): In this example, server:/export/web is mounted locally to the /local/web/ directory, with all files being labeled with the httpd_sys_content_t type, allowing Apache HTTP Server access. server:/export/database is mounted locally to /local/database/ , with all files being labeled with the mysqld_db_t type, allowing MariaDB access. These type changes are not written to disk. 
Important The nosharecache option allows you to mount the same subdirectory of an export multiple times with different contexts, for example, mounting /export/web/ multiple times. Do not mount the same subdirectory from an export multiple times with different contexts, as this creates an overlapping mount, where files are accessible under two different contexts. 4.9.5. Making Context Mounts Persistent To make context mounts persistent across remounting and reboots, add entries for the file systems in the /etc/fstab file or an automounter map, and use the required context as a mount option. The following example adds an entry to /etc/fstab for an NFS context mount: [5] Morris, James. "Filesystem Labeling in SELinux". Published 1 October 2004. Accessed 14 October 2008: http://www.linuxjournal.com/article/7426 . | [
"~]# mount server:/export /local/mount/point -o \\ context=\"system_u:object_r:httpd_sys_content_t:s0\"",
"~]# mount /dev/sda2 /test/ -o defcontext=\"system_u:object_r:samba_share_t:s0\"",
"~]# mount server:/export /local/mount/point -o context=\"system_u:object_r:httpd_sys_content_t:s0\"",
"~]# mount server:/export/web /local/web -o context=\"system_u:object_r:httpd_sys_content_t:s0\"",
"~]# mount server:/export/database /local/database -o context=\"system_u:object_r:mysqld_db_t:s0\"",
"kernel: SELinux: mount invalid. Same superblock, different security settings for (dev 0:15, type nfs)",
"~]# mount server:/export/web /local/web -o nosharecache,context=\"system_u:object_r:httpd_sys_content_t:s0\"",
"~]# mount server:/export/database /local/database -o \\ nosharecache,context=\"system_u:object_r:mysqld_db_t:s0\"",
"server:/export /local/mount/ nfs context=\"system_u:object_r:httpd_sys_content_t:s0\" 0 0"
]
| https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/selinux_users_and_administrators_guide/sect-Security-Enhanced_Linux-Working_with_SELinux-Mounting_File_Systems |
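A short, hedged sketch of the behaviour described above: mount an NFS export with an httpd-readable context, confirm that every object reports the override label, and observe that in-place relabeling is refused. The export path, mount point, and file name are placeholders.

# Mount with an override context (kept in memory only, never written to disk).
mount -t nfs -o context="system_u:object_r:httpd_sys_content_t:s0" server:/export /local/web

# All objects on the mount report the override context.
ls -dZ /local/web
ls -Z /local/web | head

# chcon is expected to fail on a context-mounted file system
# with "Operation not supported".
chcon -t samba_share_t /local/web/somefile || echo "chcon refused, as expected"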
Chapter 4. action | Chapter 4. action This chapter describes the commands under the action command. 4.1. action definition create Create new action. Usage: Table 4.1. Positional Arguments Value Summary definition Action definition file Table 4.2. Optional Arguments Value Summary -h, --help Show this help message and exit --marker [MARKER] The last execution uuid of the page, displays list of executions after "marker". --limit [LIMIT] Maximum number of entries to return in a single result. --sort_keys [SORT_KEYS] Comma-separated list of sort keys to sort results by. Default: created_at. Example: mistral execution-list --sort_keys=id,description --sort_dirs [SORT_DIRS] Comma-separated list of sort directions. default: asc. Example: mistral execution-list --sort_keys=id,description --sort_dirs=asc,desc --filter FILTERS Filters. can be repeated. --public With this flag action will be marked as "public". Table 4.3. Output Formatters Value Summary -f {csv,json,table,value,yaml}, --format {csv,json,table,value,yaml} The output format, defaults to table -c COLUMN, --column COLUMN Specify the column(s) to include, can be repeated --sort-column SORT_COLUMN Specify the column(s) to sort the data (columns specified first have a priority, non-existing columns are ignored), can be repeated Table 4.4. CSV Formatter Value Summary --quote {all,minimal,none,nonnumeric} When to include quotes, defaults to nonnumeric Table 4.5. JSON Formatter Value Summary --noindent Whether to disable indenting the json Table 4.6. Table Formatter Value Summary --max-width <integer> Maximum display width, <1 to disable. you can also use the CLIFF_MAX_TERM_WIDTH environment variable, but the parameter takes precedence. --fit-width Fit the table to the display width. implied if --max- width greater than 0. Set the environment variable CLIFF_FIT_WIDTH=1 to always enable --print-empty Print empty table if there is no data to show. 4.2. action definition definition show Show action definition. Usage: Table 4.7. Positional Arguments Value Summary name Action name Table 4.8. Optional Arguments Value Summary -h, --help Show this help message and exit 4.3. action definition delete Delete action. Usage: Table 4.9. Positional Arguments Value Summary action Name or id of action(s). Table 4.10. Optional Arguments Value Summary -h, --help Show this help message and exit 4.4. action definition list List all actions. Usage: Table 4.11. Optional Arguments Value Summary -h, --help Show this help message and exit --marker [MARKER] The last execution uuid of the page, displays list of executions after "marker". --limit [LIMIT] Maximum number of entries to return in a single result. --sort_keys [SORT_KEYS] Comma-separated list of sort keys to sort results by. Default: created_at. Example: mistral execution-list --sort_keys=id,description --sort_dirs [SORT_DIRS] Comma-separated list of sort directions. default: asc. Example: mistral execution-list --sort_keys=id,description --sort_dirs=asc,desc --filter FILTERS Filters. can be repeated. Table 4.12. Output Formatters Value Summary -f {csv,json,table,value,yaml}, --format {csv,json,table,value,yaml} The output format, defaults to table -c COLUMN, --column COLUMN Specify the column(s) to include, can be repeated --sort-column SORT_COLUMN Specify the column(s) to sort the data (columns specified first have a priority, non-existing columns are ignored), can be repeated Table 4.13. 
CSV Formatter Value Summary --quote {all,minimal,none,nonnumeric} When to include quotes, defaults to nonnumeric Table 4.14. JSON Formatter Value Summary --noindent Whether to disable indenting the json Table 4.15. Table Formatter Value Summary --max-width <integer> Maximum display width, <1 to disable. you can also use the CLIFF_MAX_TERM_WIDTH environment variable, but the parameter takes precedence. --fit-width Fit the table to the display width. implied if --max- width greater than 0. Set the environment variable CLIFF_FIT_WIDTH=1 to always enable --print-empty Print empty table if there is no data to show. 4.5. action definition show Show specific action. Usage: Table 4.16. Positional Arguments Value Summary action Action (name or id) Table 4.17. Optional Arguments Value Summary -h, --help Show this help message and exit Table 4.18. Output Formatters Value Summary -f {json,shell,table,value,yaml}, --format {json,shell,table,value,yaml} The output format, defaults to table -c COLUMN, --column COLUMN Specify the column(s) to include, can be repeated Table 4.19. JSON Formatter Value Summary --noindent Whether to disable indenting the json Table 4.20. Shell Formatter Value Summary --prefix PREFIX Add a prefix to all variable names Table 4.21. Table Formatter Value Summary --max-width <integer> Maximum display width, <1 to disable. you can also use the CLIFF_MAX_TERM_WIDTH environment variable, but the parameter takes precedence. --fit-width Fit the table to the display width. implied if --max- width greater than 0. Set the environment variable CLIFF_FIT_WIDTH=1 to always enable --print-empty Print empty table if there is no data to show. 4.6. action definition update Update action. Usage: Table 4.22. Positional Arguments Value Summary definition Action definition file Table 4.23. Optional Arguments Value Summary -h, --help Show this help message and exit --marker [MARKER] The last execution uuid of the page, displays list of executions after "marker". --limit [LIMIT] Maximum number of entries to return in a single result. --sort_keys [SORT_KEYS] Comma-separated list of sort keys to sort results by. Default: created_at. Example: mistral execution-list --sort_keys=id,description --sort_dirs [SORT_DIRS] Comma-separated list of sort directions. default: asc. Example: mistral execution-list --sort_keys=id,description --sort_dirs=asc,desc --filter FILTERS Filters. can be repeated. --id ID Action id. --public With this flag action will be marked as "public". Table 4.24. Output Formatters Value Summary -f {csv,json,table,value,yaml}, --format {csv,json,table,value,yaml} The output format, defaults to table -c COLUMN, --column COLUMN Specify the column(s) to include, can be repeated --sort-column SORT_COLUMN Specify the column(s) to sort the data (columns specified first have a priority, non-existing columns are ignored), can be repeated Table 4.25. CSV Formatter Value Summary --quote {all,minimal,none,nonnumeric} When to include quotes, defaults to nonnumeric Table 4.26. JSON Formatter Value Summary --noindent Whether to disable indenting the json Table 4.27. Table Formatter Value Summary --max-width <integer> Maximum display width, <1 to disable. you can also use the CLIFF_MAX_TERM_WIDTH environment variable, but the parameter takes precedence. --fit-width Fit the table to the display width. implied if --max- width greater than 0. Set the environment variable CLIFF_FIT_WIDTH=1 to always enable --print-empty Print empty table if there is no data to show. 4.7. 
action execution delete Delete action execution. Usage: Table 4.28. Positional Arguments Value Summary action_execution Id of action execution identifier(s). Table 4.29. Optional Arguments Value Summary -h, --help Show this help message and exit 4.8. action execution input show Show Action execution input data. Usage: Table 4.30. Positional Arguments Value Summary id Action execution id. Table 4.31. Optional Arguments Value Summary -h, --help Show this help message and exit 4.9. action execution list List all Action executions. Usage: Table 4.32. Positional Arguments Value Summary task_execution_id Task execution id. Table 4.33. Optional Arguments Value Summary -h, --help Show this help message and exit --marker [MARKER] The last execution uuid of the page, displays list of executions after "marker". --limit [LIMIT] Maximum number of entries to return in a single result. --sort_keys [SORT_KEYS] Comma-separated list of sort keys to sort results by. Default: created_at. Example: mistral execution-list --sort_keys=id,description --sort_dirs [SORT_DIRS] Comma-separated list of sort directions. default: asc. Example: mistral execution-list --sort_keys=id,description --sort_dirs=asc,desc --filter FILTERS Filters. can be repeated. --oldest Display the executions starting from the oldest entries instead of the newest Table 4.34. Output Formatters Value Summary -f {csv,json,table,value,yaml}, --format {csv,json,table,value,yaml} The output format, defaults to table -c COLUMN, --column COLUMN Specify the column(s) to include, can be repeated --sort-column SORT_COLUMN Specify the column(s) to sort the data (columns specified first have a priority, non-existing columns are ignored), can be repeated Table 4.35. CSV Formatter Value Summary --quote {all,minimal,none,nonnumeric} When to include quotes, defaults to nonnumeric Table 4.36. JSON Formatter Value Summary --noindent Whether to disable indenting the json Table 4.37. Table Formatter Value Summary --max-width <integer> Maximum display width, <1 to disable. you can also use the CLIFF_MAX_TERM_WIDTH environment variable, but the parameter takes precedence. --fit-width Fit the table to the display width. implied if --max- width greater than 0. Set the environment variable CLIFF_FIT_WIDTH=1 to always enable --print-empty Print empty table if there is no data to show. 4.10. action execution output show Show Action execution output data. Usage: Table 4.38. Positional Arguments Value Summary id Action execution id. Table 4.39. Optional Arguments Value Summary -h, --help Show this help message and exit 4.11. action execution run Create new Action execution or just run specific action. Usage: Table 4.40. Positional Arguments Value Summary name Action name to execute. input Action input. Table 4.41. Optional Arguments Value Summary -h, --help Show this help message and exit -s, --save-result Save the result into db. --run-sync Run the action synchronously. -t TARGET, --target TARGET Action will be executed on <target> executor. Table 4.42. Output Formatters Value Summary -f {json,shell,table,value,yaml}, --format {json,shell,table,value,yaml} The output format, defaults to table -c COLUMN, --column COLUMN Specify the column(s) to include, can be repeated Table 4.43. JSON Formatter Value Summary --noindent Whether to disable indenting the json Table 4.44. Shell Formatter Value Summary --prefix PREFIX Add a prefix to all variable names Table 4.45. Table Formatter Value Summary --max-width <integer> Maximum display width, <1 to disable. 
you can also use the CLIFF_MAX_TERM_WIDTH environment variable, but the parameter takes precedence. --fit-width Fit the table to the display width. implied if --max- width greater than 0. Set the environment variable CLIFF_FIT_WIDTH=1 to always enable --print-empty Print empty table if there is no data to show. 4.12. action execution show Show specific Action execution. Usage: Table 4.46. Positional Arguments Value Summary action_execution Action execution id. Table 4.47. Optional Arguments Value Summary -h, --help Show this help message and exit Table 4.48. Output Formatters Value Summary -f {json,shell,table,value,yaml}, --format {json,shell,table,value,yaml} The output format, defaults to table -c COLUMN, --column COLUMN Specify the column(s) to include, can be repeated Table 4.49. JSON Formatter Value Summary --noindent Whether to disable indenting the json Table 4.50. Shell Formatter Value Summary --prefix PREFIX Add a prefix to all variable names Table 4.51. Table Formatter Value Summary --max-width <integer> Maximum display width, <1 to disable. you can also use the CLIFF_MAX_TERM_WIDTH environment variable, but the parameter takes precedence. --fit-width Fit the table to the display width. implied if --max- width greater than 0. Set the environment variable CLIFF_FIT_WIDTH=1 to always enable --print-empty Print empty table if there is no data to show. 4.13. action execution update Update specific Action execution. Usage: Table 4.52. Positional Arguments Value Summary id Action execution id. Table 4.53. Optional Arguments Value Summary -h, --help Show this help message and exit --state {PAUSED,RUNNING,SUCCESS,ERROR,CANCELLED} Action execution state --output OUTPUT Action execution output Table 4.54. Output Formatters Value Summary -f {json,shell,table,value,yaml}, --format {json,shell,table,value,yaml} The output format, defaults to table -c COLUMN, --column COLUMN Specify the column(s) to include, can be repeated Table 4.55. JSON Formatter Value Summary --noindent Whether to disable indenting the json Table 4.56. Shell Formatter Value Summary --prefix PREFIX Add a prefix to all variable names Table 4.57. Table Formatter Value Summary --max-width <integer> Maximum display width, <1 to disable. you can also use the CLIFF_MAX_TERM_WIDTH environment variable, but the parameter takes precedence. --fit-width Fit the table to the display width. implied if --max- width greater than 0. Set the environment variable CLIFF_FIT_WIDTH=1 to always enable --print-empty Print empty table if there is no data to show. | [
"openstack action definition create [-h] [-f {csv,json,table,value,yaml}] [-c COLUMN] [--quote {all,minimal,none,nonnumeric}] [--noindent] [--max-width <integer>] [--fit-width] [--print-empty] [--sort-column SORT_COLUMN] [--marker [MARKER]] [--limit [LIMIT]] [--sort_keys [SORT_KEYS]] [--sort_dirs [SORT_DIRS]] [--filter FILTERS] [--public] definition",
"openstack action definition definition show [-h] name",
"openstack action definition delete [-h] action [action ...]",
"openstack action definition list [-h] [-f {csv,json,table,value,yaml}] [-c COLUMN] [--quote {all,minimal,none,nonnumeric}] [--noindent] [--max-width <integer>] [--fit-width] [--print-empty] [--sort-column SORT_COLUMN] [--marker [MARKER]] [--limit [LIMIT]] [--sort_keys [SORT_KEYS]] [--sort_dirs [SORT_DIRS]] [--filter FILTERS]",
"openstack action definition show [-h] [-f {json,shell,table,value,yaml}] [-c COLUMN] [--noindent] [--prefix PREFIX] [--max-width <integer>] [--fit-width] [--print-empty] action",
"openstack action definition update [-h] [-f {csv,json,table,value,yaml}] [-c COLUMN] [--quote {all,minimal,none,nonnumeric}] [--noindent] [--max-width <integer>] [--fit-width] [--print-empty] [--sort-column SORT_COLUMN] [--marker [MARKER]] [--limit [LIMIT]] [--sort_keys [SORT_KEYS]] [--sort_dirs [SORT_DIRS]] [--filter FILTERS] [--id ID] [--public] definition",
"openstack action execution delete [-h] action_execution [action_execution ...]",
"openstack action execution input show [-h] id",
"openstack action execution list [-h] [-f {csv,json,table,value,yaml}] [-c COLUMN] [--quote {all,minimal,none,nonnumeric}] [--noindent] [--max-width <integer>] [--fit-width] [--print-empty] [--sort-column SORT_COLUMN] [--marker [MARKER]] [--limit [LIMIT]] [--sort_keys [SORT_KEYS]] [--sort_dirs [SORT_DIRS]] [--filter FILTERS] [--oldest] [task_execution_id]",
"openstack action execution output show [-h] id",
"openstack action execution run [-h] [-f {json,shell,table,value,yaml}] [-c COLUMN] [--noindent] [--prefix PREFIX] [--max-width <integer>] [--fit-width] [--print-empty] [-s] [--run-sync] [-t TARGET] name [input]",
"openstack action execution show [-h] [-f {json,shell,table,value,yaml}] [-c COLUMN] [--noindent] [--prefix PREFIX] [--max-width <integer>] [--fit-width] [--print-empty] action_execution",
"openstack action execution update [-h] [-f {json,shell,table,value,yaml}] [-c COLUMN] [--noindent] [--prefix PREFIX] [--max-width <integer>] [--fit-width] [--print-empty] [--state {PAUSED,RUNNING,SUCCESS,ERROR,CANCELLED}] [--output OUTPUT] id"
]
| https://docs.redhat.com/en/documentation/red_hat_openstack_platform/16.0/html/command_line_interface_reference/action |
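A brief, hedged sketch tying the commands above together: register an action definition and run it once synchronously. The definition file my_action.yaml, the action name my_action, the JSON input, and the ID output column are assumptions and should be adjusted to the actual workflow service configuration.

# Register the action definition stored in my_action.yaml.
openstack action definition create my_action.yaml

# Confirm it was stored.
openstack action definition show my_action

# Run it synchronously, saving the result, and capture the execution ID.
exec_id=$(openstack action execution run --run-sync --save-result \
    my_action '{"param1": "value1"}' -f value -c ID)

# Inspect the recorded output of that execution.
openstack action execution output show "$exec_id"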
4.4. Packaging the Image for Google Compute Engine | 4.4. Packaging the Image for Google Compute Engine Create a gzip sparse tar archive to package the image for Google Compute Engine, using the following command: | [
"tar -czSf disk.raw.tar.gz disk.raw"
]
| https://docs.redhat.com/en/documentation/red_hat_gluster_storage/3.5/html/deployment_guide_for_public_cloud/sect-documentation-deployment_guide_for_public_cloud-google_cloud_platform-package_image |
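A minimal, hedged wrapper around the command above: it assumes the raw image produced earlier is named my-image.raw and that Google Compute Engine expects the file inside the archive to be called disk.raw.

# Copy the raw image to the required name, preserving sparseness, then package it.
cp --sparse=always my-image.raw disk.raw
tar -czSf disk.raw.tar.gz disk.raw

# Size of the archive that will be uploaded.
du -h disk.raw.tar.gz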
17.14. Applying Network Filtering | 17.14. Applying Network Filtering This section provides an introduction to libvirt's network filters, their goals, concepts and XML format. 17.14.1. Introduction The goal of the network filtering, is to enable administrators of a virtualized system to configure and enforce network traffic filtering rules on virtual machines and manage the parameters of network traffic that virtual machines are allowed to send or receive. The network traffic filtering rules are applied on the host physical machine when a virtual machine is started. Since the filtering rules cannot be circumvented from within the virtual machine, it makes them mandatory from the point of view of a virtual machine user. From the point of view of the guest virtual machine, the network filtering system allows each virtual machine's network traffic filtering rules to be configured individually on a per interface basis. These rules are applied on the host physical machine when the virtual machine is started and can be modified while the virtual machine is running. The latter can be achieved by modifying the XML description of a network filter. Multiple virtual machines can make use of the same generic network filter. When such a filter is modified, the network traffic filtering rules of all running virtual machines that reference this filter are updated. The machines that are not running will update on start. As previously mentioned, applying network traffic filtering rules can be done on individual network interfaces that are configured for certain types of network configurations. Supported network types include: network ethernet -- must be used in bridging mode bridge Example 17.1. An example of network filtering The interface XML is used to reference a top-level filter. In the following example, the interface description references the filter clean-traffic. Network filters are written in XML and may either contain: references to other filters, rules for traffic filtering, or hold a combination of both. The above referenced filter clean-traffic is a filter that only contains references to other filters and no actual filtering rules. Since references to other filters can be used, a tree of filters can be built. The clean-traffic filter can be viewed using the command: # virsh nwfilter-dumpxml clean-traffic . As previously mentioned, a single network filter can be referenced by multiple virtual machines. Since interfaces will typically have individual parameters associated with their respective traffic filtering rules, the rules described in a filter's XML can be generalized using variables. In this case, the variable name is used in the filter XML and the name and value are provided at the place where the filter is referenced. Example 17.2. Description extended In the following example, the interface description has been extended with the parameter IP and a dotted IP address as a value. In this particular example, the clean-traffic network traffic filter will be represented with the IP address parameter 10.0.0.1 and as per the rule dictates that all traffic from this interface will always be using 10.0.0.1 as the source IP address, which is one of the purpose of this particular filter. 17.14.2. Filtering Chains Filtering rules are organized in filter chains. These chains can be thought of as having a tree structure with packet filtering rules as entries in individual chains (branches). 
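Before going further into chains, the predefined filters referenced above can be inspected directly; a minimal sketch, assuming the libvirt daemon is running on the host:

# List the network filters known to libvirt on this host.
virsh nwfilter-list

# clean-traffic only contains <filterref> elements pointing at other filters;
# dump its XML to see the filter tree it pulls in.
virsh nwfilter-dumpxml clean-traffic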
Packets start their filter evaluation in the root chain and can then continue their evaluation in other chains, return from those chains back into the root chain or be dropped or accepted by a filtering rule in one of the traversed chains. Libvirt's network filtering system automatically creates individual root chains for every virtual machine's network interface on which the user chooses to activate traffic filtering. The user may write filtering rules that are either directly instantiated in the root chain or may create protocol-specific filtering chains for efficient evaluation of protocol-specific rules. The following chains exist: root mac stp (spanning tree protocol) vlan arp and rarp ipv4 ipv6 Multiple chains evaluating the mac, stp, vlan, arp, rarp, ipv4, or ipv6 protocol can be created using the protocol name only as a prefix in the chain's name. Example 17.3. ARP traffic filtering This example allows chains with names arp-xyz or arp-test to be specified and have their ARP protocol packets evaluated in those chains. The following filter XML shows an example of filtering ARP traffic in the arp chain. The consequence of putting ARP-specific rules in the arp chain, rather than for example in the root chain, is that packets protocols other than ARP do not need to be evaluated by ARP protocol-specific rules. This improves the efficiency of the traffic filtering. However, one must then pay attention to only putting filtering rules for the given protocol into the chain since other rules will not be evaluated. For example, an IPv4 rule will not be evaluated in the ARP chain since IPv4 protocol packets will not traverse the ARP chain. 17.14.3. Filtering Chain Priorities As previously mentioned, when creating a filtering rule, all chains are connected to the root chain. The order in which those chains are accessed is influenced by the priority of the chain. The following table shows the chains that can be assigned a priority and their default priorities. Table 17.1. Filtering chain default priorities values Chain (prefix) Default priority stp -810 mac -800 vlan -750 ipv4 -700 ipv6 -600 arp -500 rarp -400 Note A chain with a lower priority value is accessed before one with a higher value. The chains listed in Table 17.1, "Filtering chain default priorities values" can be also be assigned custom priorities by writing a value in the range [-1000 to 1000] into the priority (XML) attribute in the filter node. Section 17.14.2, "Filtering Chains" filter shows the default priority of -500 for arp chains, for example. 17.14.4. Usage of Variables in Filters There are two variables that have been reserved for usage by the network traffic filtering subsystem: MAC and IP. MAC is designated for the MAC address of the network interface. A filtering rule that references this variable will automatically be replaced with the MAC address of the interface. This works without the user having to explicitly provide the MAC parameter. Even though it is possible to specify the MAC parameter similar to the IP parameter above, it is discouraged since libvirt knows what MAC address an interface will be using. The parameter IP represents the IP address that the operating system inside the virtual machine is expected to use on the given interface. The IP parameter is special in so far as the libvirt daemon will try to determine the IP address (and thus the IP parameter's value) that is being used on an interface if the parameter is not explicitly provided but referenced. 
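The following hedged sketch shows one way to reference a filter and pin the IP variable, mirroring the interface description discussed in Example 17.2; the bridge name br0 and the domain name guest1 are placeholders, and the MAC variable is left for libvirt to fill in automatically.

# Interface definition that references clean-traffic and pins the IP variable.
cat > iface-filtered.xml <<'EOF'
<interface type='bridge'>
  <source bridge='br0'/>
  <model type='virtio'/>
  <filterref filter='clean-traffic'>
    <parameter name='IP' value='10.0.0.1'/>
  </filterref>
</interface>
EOF

# Attach the filtered interface to a running guest
# (alternatively, add the same XML with "virsh edit guest1").
virsh attach-device guest1 iface-filtered.xml --live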
For current limitations on IP address detection, consult Section 17.14.12, "Limitations" on how to use this feature and what to expect when using it. The XML file shown in Section 17.14.2, "Filtering Chains" contains the filter no-arp-spoofing, which is an example of using a network filter XML to reference the MAC and IP variables. Note that referenced variables are always prefixed with the character $. The format of the value of a variable must be of the type expected by the filter attribute identified in the XML. In the above example, the IP parameter must hold a legal IP address in standard format. Failure to provide the correct structure will result in the filter variable not being replaced with a value and will prevent a virtual machine from starting or will prevent an interface from attaching when hot plugging is being used. Some of the types that are expected for each XML attribute are shown in Example 17.4, "Sample variable types". Example 17.4. Sample variable types As variables can contain lists of elements (the variable IP can contain multiple IP addresses that are valid on a particular interface, for example), the notation for providing multiple elements for the IP variable is: This XML file creates filters to enable multiple IP addresses per interface. Each of the IP addresses will result in a separate filtering rule. Therefore, using the XML above and the following rule, three individual filtering rules (one for each IP address) will be created: As it is possible to access individual elements of a variable holding a list of elements, a filtering rule like the following accesses the 2nd element of the variable DSTPORTS. Example 17.5. Using a variety of variables It is possible to create filtering rules that represent all of the permissible rules from different lists using the notation $VARIABLE[@<iterator id="x">]. The following rule allows a virtual machine to receive traffic on a set of ports, which are specified in DSTPORTS, from the set of source IP addresses specified in SRCIPADDRESSES. The rule generates all combinations of elements of the variable DSTPORTS with those of SRCIPADDRESSES by using two independent iterators to access their elements. Assign concrete values to SRCIPADDRESSES and DSTPORTS as shown: Assigning values to the variables using $SRCIPADDRESSES[@1] and $DSTPORTS[@2] would then result in all variants of addresses and ports being created as shown: 10.0.0.1, 80 10.0.0.1, 8080 11.1.2.3, 80 11.1.2.3, 8080 Accessing the same variables using a single iterator, for example by using the notation $SRCIPADDRESSES[@1] and $DSTPORTS[@1], would result in parallel access to both lists and result in the following combination: 10.0.0.1, 80 11.1.2.3, 8080 Note $VARIABLE is short-hand for $VARIABLE[@0]. The former notation always assumes the role of the iterator with iterator id="0", as shown in the opening paragraph at the top of this section. 17.14.5. Automatic IP Address Detection and DHCP Snooping This section provides information about automatic IP address detection and DHCP snooping. 17.14.5.1. Introduction The detection of IP addresses used on a virtual machine's interface is automatically activated if the variable IP is referenced but no value has been assigned to it. The variable CTRL_IP_LEARNING can be used to specify the IP address learning method to use. Valid values include: any, dhcp, or none.
The value any instructs libvirt to use any packet to determine the address in use by a virtual machine, which is the default setting if the variable CTRL_IP_LEARNING is not set. This method will only detect a single IP address per interface. Once a guest virtual machine's IP address has been detected, its IP network traffic will be locked to that address, if for example, IP address spoofing is prevented by one of its filters. In that case, the user of the VM will not be able to change the IP address on the interface inside the guest virtual machine, which would be considered IP address spoofing. When a guest virtual machine is migrated to another host physical machine or resumed after a suspend operation, the first packet sent by the guest virtual machine will again determine the IP address that the guest virtual machine can use on a particular interface. The value of dhcp instructs libvirt to only honor DHCP server-assigned addresses with valid leases. This method supports the detection and usage of multiple IP address per interface. When a guest virtual machine resumes after a suspend operation, any valid IP address leases are applied to its filters. Otherwise the guest virtual machine is expected to use DHCP to obtain a new IP addresses. When a guest virtual machine migrates to another physical host physical machine, the guest virtual machine is required to re-run the DHCP protocol. If CTRL_IP_LEARNING is set to none , libvirt does not do IP address learning and referencing IP without assigning it an explicit value is an error. 17.14.5.2. DHCP Snooping CTRL_IP_LEARNING= dhcp (DHCP snooping) provides additional anti-spoofing security, especially when combined with a filter allowing only trusted DHCP servers to assign IP addresses. To enable this, set the variable DHCPSERVER to the IP address of a valid DHCP server and provide filters that use this variable to filter incoming DHCP responses. When DHCP snooping is enabled and the DHCP lease expires, the guest virtual machine will no longer be able to use the IP address until it acquires a new, valid lease from a DHCP server. If the guest virtual machine is migrated, it must get a new valid DHCP lease to use an IP address (for example by bringing the VM interface down and up again). Note Automatic DHCP detection listens to the DHCP traffic the guest virtual machine exchanges with the DHCP server of the infrastructure. To avoid denial-of-service attacks on libvirt, the evaluation of those packets is rate-limited, meaning that a guest virtual machine sending an excessive number of DHCP packets per second on an interface will not have all of those packets evaluated and thus filters may not get adapted. Normal DHCP client behavior is assumed to send a low number of DHCP packets per second. Further, it is important to setup appropriate filters on all guest virtual machines in the infrastructure to avoid them being able to send DHCP packets. Therefore, guest virtual machines must either be prevented from sending UDP and TCP traffic from port 67 to port 68 or the DHCPSERVER variable should be used on all guest virtual machines to restrict DHCP server messages to only be allowed to originate from trusted DHCP servers. At the same time anti-spoofing prevention must be enabled on all guest virtual machines in the subnet. Example 17.6. Activating IPs for DHCP snooping The following XML provides an example for the activation of IP address learning using the DHCP snooping method: 17.14.6. 
Reserved Variables Table 17.2, "Reserved variables" shows the variables that are considered reserved and are used by libvirt: Table 17.2. Reserved variables Variable Name Definition MAC The MAC address of the interface IP The list of IP addresses in use by an interface IPV6 Not currently implemented: the list of IPV6 addresses in use by an interface DHCPSERVER The list of IP addresses of trusted DHCP servers DHCPSERVERV6 Not currently implemented: The list of IPv6 addresses of trusted DHCP servers CTRL_IP_LEARNING The choice of the IP address detection mode 17.14.7. Element and Attribute Overview The root element required for all network filters is named <filter> with two possible attributes. The name attribute provides a unique name of the given filter. The chain attribute is optional but allows certain filters to be better organized for more efficient processing by the firewall subsystem of the underlying host physical machine. Currently, the system only supports the following chains: root , ipv4 , ipv6 , arp and rarp . 17.14.8. References to Other Filters Any filter may hold references to other filters. Individual filters may be referenced multiple times in a filter tree but references between filters must not introduce loops. Example 17.7. An Example of a clean traffic filter The following shows the XML of the clean-traffic network filter referencing several other filters. To reference another filter, the XML node <filterref> needs to be provided inside a filter node. This node must have the attribute filter whose value contains the name of the filter to be referenced. New network filters can be defined at any time and may contain references to network filters that are not known to libvirt, yet. However, once a virtual machine is started or a network interface referencing a filter is to be hot-plugged, all network filters in the filter tree must be available. Otherwise the virtual machine will not start or the network interface cannot be attached. 17.14.9. Filter Rules The following XML shows a simple example of a network traffic filter implementing a rule to drop traffic if the IP address (provided through the value of the variable IP) in an outgoing IP packet is not the expected one, thus preventing IP address spoofing by the VM. Example 17.8. Example of network traffic filtering The traffic filtering rule starts with the rule node. This node may contain up to three of the following attributes: action is mandatory can have the following values: drop (matching the rule silently discards the packet with no further analysis) reject (matching the rule generates an ICMP reject message with no further analysis) accept (matching the rule accepts the packet with no further analysis) return (matching the rule passes this filter, but returns control to the calling filter for further analysis) continue (matching the rule goes on to the rule for further analysis) direction is mandatory can have the following values: in for incoming traffic out for outgoing traffic inout for incoming and outgoing traffic priority is optional. The priority of the rule controls the order in which the rule will be instantiated relative to other rules. Rules with lower values will be instantiated before rules with higher values. Valid values are in the range of -1000 to 1000. If this attribute is not provided, priority 500 will be assigned by default. Note that filtering rules in the root chain are sorted with filters connected to the root chain following their priorities. 
This allows to interleave filtering rules with access to filter chains. See Section 17.14.3, "Filtering Chain Priorities" for more information. statematch is optional. Possible values are '0' or 'false' to turn the underlying connection state matching off. The default setting is 'true' or 1 For more information, see Section 17.14.11, "Advanced Filter Configuration Topics" . The above example Example 17.7, "An Example of a clean traffic filter" indicates that the traffic of type ip will be associated with the chain ipv4 and the rule will have priority= 500 . If for example another filter is referenced whose traffic of type ip is also associated with the chain ipv4 then that filter's rules will be ordered relative to the priority= 500 of the shown rule. A rule may contain a single rule for filtering of traffic. The above example shows that traffic of type ip is to be filtered. 17.14.10. Supported Protocols The following sections list and give some details about the protocols that are supported by the network filtering subsystem. This type of traffic rule is provided in the rule node as a nested node. Depending on the traffic type a rule is filtering, the attributes are different. The above example showed the single attribute srcipaddr that is valid inside the ip traffic filtering node. The following sections show what attributes are valid and what type of data they are expecting. The following datatypes are available: UINT8 : 8 bit integer; range 0-255 UINT16: 16 bit integer; range 0-65535 MAC_ADDR: MAC address in dotted decimal format, for example 00:11:22:33:44:55 MAC_MASK: MAC address mask in MAC address format, for instance, FF:FF:FF:FC:00:00 IP_ADDR: IP address in dotted decimal format, for example 10.1.2.3 IP_MASK: IP address mask in either dotted decimal format (255.255.248.0) or CIDR mask (0-32) IPV6_ADDR: IPv6 address in numbers format, for example FFFF::1 IPV6_MASK: IPv6 mask in numbers format (FFFF:FFFF:FC00::) or CIDR mask (0-128) STRING: A string BOOLEAN: 'true', 'yes', '1' or 'false', 'no', '0' IPSETFLAGS: The source and destination flags of the ipset described by up to 6 'src' or 'dst' elements selecting features from either the source or destination part of the packet header; example: src,src,dst. The number of 'selectors' to provide here depends on the type of ipset that is referenced Every attribute except for those of type IP_MASK or IPV6_MASK can be negated using the match attribute with value no . Multiple negated attributes may be grouped together. The following XML fragment shows such an example using abstract attributes. Rules behave evaluate the rule as well as look at it logically within the boundaries of the given protocol attributes. Thus, if a single attribute's value does not match the one given in the rule, the whole rule will be skipped during the evaluation process. Therefore, in the above example incoming traffic will only be dropped if: the protocol property attribute1 does not match both value1 and the protocol property attribute2 does not match value2 and the protocol property attribute3 matches value3 . 17.14.10.1. MAC (Ethernet) Protocol ID: mac Rules of this type should go into the root chain. Table 17.3. MAC protocol types Attribute Name Datatype Definition srcmacaddr MAC_ADDR MAC address of sender srcmacmask MAC_MASK Mask applied to MAC address of sender dstmacaddr MAC_ADDR MAC address of destination dstmacmask MAC_MASK Mask applied to MAC address of destination protocolid UINT16 (0x600-0xffff), STRING Layer 3 protocol ID. 
Valid strings include [arp, rarp, ipv4, ipv6] comment STRING text string up to 256 characters The filter can be written as such: 17.14.10.2. VLAN (802.1Q) Protocol ID: vlan Rules of this type should go either into the root or vlan chain. Table 17.4. VLAN protocol types Attribute Name Datatype Definition srcmacaddr MAC_ADDR MAC address of sender srcmacmask MAC_MASK Mask applied to MAC address of sender dstmacaddr MAC_ADDR MAC address of destination dstmacmask MAC_MASK Mask applied to MAC address of destination vlan-id UINT16 (0x0-0xfff, 0 - 4095) VLAN ID encap-protocol UINT16 (0x03c-0xfff), String Encapsulated layer 3 protocol ID, valid strings are arp, ipv4, ipv6 comment STRING text string up to 256 characters 17.14.10.3. STP (Spanning Tree Protocol) Protocol ID: stp Rules of this type should go either into the root or stp chain. Table 17.5. STP protocol types Attribute Name Datatype Definition srcmacaddr MAC_ADDR MAC address of sender srcmacmask MAC_MASK Mask applied to MAC address of sender type UINT8 Bridge Protocol Data Unit (BPDU) type flags UINT8 BPDU flagdstmacmask root-priority UINT16 Root priority range start root-priority-hi UINT16 (0x0-0xfff, 0 - 4095) Root priority range end root-address MAC _ADDRESS root MAC Address root-address-mask MAC _MASK root MAC Address mask roor-cost UINT32 Root path cost (range start) root-cost-hi UINT32 Root path cost range end sender-priority-hi UINT16 Sender priority range end sender-address MAC_ADDRESS BPDU sender MAC address sender-address-mask MAC_MASK BPDU sender MAC address mask port UINT16 Port identifier (range start) port_hi UINT16 Port identifier range end msg-age UINT16 Message age timer (range start) msg-age-hi UINT16 Message age timer range end max-age-hi UINT16 Maximum age time range end hello-time UINT16 Hello time timer (range start) hello-time-hi UINT16 Hello time timer range end forward-delay UINT16 Forward delay (range start) forward-delay-hi UINT16 Forward delay range end comment STRING text string up to 256 characters 17.14.10.4. ARP/RARP Protocol ID: arp or rarp Rules of this type should either go into the root or arp/rarp chain. Table 17.6. ARP and RARP protocol types Attribute Name Datatype Definition srcmacaddr MAC_ADDR MAC address of sender srcmacmask MAC_MASK Mask applied to MAC address of sender dstmacaddr MAC_ADDR MAC address of destination dstmacmask MAC_MASK Mask applied to MAC address of destination hwtype UINT16 Hardware type protocoltype UINT16 Protocol type opcode UINT16, STRING Opcode valid strings are: Request, Reply, Request_Reverse, Reply_Reverse, DRARP_Request, DRARP_Reply, DRARP_Error, InARP_Request, ARP_NAK arpsrcmacaddr MAC_ADDR Source MAC address in ARP/RARP packet arpdstmacaddr MAC _ADDR Destination MAC address in ARP/RARP packet arpsrcipaddr IP_ADDR Source IP address in ARP/RARP packet arpdstipaddr IP_ADDR Destination IP address in ARP/RARP packet gratuitous BOOLEAN Boolean indicating whether to check for a gratuitous ARP packet comment STRING text string up to 256 characters 17.14.10.5. IPv4 Protocol ID: ip Rules of this type should either go into the root or ipv4 chain. Table 17.7. 
IPv4 protocol types Attribute Name Datatype Definition srcmacaddr MAC_ADDR MAC address of sender srcmacmask MAC_MASK Mask applied to MAC address of sender dstmacaddr MAC_ADDR MAC address of destination dstmacmask MAC_MASK Mask applied to MAC address of destination srcipaddr IP_ADDR Source IP address srcipmask IP_MASK Mask applied to source IP address dstipaddr IP_ADDR Destination IP address dstipmask IP_MASK Mask applied to destination IP address protocol UINT8, STRING Layer 4 protocol identifier. Valid strings for protocol are: tcp, udp, udplite, esp, ah, icmp, igmp, sctp srcportstart UINT16 Start of range of valid source ports; requires protocol srcportend UINT16 End of range of valid source ports; requires protocol dstportstart UNIT16 Start of range of valid destination ports; requires protocol dstportend UNIT16 End of range of valid destination ports; requires protocol comment STRING text string up to 256 characters 17.14.10.6. IPv6 Protocol ID: ipv6 Rules of this type should either go into the root or ipv6 chain. Table 17.8. IPv6 protocol types Attribute Name Datatype Definition srcmacaddr MAC_ADDR MAC address of sender srcmacmask MAC_MASK Mask applied to MAC address of sender dstmacaddr MAC_ADDR MAC address of destination dstmacmask MAC_MASK Mask applied to MAC address of destination srcipaddr IP_ADDR Source IP address srcipmask IP_MASK Mask applied to source IP address dstipaddr IP_ADDR Destination IP address dstipmask IP_MASK Mask applied to destination IP address protocol UINT8, STRING Layer 4 protocol identifier. Valid strings for protocol are: tcp, udp, udplite, esp, ah, icmpv6, sctp scrportstart UNIT16 Start of range of valid source ports; requires protocol srcportend UINT16 End of range of valid source ports; requires protocol dstportstart UNIT16 Start of range of valid destination ports; requires protocol dstportend UNIT16 End of range of valid destination ports; requires protocol comment STRING text string up to 256 characters 17.14.10.7. TCP/UDP/SCTP Protocol ID: tcp, udp, sctp The chain parameter is ignored for this type of traffic and should either be omitted or set to root. Table 17.9. TCP/UDP/SCTP protocol types Attribute Name Datatype Definition srcmacaddr MAC_ADDR MAC address of sender srcipaddr IP_ADDR Source IP address srcipmask IP_MASK Mask applied to source IP address dstipaddr IP_ADDR Destination IP address dstipmask IP_MASK Mask applied to destination IP address scripto IP_ADDR Start of range of source IP address srcipfrom IP_ADDR End of range of source IP address dstipfrom IP_ADDR Start of range of destination IP address dstipto IP_ADDR End of range of destination IP address scrportstart UNIT16 Start of range of valid source ports; requires protocol srcportend UINT16 End of range of valid source ports; requires protocol dstportstart UNIT16 Start of range of valid destination ports; requires protocol dstportend UNIT16 End of range of valid destination ports; requires protocol comment STRING text string up to 256 characters state STRING comma separated list of NEW,ESTABLISHED,RELATED,INVALID or NONE flags STRING TCP-only: format of mask/flags with mask and flags each being a comma separated list of SYN,ACK,URG,PSH,FIN,RST or NONE or ALL ipset STRING The name of an IPSet managed outside of libvirt ipsetflags IPSETFLAGS flags for the IPSet; requires ipset attribute 17.14.10.8. ICMP Protocol ID: icmp Note: The chain parameter is ignored for this type of traffic and should either be omitted or set to root. Table 17.10. 
ICMP protocol types Attribute Name Datatype Definition srcmacaddr MAC_ADDR MAC address of sender srcmacmask MAC_MASK Mask applied to the MAC address of the sender dstmacaddr MAD_ADDR MAC address of the destination dstmacmask MAC_MASK Mask applied to the MAC address of the destination srcipaddr IP_ADDR Source IP address srcipmask IP_MASK Mask applied to source IP address dstipaddr IP_ADDR Destination IP address dstipmask IP_MASK Mask applied to destination IP address srcipfrom IP_ADDR start of range of source IP address scripto IP_ADDR end of range of source IP address dstipfrom IP_ADDR Start of range of destination IP address dstipto IP_ADDR End of range of destination IP address type UNIT16 ICMP type code UNIT16 ICMP code comment STRING text string up to 256 characters state STRING comma separated list of NEW,ESTABLISHED,RELATED,INVALID or NONE ipset STRING The name of an IPSet managed outside of libvirt ipsetflags IPSETFLAGS flags for the IPSet; requires ipset attribute 17.14.10.9. IGMP, ESP, AH, UDPLITE, 'ALL' Protocol ID: igmp, esp, ah, udplite, all The chain parameter is ignored for this type of traffic and should either be omitted or set to root. Table 17.11. IGMP, ESP, AH, UDPLITE, 'ALL' Attribute Name Datatype Definition srcmacaddr MAC_ADDR MAC address of sender srcmacmask MAC_MASK Mask applied to the MAC address of the sender dstmacaddr MAD_ADDR MAC address of the destination dstmacmask MAC_MASK Mask applied to the MAC address of the destination srcipaddr IP_ADDR Source IP address srcipmask IP_MASK Mask applied to source IP address dstipaddr IP_ADDR Destination IP address dstipmask IP_MASK Mask applied to destination IP address srcipfrom IP_ADDR start of range of source IP address scripto IP_ADDR end of range of source IP address dstipfrom IP_ADDR Start of range of destination IP address dstipto IP_ADDR End of range of destination IP address comment STRING text string up to 256 characters state STRING comma separated list of NEW,ESTABLISHED,RELATED,INVALID or NONE ipset STRING The name of an IPSet managed outside of libvirt ipsetflags IPSETFLAGS flags for the IPSet; requires ipset attribute 17.14.10.10. TCP/UDP/SCTP over IPV6 Protocol ID: tcp-ipv6, udp-ipv6, sctp-ipv6 The chain parameter is ignored for this type of traffic and should either be omitted or set to root. Table 17.12. TCP, UDP, SCTP over IPv6 protocol types Attribute Name Datatype Definition srcmacaddr MAC_ADDR MAC address of sender srcipaddr IP_ADDR Source IP address srcipmask IP_MASK Mask applied to source IP address dstipaddr IP_ADDR Destination IP address dstipmask IP_MASK Mask applied to destination IP address srcipfrom IP_ADDR start of range of source IP address scripto IP_ADDR end of range of source IP address dstipfrom IP_ADDR Start of range of destination IP address dstipto IP_ADDR End of range of destination IP address srcportstart UINT16 Start of range of valid source ports srcportend UINT16 End of range of valid source ports dstportstart UINT16 Start of range of valid destination ports dstportend UINT16 End of range of valid destination ports comment STRING text string up to 256 characters state STRING comma separated list of NEW,ESTABLISHED,RELATED,INVALID or NONE ipset STRING The name of an IPSet managed outside of libvirt ipsetflags IPSETFLAGS flags for the IPSet; requires ipset attribute 17.14.10.11. ICMPv6 Protocol ID: icmpv6 The chain parameter is ignored for this type of traffic and should either be omitted or set to root. Table 17.13. 
ICMPv6 protocol types Attribute Name Datatype Definition srcmacaddr MAC_ADDR MAC address of sender srcipaddr IP_ADDR Source IP address srcipmask IP_MASK Mask applied to source IP address dstipaddr IP_ADDR Destination IP address dstipmask IP_MASK Mask applied to destination IP address srcipfrom IP_ADDR start of range of source IP address scripto IP_ADDR end of range of source IP address dstipfrom IP_ADDR Start of range of destination IP address dstipto IP_ADDR End of range of destination IP address type UINT16 ICMPv6 type code UINT16 ICMPv6 code comment STRING text string up to 256 characters state STRING comma separated list of NEW,ESTABLISHED,RELATED,INVALID or NONE ipset STRING The name of an IPSet managed outside of libvirt ipsetflags IPSETFLAGS flags for the IPSet; requires ipset attribute 17.14.10.12. IGMP, ESP, AH, UDPLITE, 'ALL' over IPv6 Protocol ID: igmp-ipv6, esp-ipv6, ah-ipv6, udplite-ipv6, all-ipv6 The chain parameter is ignored for this type of traffic and should either be omitted or set to root. Table 17.14. IGMP, ESP, AH, UDPLITE, 'ALL' over IPv6 protocol types Attribute Name Datatype Definition srcmacaddr MAC_ADDR MAC address of sender srcipaddr IP_ADDR Source IP address srcipmask IP_MASK Mask applied to source IP address dstipaddr IP_ADDR Destination IP address dstipmask IP_MASK Mask applied to destination IP address srcipfrom IP_ADDR start of range of source IP address scripto IP_ADDR end of range of source IP address dstipfrom IP_ADDR Start of range of destination IP address dstipto IP_ADDR End of range of destination IP address comment STRING text string up to 256 characters state STRING comma separated list of NEW,ESTABLISHED,RELATED,INVALID or NONE ipset STRING The name of an IPSet managed outside of libvirt ipsetflags IPSETFLAGS flags for the IPSet; requires ipset attribute 17.14.11. Advanced Filter Configuration Topics The following sections discuss advanced filter configuration topics. 17.14.11.1. Connection tracking The network filtering subsystem (on Linux) makes use of the connection tracking support of IP tables. This helps in enforcing the direction of the network traffic (state match) as well as counting and limiting the number of simultaneous connections towards a guest virtual machine. As an example, if a guest virtual machine has TCP port 8080 open as a server, clients may connect to the guest virtual machine on port 8080. Connection tracking and enforcement of the direction and then prevents the guest virtual machine from initiating a connection from (TCP client) port 8080 to the host physical machine back to a remote host physical machine. More importantly, tracking helps to prevent remote attackers from establishing a connection back to a guest virtual machine. For example, if the user inside the guest virtual machine established a connection to port 80 on an attacker site, the attacker will not be able to initiate a connection from TCP port 80 back towards the guest virtual machine. By default the connection state match that enables connection tracking and then enforcement of the direction of traffic is turned on. Example 17.9. XML example for turning off connections to the TCP port The following shows an example XML fragment where this feature has been turned off for incoming connections to TCP port 12345. This now allows incoming traffic to TCP port 12345, but would also enable the initiation from (client) TCP port 12345 within the VM, which may or may not be desirable. 17.14.11.2. 
Limiting number of connections To limit the number of connections a guest virtual machine may establish, a rule must be provided that sets a limit of connections for a given type of traffic. If for example a VM is supposed to be allowed to only ping one other IP address at a time and is supposed to have only one active incoming ssh connection at a time. Example 17.10. XML sample file that sets limits to connections The following XML fragment can be used to limit connections Note Limitation rules must be listed in the XML prior to the rules for accepting traffic. According to the XML file in Example 17.10, "XML sample file that sets limits to connections" , an additional rule for allowing DNS traffic sent to port 22 go out the guest virtual machine, has been added to avoid ssh sessions not getting established for reasons related to DNS lookup failures by the ssh daemon. Leaving this rule out may result in the ssh client hanging unexpectedly as it tries to connect. Additional caution should be used in regards to handling timeouts related to tracking of traffic. An ICMP ping that the user may have terminated inside the guest virtual machine may have a long timeout in the host physical machine's connection tracking system and will therefore not allow another ICMP ping to go through. The best solution is to tune the timeout in the host physical machine's sysfs with the following command:# echo 3 > /proc/sys/net/netfilter/nf_conntrack_icmp_timeout . This command sets the ICMP connection tracking timeout to 3 seconds. The effect of this is that once one ping is terminated, another one can start after 3 seconds. If for any reason the guest virtual machine has not properly closed its TCP connection, the connection to be held open for a longer period of time, especially if the TCP timeout value was set for a large amount of time on the host physical machine. In addition, any idle connection may result in a timeout in the connection tracking system which can be re-activated once packets are exchanged. However, if the limit is set too low, newly initiated connections may force an idle connection into TCP backoff. Therefore, the limit of connections should be set rather high so that fluctuations in new TCP connections do not cause odd traffic behavior in relation to idle connections. 17.14.11.3. Command-line tools virsh has been extended with life-cycle support for network filters. All commands related to the network filtering subsystem start with the prefix nwfilter . The following commands are available: nwfilter-list : lists UUIDs and names of all network filters nwfilter-define : defines a new network filter or updates an existing one (must supply a name) nwfilter-undefine : deletes a specified network filter (must supply a name). Do not delete a network filter currently in use. nwfilter-dumpxml : displays a specified network filter (must supply a name) nwfilter-edit : edits a specified network filter (must supply a name) 17.14.11.4. Pre-existing network filters The following is a list of example network filters that are automatically installed with libvirt: Table 17.15. ICMPv6 protocol types Protocol Name Description allow-arp Accepts all incoming and outgoing Address Resolution Protocol (ARP) traffic to a guest virtual machine. no-arp-spoofing , no-arp-mac-spoofing , and no-arp-ip-spoofing These filters prevent a guest virtual machine from spoofing ARP traffic. 
In addition, they only allows ARP request and reply messages, and enforce that those packets contain: no-arp-spoofing - the MAC and IP addresses of the guest no-arp-mac-spoofing - the MAC address of the guest no-arp-ip-spoofing - the IP address of the guest low-dhcp Allows a guest virtual machine to request an IP address via DHCP (from any DHCP server). low-dhcp-server Allows a guest virtual machine to request an IP address from a specified DHCP server. The dotted decimal IP address of the DHCP server must be provided in a reference to this filter. The name of the variable must be DHCPSERVER . low-ipv4 Accepts all incoming and outgoing IPv4 traffic to a virtual machine. low-incoming-ipv4 Accepts only incoming IPv4 traffic to a virtual machine. This filter is a part of the clean-traffic filter. no-ip-spoofing Prevents a guest virtual machine from sending IP packets with a source IP address different from the one inside the packet. This filter is a part of the clean-traffic filter. no-ip-multicast Prevents a guest virtual machine from sending IP multicast packets. no-mac-broadcast Prevents outgoing IPv4 traffic to a specified MAC address. This filter is a part of the clean-traffic filter. no-other-l2-traffic Prevents all layer 2 networking traffic except traffic specified by other filters used by the network. This filter is a part of the clean-traffic filter. no-other-rarp-traffic , qemu-announce-self , qemu-announce-self-rarp These filters allow QEMU's self-announce Reverse Address Resolution Protocol (RARP) packets, but prevent all other RARP traffic. All of them are also included in the clean-traffic filter. clean-traffic Prevents MAC, IP and ARP spoofing. This filter references several other filters as building blocks. These filters are only building blocks and require a combination with other filters to provide useful network traffic filtering. The most used one in the above list is the clean-traffic filter. This filter itself can for example be combined with the no-ip-multicast filter to prevent virtual machines from sending IP multicast traffic on top of the prevention of packet spoofing. 17.14.11.5. Writing your own filters Since libvirt only provides a couple of example networking filters, you may consider writing your own. When planning on doing so there are a couple of things you may need to know regarding the network filtering subsystem and how it works internally. Certainly you also have to know and understand the protocols very well that you want to be filtering on so that no further traffic than what you want can pass and that in fact the traffic you want to allow does pass. The network filtering subsystem is currently only available on Linux host physical machines and only works for QEMU and KVM type of virtual machines. On Linux, it builds upon the support for ebtables, iptables and ip6tables and makes use of their features. Considering the list found in Section 17.14.10, "Supported Protocols" the following protocols can be implemented using ebtables: mac stp (spanning tree protocol) vlan (802.1Q) arp, rarp ipv4 ipv6 Any protocol that runs over IPv4 is supported using iptables, those over IPv6 are implemented using ip6tables. Using a Linux host physical machine, all traffic filtering rules created by libvirt's network filtering subsystem first passes through the filtering support implemented by ebtables and only afterwards through iptables or ip6tables filters. 
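As a brief illustration of combining the pre-existing filters listed above, the following sketch defines a small wrapper filter that references both clean-traffic and no-ip-multicast and loads it with the virsh nwfilter commands described earlier. This is a minimal example written for illustration rather than taken from libvirt itself; the filter name combined-clean-traffic and the file name are arbitrary choices.
# cat > combined-clean-traffic.xml <<'EOF'
<filter name='combined-clean-traffic' chain='root'>
  <!-- building-block filters shipped with libvirt -->
  <filterref filter='clean-traffic'/>
  <filterref filter='no-ip-multicast'/>
</filter>
EOF
# virsh nwfilter-define combined-clean-traffic.xml
# virsh nwfilter-list
A guest virtual machine's interface would then reference combined-clean-traffic through a single filterref element, in the same way the earlier interface examples reference clean-traffic.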
If a filter tree has rules with the protocols including: mac, stp, vlan arp, rarp, ipv4, or ipv6; the ebtable rules and values listed will automatically be used first. Multiple chains for the same protocol can be created. The name of the chain must have a prefix of one of the previously enumerated protocols. To create an additional chain for handling of ARP traffic, a chain with name arp-test, can for example be specified. As an example, it is possible to filter on UDP traffic by source and destination ports using the ip protocol filter and specifying attributes for the protocol, source and destination IP addresses and ports of UDP packets that are to be accepted. This allows early filtering of UDP traffic with ebtables. However, once an IP or IPv6 packet, such as a UDP packet, has passed the ebtables layer and there is at least one rule in a filter tree that instantiates iptables or ip6tables rules, a rule to let the UDP packet pass will also be necessary to be provided for those filtering layers. This can be achieved with a rule containing an appropriate udp or udp-ipv6 traffic filtering node. Example 17.11. Creating a custom filter Suppose a filter is needed to fulfill the following list of requirements: prevents a VM's interface from MAC, IP and ARP spoofing opens only TCP ports 22 and 80 of a VM's interface allows the VM to send ping traffic from an interface but not let the VM be pinged on the interface allows the VM to do DNS lookups (UDP towards port 53) The requirement to prevent spoofing is fulfilled by the existing clean-traffic network filter, thus the way to do this is to reference it from a custom filter. To enable traffic for TCP ports 22 and 80, two rules are added to enable this type of traffic. To allow the guest virtual machine to send ping traffic a rule is added for ICMP traffic. For simplicity reasons, general ICMP traffic will be allowed to be initiated from the guest virtual machine, and will not be specified to ICMP echo request and response messages. All other traffic will be prevented to reach or be initiated by the guest virtual machine. To do this a rule will be added that drops all other traffic. Assuming the guest virtual machine is called test and the interface to associate our filter with is called eth0 , a filter is created named test-eth0 . The result of these considerations is the following network filter XML: 17.14.11.6. Sample custom filter Although one of the rules in the above XML contains the IP address of the guest virtual machine as either a source or a destination address, the filtering of the traffic works correctly. The reason is that whereas the rule's evaluation occurs internally on a per-interface basis, the rules are additionally evaluated based on which (tap) interface has sent or will receive the packet, rather than what their source or destination IP address may be. Example 17.12. Sample XML for network interface descriptions An XML fragment for a possible network interface description inside the domain XML of the test guest virtual machine could then look like this: To more strictly control the ICMP traffic and enforce that only ICMP echo requests can be sent from the guest virtual machine and only ICMP echo responses be received by the guest virtual machine, the above ICMP rule can be replaced with the following two rules: Example 17.13. 
Second example custom filter This example demonstrates how to build a similar filter as in the example above, but extends the list of requirements with an ftp server located inside the guest virtual machine. The requirements for this filter are: prevents a guest virtual machine's interface from MAC, IP, and ARP spoofing opens only TCP ports 22 and 80 in a guest virtual machine's interface allows the guest virtual machine to send ping traffic from an interface but does not allow the guest virtual machine to be pinged on the interface allows the guest virtual machine to do DNS lookups (UDP towards port 53) enables the ftp server (in active mode) so it can run inside the guest virtual machine The additional requirement of allowing an FTP server to be run inside the guest virtual machine maps into the requirement of allowing port 21 to be reachable for FTP control traffic as well as enabling the guest virtual machine to establish an outgoing TCP connection originating from the guest virtual machine's TCP port 20 back to the FTP client (FTP active mode). There are several ways of how this filter can be written and two possible solutions are included in this example. The first solution makes use of the state attribute of the TCP protocol that provides a hook into the connection tracking framework of the Linux host physical machine. For the guest virtual machine-initiated FTP data connection (FTP active mode) the RELATED state is used to enable detection that the guest virtual machine-initiated FTP data connection is a consequence of ( or 'has a relationship with' ) an existing FTP control connection, thereby allowing it to pass packets through the firewall. The RELATED state, however, is only valid for the very first packet of the outgoing TCP connection for the FTP data path. Afterwards, the state is ESTABLISHED, which then applies equally to the incoming and outgoing direction. All this is related to the FTP data traffic originating from TCP port 20 of the guest virtual machine. This then leads to the following solution: Before trying out a filter using the RELATED state, you have to make sure that the appropriate connection tracking module has been loaded into the host physical machine's kernel. Depending on the version of the kernel, you must run either one of the following two commands before the FTP connection with the guest virtual machine is established: modprobe nf_conntrack_ftp - where available OR modprobe ip_conntrack_ftp if above is not available If protocols other than FTP are used in conjunction with the RELATED state, their corresponding module must be loaded. Modules are available for the protocols: ftp, tftp, irc, sip, sctp, and amanda. The second solution makes use of the state flags of connections more than the solution did. This solution takes advantage of the fact that the NEW state of a connection is valid when the very first packet of a traffic flow is detected. Subsequently, if the very first packet of a flow is accepted, the flow becomes a connection and thus enters into the ESTABLISHED state. Therefore, a general rule can be written for allowing packets of ESTABLISHED connections to reach the guest virtual machine or be sent by the guest virtual machine. This is done writing specific rules for the very first packets identified by the NEW state and dictates the ports that the data is acceptable. All packets meant for ports that are not explicitly accepted are dropped, thus not reaching an ESTABLISHED state. Any subsequent packets sent from that port are dropped as well. 
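Before testing either solution, it is worth confirming that the FTP connection tracking helper mentioned above is available and loaded on the host physical machine, and that the filter has actually been defined. The following sketch assumes the filter XML has been saved to a file named test-eth0.xml; the file name is an arbitrary choice, and which modprobe command succeeds depends on the kernel version, as noted above.
# lsmod | grep conntrack_ftp           # check whether an FTP helper module is already loaded
# modprobe nf_conntrack_ftp            # newer kernels
# modprobe ip_conntrack_ftp            # older kernels, if the module above is not available
# virsh nwfilter-define test-eth0.xml  # load (or update) the filter definition
# virsh nwfilter-dumpxml test-eth0     # display the rules libvirt has stored
If the filter needs to be adjusted later, virsh nwfilter-edit test-eth0 opens the same definition in an editor.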
17.14.12. Limitations The following is a list of the currently known limitations of the network filtering subsystem. VM migration is only supported if the whole filter tree that is referenced by a guest virtual machine's top-level filter is also available on the target host physical machine. The network filter clean-traffic , for example, should be available on all libvirt installations, thus enabling migration of guest virtual machines that reference this filter. To ensure that version compatibility is not a problem, make sure you are using the most current version of libvirt by updating the package regularly. Migration must occur between libvirt installations of version 0.8.1 or later in order not to lose the network traffic filters associated with an interface. VLAN (802.1Q) packets, if sent by a guest virtual machine, cannot be filtered with rules for protocol IDs arp, rarp, ipv4 and ipv6. They can only be filtered with the protocol IDs mac and vlan. Therefore, the example filter clean-traffic Example 17.1, "An example of network filtering" will not work as expected. | [
"<devices> <interface type='bridge'> <mac address='00:16:3e:5d:c7:9e'/> <filterref filter='clean-traffic'/> </interface> </devices>",
"<devices> <interface type='bridge'> <mac address='00:16:3e:5d:c7:9e'/> <filterref filter='clean-traffic'> <parameter name='IP' value='10.0.0.1'/> </filterref> </interface> </devices>",
"<filter name='no-arp-spoofing' chain='arp' priority='-500'> <uuid>f88f1932-debf-4aa1-9fbe-f10d3aa4bc95</uuid> <rule action='drop' direction='out' priority='300'> <mac match='no' srcmacaddr='USDMAC'/> </rule> <rule action='drop' direction='out' priority='350'> <arp match='no' arpsrcmacaddr='USDMAC'/> </rule> <rule action='drop' direction='out' priority='400'> <arp match='no' arpsrcipaddr='USDIP'/> </rule> <rule action='drop' direction='in' priority='450'> <arp opcode='Reply'/> <arp match='no' arpdstmacaddr='USDMAC'/> </rule> <rule action='drop' direction='in' priority='500'> <arp match='no' arpdstipaddr='USDIP'/> </rule> <rule action='accept' direction='inout' priority='600'> <arp opcode='Request'/> </rule> <rule action='accept' direction='inout' priority='650'> <arp opcode='Reply'/> </rule> <rule action='drop' direction='inout' priority='1000'/> </filter>",
"<devices> <interface type='bridge'> <mac address='00:16:3e:5d:c7:9e'/> <filterref filter='clean-traffic'> <parameter name='IP' value='10.0.0.1'/> <parameter name='IP' value='10.0.0.2'/> <parameter name='IP' value='10.0.0.3'/> </filterref> </interface> </devices>",
"<rule action='accept' direction='in' priority='500'> <tcp srpipaddr='USDIP'/> </rule>",
"<rule action='accept' direction='in' priority='500'> <udp dstportstart='USDDSTPORTS[1]'/> </rule>",
"<rule action='accept' direction='in' priority='500'> <ip srcipaddr='USDSRCIPADDRESSES[@1]' dstportstart='USDDSTPORTS[@2]'/> </rule>",
"SRCIPADDRESSES = [ 10.0.0.1, 11.1.2.3 ] DSTPORTS = [ 80, 8080 ]",
"<interface type='bridge'> <source bridge='virbr0'/> <filterref filter='clean-traffic'> <parameter name='CTRL_IP_LEARNING' value='dhcp'/> </filterref> </interface>",
"<filter name='clean-traffic'> <uuid>6ef53069-ba34-94a0-d33d-17751b9b8cb1</uuid> <filterref filter='no-mac-spoofing'/> <filterref filter='no-ip-spoofing'/> <filterref filter='allow-incoming-ipv4'/> <filterref filter='no-arp-spoofing'/> <filterref filter='no-other-l2-traffic'/> <filterref filter='qemu-announce-self'/> </filter>",
"<filter name='no-ip-spoofing' chain='ipv4'> <uuid>fce8ae33-e69e-83bf-262e-30786c1f8072</uuid> <rule action='drop' direction='out' priority='500'> <ip match='no' srcipaddr='USDIP'/> </rule> </filter>",
"[...] <rule action='drop' direction='in'> <protocol match='no' attribute1='value1' attribute2='value2'/> <protocol attribute3='value3'/> </rule> [...]",
"[...] <mac match='no' srcmacaddr='USDMAC'/> [...]",
"[...] <rule direction='in' action='accept' statematch='false'> <cp dstportstart='12345'/> </rule> [...]",
"[...] <rule action='drop' direction='in' priority='400'> <tcp connlimit-above='1'/> </rule> <rule action='accept' direction='in' priority='500'> <tcp dstportstart='22'/> </rule> <rule action='drop' direction='out' priority='400'> <icmp connlimit-above='1'/> </rule> <rule action='accept' direction='out' priority='500'> <icmp/> </rule> <rule action='accept' direction='out' priority='500'> <udp dstportstart='53'/> </rule> <rule action='drop' direction='inout' priority='1000'> <all/> </rule> [...]",
"<filter name='test-eth0'> <!- - This rule references the clean traffic filter to prevent MAC, IP and ARP spoofing. By not providing an IP address parameter, libvirt will detect the IP address the guest virtual machine is using. - -> <filterref filter='clean-traffic'/> <!- - This rule enables TCP ports 22 (ssh) and 80 (http) to be reachable - -> <rule action='accept' direction='in'> <tcp dstportstart='22'/> </rule> <rule action='accept' direction='in'> <tcp dstportstart='80'/> </rule> <!- - This rule enables general ICMP traffic to be initiated by the guest virtual machine including ping traffic - -> <rule action='accept' direction='out'> <icmp/> </rule>> <!- - This rule enables outgoing DNS lookups using UDP - -> <rule action='accept' direction='out'> <udp dstportstart='53'/> </rule> <!- - This rule drops all other traffic - -> <rule action='drop' direction='inout'> <all/> </rule> </filter>",
"[...] <interface type='bridge'> <source bridge='mybridge'/> <filterref filter='test-eth0'/> </interface> [...]",
"<!- - enable outgoing ICMP echo requests- -> <rule action='accept' direction='out'> <icmp type='8'/> </rule>",
"<!- - enable incoming ICMP echo replies- -> <rule action='accept' direction='in'> <icmp type='0'/> </rule>",
"<filter name='test-eth0'> <!- - This filter (eth0) references the clean traffic filter to prevent MAC, IP, and ARP spoofing. By not providing an IP address parameter, libvirt will detect the IP address the guest virtual machine is using. - -> <filterref filter='clean-traffic'/> <!- - This rule enables TCP port 21 (FTP-control) to be reachable - -> <rule action='accept' direction='in'> <tcp dstportstart='21'/> </rule> <!- - This rule enables TCP port 20 for guest virtual machine-initiated FTP data connection related to an existing FTP control connection - -> <rule action='accept' direction='out'> <tcp srcportstart='20' state='RELATED,ESTABLISHED'/> </rule> <!- - This rule accepts all packets from a client on the FTP data connection - -> <rule action='accept' direction='in'> <tcp dstportstart='20' state='ESTABLISHED'/> </rule> <!- - This rule enables TCP port 22 (SSH) to be reachable - -> <rule action='accept' direction='in'> <tcp dstportstart='22'/> </rule> <!- -This rule enables TCP port 80 (HTTP) to be reachable - -> <rule action='accept' direction='in'> <tcp dstportstart='80'/> </rule> <!- - This rule enables general ICMP traffic to be initiated by the guest virtual machine, including ping traffic - -> <rule action='accept' direction='out'> <icmp/> </rule> <!- - This rule enables outgoing DNS lookups using UDP - -> <rule action='accept' direction='out'> <udp dstportstart='53'/> </rule> <!- - This rule drops all other traffic - -> <rule action='drop' direction='inout'> <all/> </rule> </filter>",
"<filter name='test-eth0'> <!- - This filter references the clean traffic filter to prevent MAC, IP and ARP spoofing. By not providing and IP address parameter, libvirt will detect the IP address the VM is using. - -> <filterref filter='clean-traffic'/> <!- - This rule allows the packets of all previously accepted connections to reach the guest virtual machine - -> <rule action='accept' direction='in'> <all state='ESTABLISHED'/> </rule> <!- - This rule allows the packets of all previously accepted and related connections be sent from the guest virtual machine - -> <rule action='accept' direction='out'> <all state='ESTABLISHED,RELATED'/> </rule> <!- - This rule enables traffic towards port 21 (FTP) and port 22 (SSH)- -> <rule action='accept' direction='in'> <tcp dstportstart='21' dstportend='22' state='NEW'/> </rule> <!- - This rule enables traffic towards port 80 (HTTP) - -> <rule action='accept' direction='in'> <tcp dstportstart='80' state='NEW'/> </rule> <!- - This rule enables general ICMP traffic to be initiated by the guest virtual machine, including ping traffic - -> <rule action='accept' direction='out'> <icmp state='NEW'/> </rule> <!- - This rule enables outgoing DNS lookups using UDP - -> <rule action='accept' direction='out'> <udp dstportstart='53' state='NEW'/> </rule> <!- - This rule drops all other traffic - -> <rule action='drop' direction='inout'> <all/> </rule> </filter>"
]
| https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/virtualization_deployment_and_administration_guide/sect-Virtual_Networking-Applying_network_filtering |
Chapter 14. ImageTagMirrorSet [config.openshift.io/v1] | Chapter 14. ImageTagMirrorSet [config.openshift.io/v1] Description ImageTagMirrorSet holds cluster-wide information about how to handle registry mirror rules on using tag pull specification. When multiple policies are defined, the outcome of the behavior is defined on each field. Compatibility level 1: Stable within a major release for a minimum of 12 months or 3 minor releases (whichever is longer). Type object Required spec 14.1. Specification Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ObjectMeta Standard object's metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata spec object spec holds user settable values for configuration status object status contains the observed state of the resource. 14.1.1. .spec Description spec holds user settable values for configuration Type object Property Type Description imageTagMirrors array imageTagMirrors allows images referenced by image tags in pods to be pulled from alternative mirrored repository locations. The image pull specification provided to the pod will be compared to the source locations described in imageTagMirrors and the image may be pulled down from any of the mirrors in the list instead of the specified repository allowing administrators to choose a potentially faster mirror. To use mirrors to pull images using digest specification only, users should configure a list of mirrors using "ImageDigestMirrorSet" CRD. If the image pull specification matches the repository of "source" in multiple imagetagmirrorset objects, only the objects which define the most specific namespace match will be used. For example, if there are objects using quay.io/libpod and quay.io/libpod/busybox as the "source", only the objects using quay.io/libpod/busybox are going to apply for pull specification quay.io/libpod/busybox. Each "source" repository is treated independently; configurations for different "source" repositories don't interact. If the "mirrors" is not specified, the image will continue to be pulled from the specified repository in the pull spec. When multiple policies are defined for the same "source" repository, the sets of defined mirrors will be merged together, preserving the relative order of the mirrors, if possible. For example, if policy A has mirrors a, b, c and policy B has mirrors c, d, e , the mirrors will be used in the order a, b, c, d, e . If the orders of mirror entries conflict (e.g. a, b vs. b, a ) the configuration is not rejected but the resulting order is unspecified. Users who want to use a deterministic order of mirrors, should configure them into one list of mirrors using the expected order. imageTagMirrors[] object ImageTagMirrors holds cluster-wide information about how to handle mirrors in the registries config. 14.1.2. 
.spec.imageTagMirrors Description imageTagMirrors allows images referenced by image tags in pods to be pulled from alternative mirrored repository locations. The image pull specification provided to the pod will be compared to the source locations described in imageTagMirrors and the image may be pulled down from any of the mirrors in the list instead of the specified repository allowing administrators to choose a potentially faster mirror. To use mirrors to pull images using digest specification only, users should configure a list of mirrors using "ImageDigestMirrorSet" CRD. If the image pull specification matches the repository of "source" in multiple imagetagmirrorset objects, only the objects which define the most specific namespace match will be used. For example, if there are objects using quay.io/libpod and quay.io/libpod/busybox as the "source", only the objects using quay.io/libpod/busybox are going to apply for pull specification quay.io/libpod/busybox. Each "source" repository is treated independently; configurations for different "source" repositories don't interact. If the "mirrors" is not specified, the image will continue to be pulled from the specified repository in the pull spec. When multiple policies are defined for the same "source" repository, the sets of defined mirrors will be merged together, preserving the relative order of the mirrors, if possible. For example, if policy A has mirrors a, b, c and policy B has mirrors c, d, e , the mirrors will be used in the order a, b, c, d, e . If the orders of mirror entries conflict (e.g. a, b vs. b, a ) the configuration is not rejected but the resulting order is unspecified. Users who want to use a deterministic order of mirrors, should configure them into one list of mirrors using the expected order. Type array 14.1.3. .spec.imageTagMirrors[] Description ImageTagMirrors holds cluster-wide information about how to handle mirrors in the registries config. Type object Required source Property Type Description mirrorSourcePolicy string mirrorSourcePolicy defines the fallback policy if fails to pull image from the mirrors. If unset, the image will continue to be pulled from the repository in the pull spec. sourcePolicy is valid configuration only when one or more mirrors are in the mirror list. mirrors array (string) mirrors is zero or more locations that may also contain the same images. No mirror will be configured if not specified. Images can be pulled from these mirrors only if they are referenced by their tags. The mirrored location is obtained by replacing the part of the input reference that matches source by the mirrors entry, e.g. for registry.redhat.io/product/repo reference, a (source, mirror) pair *.redhat.io, mirror.local/redhat causes a mirror.local/redhat/product/repo repository to be used. Pulling images by tag can potentially yield different images, depending on which endpoint we pull from. Configuring a list of mirrors using "ImageDigestMirrorSet" CRD and forcing digest-pulls for mirrors avoids that issue. The order of mirrors in this list is treated as the user's desired priority, while source is by default considered lower priority than all mirrors. If no mirror is specified or all image pulls from the mirror list fail, the image will continue to be pulled from the repository in the pull spec unless explicitly prohibited by "mirrorSourcePolicy". 
Other cluster configuration, including (but not limited to) other imageTagMirrors objects, may impact the exact order mirrors are contacted in, or some mirrors may be contacted in parallel, so this should be considered a preference rather than a guarantee of ordering. "mirrors" uses one of the following formats: host[:port] host[:port]/namespace[/namespace...] host[:port]/namespace[/namespace...]/repo for more information about the format, see the document about the location field: https://github.com/containers/image/blob/main/docs/containers-registries.conf.5.md#choosing-a-registry-toml-table source string source matches the repository that users refer to, e.g. in image pull specifications. Setting source to a registry hostname e.g. docker.io. quay.io, or registry.redhat.io, will match the image pull specification of corressponding registry. "source" uses one of the following formats: host[:port] host[:port]/namespace[/namespace...] host[:port]/namespace[/namespace...]/repo [*.]host for more information about the format, see the document about the location field: https://github.com/containers/image/blob/main/docs/containers-registries.conf.5.md#choosing-a-registry-toml-table 14.1.4. .status Description status contains the observed state of the resource. Type object 14.2. API endpoints The following API endpoints are available: /apis/config.openshift.io/v1/imagetagmirrorsets DELETE : delete collection of ImageTagMirrorSet GET : list objects of kind ImageTagMirrorSet POST : create an ImageTagMirrorSet /apis/config.openshift.io/v1/imagetagmirrorsets/{name} DELETE : delete an ImageTagMirrorSet GET : read the specified ImageTagMirrorSet PATCH : partially update the specified ImageTagMirrorSet PUT : replace the specified ImageTagMirrorSet /apis/config.openshift.io/v1/imagetagmirrorsets/{name}/status GET : read status of the specified ImageTagMirrorSet PATCH : partially update status of the specified ImageTagMirrorSet PUT : replace status of the specified ImageTagMirrorSet 14.2.1. /apis/config.openshift.io/v1/imagetagmirrorsets HTTP method DELETE Description delete collection of ImageTagMirrorSet Table 14.1. HTTP responses HTTP code Reponse body 200 - OK Status schema 401 - Unauthorized Empty HTTP method GET Description list objects of kind ImageTagMirrorSet Table 14.2. HTTP responses HTTP code Reponse body 200 - OK ImageTagMirrorSetList schema 401 - Unauthorized Empty HTTP method POST Description create an ImageTagMirrorSet Table 14.3. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. 
This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 14.4. Body parameters Parameter Type Description body ImageTagMirrorSet schema Table 14.5. HTTP responses HTTP code Reponse body 200 - OK ImageTagMirrorSet schema 201 - Created ImageTagMirrorSet schema 202 - Accepted ImageTagMirrorSet schema 401 - Unauthorized Empty 14.2.2. /apis/config.openshift.io/v1/imagetagmirrorsets/{name} Table 14.6. Global path parameters Parameter Type Description name string name of the ImageTagMirrorSet HTTP method DELETE Description delete an ImageTagMirrorSet Table 14.7. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed Table 14.8. HTTP responses HTTP code Reponse body 200 - OK Status schema 202 - Accepted Status schema 401 - Unauthorized Empty HTTP method GET Description read the specified ImageTagMirrorSet Table 14.9. HTTP responses HTTP code Reponse body 200 - OK ImageTagMirrorSet schema 401 - Unauthorized Empty HTTP method PATCH Description partially update the specified ImageTagMirrorSet Table 14.10. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 14.11. HTTP responses HTTP code Reponse body 200 - OK ImageTagMirrorSet schema 401 - Unauthorized Empty HTTP method PUT Description replace the specified ImageTagMirrorSet Table 14.12. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. 
Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 14.13. Body parameters Parameter Type Description body ImageTagMirrorSet schema Table 14.14. HTTP responses HTTP code Reponse body 200 - OK ImageTagMirrorSet schema 201 - Created ImageTagMirrorSet schema 401 - Unauthorized Empty 14.2.3. /apis/config.openshift.io/v1/imagetagmirrorsets/{name}/status Table 14.15. Global path parameters Parameter Type Description name string name of the ImageTagMirrorSet HTTP method GET Description read status of the specified ImageTagMirrorSet Table 14.16. HTTP responses HTTP code Reponse body 200 - OK ImageTagMirrorSet schema 401 - Unauthorized Empty HTTP method PATCH Description partially update status of the specified ImageTagMirrorSet Table 14.17. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 14.18. HTTP responses HTTP code Reponse body 200 - OK ImageTagMirrorSet schema 401 - Unauthorized Empty HTTP method PUT Description replace status of the specified ImageTagMirrorSet Table 14.19. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. 
Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 14.20. Body parameters Parameter Type Description body ImageTagMirrorSet schema Table 14.21. HTTP responses HTTP code Reponse body 200 - OK ImageTagMirrorSet schema 201 - Created ImageTagMirrorSet schema 401 - Unauthorized Empty | null | https://docs.redhat.com/en/documentation/openshift_container_platform/4.15/html/config_apis/imagetagmirrorset-config-openshift-io-v1 |
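To make the API description above more concrete, the following is a minimal, hypothetical ImageTagMirrorSet manifest together with the commands to create and inspect it. The registry host names are placeholders and optional fields such as mirrorSourcePolicy are omitted; adapt the source and mirrors values to the registries actually in use.
$ cat > example-itms.yaml <<'EOF'
apiVersion: config.openshift.io/v1
kind: ImageTagMirrorSet
metadata:
  name: example-itms
spec:
  imageTagMirrors:
  - source: registry.example.com/team      # placeholder source repository
    mirrors:
    - mirror.example.net/team              # placeholder mirror location
EOF
$ oc apply -f example-itms.yaml
$ oc get imagetagmirrorset example-itms -o yaml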
10. Kernel | 10. Kernel Kdump Auto Enablement Kdump is now enabled by default on systems with large amounts of memory. Specifically, kdump is enabled by default on: systems with more than 4GB of memory on architectures with a 4KB page size (i.e. x86 or x86_64), or systems with more than 8GB of memory on architectures with larger than a 4KB page size (i.e PPC64). On systems with less than the above memory configurations, kdump is not auto enabled. Refer to /usr/share/doc/kexec-tools-2.0.0/kexec-kdump-howto.txt for instructions on enabling kdump on these systems. crashkernel parameter syntax Please note that in future versions of Red Hat Enterprise Linux 6 (i.e. Red Hat Enterprise Linux 6.1 and later) the auto value setting of the crashkernel= parameter (i.e. crashkernel=auto ) will be deprecated. Barrier Implementation in the Kernel The barrier implementation in the Red Hat Enterprise Linux 6 kernel works by completely draining the I/O scheduler's queue, then issuing a preflush, a barrier, and finally a postflush request. However, since the supported file systems in Red Hat Enterprise Linux 6 all implement their own ordering guarantees, the block layer need only provide a mechanism to ensure that a barrier request is ordered with respect to other I/O already in the disk cache. This mechanism avoids I/O stalls experienced by queue draining. The block layer will be updated in future kernels to provide this more efficient mechanism of ensuring ordering. Workloads that include heavy fsync or metadata activity will see an overall improvement in disk performance. Users taking advantage of the proportional weight I/O controller will also see a boost in performance. In preparation for the block layer updates, third party file system developers need to ensure that data ordering surrounding journal commits are handled within the file system itself, since the block layer will no longer provide this functionality. These future block layer improvements will change some kernel interfaces such that symbols which are not on the kABI whitelist shall be modified. This may result in the need to recompile third party file system or storage drivers. Systemtap Tracepoints The following 3 virtual memory tracepoints are deprecated in Red Hat Enterprise Linux 6 trace_mm_background_writeout(unsigned long written) trace_mm_olddata_writeout(unsigned long written) trace_mm_balancedirty_writeout(unsigned long written) 10.1. Technology Previews Remote Audit Logging The audit package contains the user space utilities for storing and searching the audit records generated by the audit subsystem in the Linux 2.6 kernel. Within the audispd-plugins subpackage is a utility that allows for the transmission of audit events to a remote aggregating machine. This remote audit logging application, audisp-remote, is considered a Technology Preview in Red Hat Enterprise Linux 6. Linux (NameSpace) Container [LXC] Linux (NameSpace) Containers [LXC] is a Technology Preview feature in Red Hat Enterprise Linux 6 Beta that provides isolation of resources assigned to one or more processes. A process is assigned a separate user permission, networking, filesystem name space from its parent. | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/6.0_technical_notes/kernel |
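As a rough sketch of how an administrator might confirm the kdump behavior described above on a Red Hat Enterprise Linux 6 system, the following commands show the crashkernel= setting the system booted with, the kdump service state, and the amount of installed memory to compare against the auto-enablement thresholds. These are standard RHEL 6 commands rather than anything specific to these notes.
# grep -o 'crashkernel=[^ ]*' /proc/cmdline   # crashkernel value on the running kernel command line
# service kdump status                        # whether the kdump service is operational
# chkconfig --list kdump                      # whether kdump is enabled at boot
# free -g                                     # installed memory, to compare against the 4GB/8GB thresholds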
Chapter 2. Installing the Self-hosted Engine Deployment Host | Chapter 2. Installing the Self-hosted Engine Deployment Host A self-hosted engine can be deployed from a Red Hat Virtualization Host or a Red Hat Enterprise Linux host . Important If you plan to use bonded interfaces for high availability or VLANs to separate different types of traffic (for example, for storage or management connections), you should configure them on the host before beginning the self-hosted engine deployment. See Networking Recommendations in the Planning and Prerequisites Guide . 2.1. Installing Red Hat Virtualization Hosts Red Hat Virtualization Host (RHVH) is a minimal operating system based on Red Hat Enterprise Linux that is designed to provide a simple method for setting up a physical machine to act as a hypervisor in a Red Hat Virtualization environment. The minimal operating system contains only the packages required for the machine to act as a hypervisor, and features a Cockpit web interface for monitoring the host and performing administrative tasks. See Running Cockpit for the minimum browser requirements. RHVH supports NIST 800-53 partitioning requirements to improve security. RHVH uses a NIST 800-53 partition layout by default. The host must meet the minimum host requirements . Warning When installing or reinstalling the host's operating system, Red Hat strongly recommends that you first detach any existing non-OS storage that is attached to the host to avoid accidental initialization of these disks, and with that, potential data loss. Procedure Go to the Get Started with Red Hat Virtualization on the Red Hat Customer Portal and log in. Click Download Latest to access the product download page. Choose the appropriate Hypervisor Image for RHV from the list and click Download Now . Start the machine on which you are installing RHVH, booting from the prepared installation media. From the boot menu, select Install RHVH 4.4 and press Enter . Note You can also press the Tab key to edit the kernel parameters. Kernel parameters must be separated by a space, and you can boot the system using the specified kernel parameters by pressing the Enter key. Press the Esc key to clear any changes to the kernel parameters and return to the boot menu. Select a language, and click Continue . Select a keyboard layout from the Keyboard Layout screen and click Done . Select the device on which to install RHVH from the Installation Destination screen. Optionally, enable encryption. Click Done . Important Use the Automatically configure partitioning option. Select a time zone from the Time & Date screen and click Done . Select a network from the Network & Host Name screen and click Configure... to configure the connection details. Note To use the connection every time the system boots, select the Connect automatically with priority check box. For more information, see Configuring network and host name options in the Red Hat Enterprise Linux 8 Installation Guide . Enter a host name in the Host Name field, and click Done . Optional: Configure Security Policy and Kdump . See Customizing your RHEL installation using the GUI in Performing a standard RHEL installation for Red Hat Enterprise Linux 8 for more information on each of the sections in the Installation Summary screen. Click Begin Installation . Set a root password and, optionally, create an additional user while RHVH installs. Warning Do not create untrusted users on RHVH, as this can lead to exploitation of local security vulnerabilities. 
Click Reboot to complete the installation. Note When RHVH restarts, nodectl check performs a health check on the host and displays the result when you log in on the command line. The message node status: OK or node status: DEGRADED indicates the health status. Run nodectl check to get more information. Note If necessary, you can prevent kernel modules from loading automatically . 2.1.1. Enabling the Red Hat Virtualization Host Repository Register the system to receive updates. Red Hat Virtualization Host only requires one repository. This section provides instructions for registering RHVH with the Content Delivery Network , or with Red Hat Satellite 6 . Registering RHVH with the Content Delivery Network Enable the Red Hat Virtualization Host 8 repository to allow later updates to the Red Hat Virtualization Host: # subscription-manager repos --enable=rhvh-4-for-rhel-8-x86_64-rpms Registering RHVH with Red Hat Satellite 6 Log in to the Cockpit web interface at https:// HostFQDNorIP :9090 . Click Terminal . Register RHVH with Red Hat Satellite 6: Note You can also configure virtual machine subscriptions in Red Hat Satellite using virt-who. See Using virt-who to manage host-based subscriptions . 2.2. Installing Red Hat Enterprise Linux hosts A Red Hat Enterprise Linux host is based on a standard basic installation of Red Hat Enterprise Linux 8 on a physical server, with the Red Hat Enterprise Linux Server and Red Hat Virtualization subscriptions attached. For detailed installation instructions, see the Performing a standard RHEL installation . The host must meet the minimum host requirements . Warning When installing or reinstalling the host's operating system, Red Hat strongly recommends that you first detach any existing non-OS storage that is attached to the host to avoid accidental initialization of these disks, and with that, potential data loss. Important Virtualization must be enabled in your host's BIOS settings. For information on changing your host's BIOS settings, refer to your host's hardware documentation. Important Do not install third-party watchdogs on Red Hat Enterprise Linux hosts. They can interfere with the watchdog daemon provided by VDSM. 2.2.1. Enabling the Red Hat Enterprise Linux host Repositories To use a Red Hat Enterprise Linux machine as a host, you must register the system with the Content Delivery Network, attach the Red Hat Enterprise Linux Server and Red Hat Virtualization subscriptions, and enable the host repositories. 
Procedure Register your system with the Content Delivery Network, entering your Customer Portal user name and password when prompted: # subscription-manager register Find the Red Hat Enterprise Linux Server and Red Hat Virtualization subscription pools and record the pool IDs: # subscription-manager list --available Use the pool IDs to attach the subscriptions to the system: # subscription-manager attach --pool= poolid Note To view currently attached subscriptions: # subscription-manager list --consumed To list all enabled repositories: # dnf repolist Configure the repositories: # subscription-manager repos \ --disable='*' \ --enable=rhel-8-for-x86_64-baseos-eus-rpms \ --enable=rhel-8-for-x86_64-appstream-eus-rpms \ --enable=rhv-4-mgmt-agent-for-rhel-8-x86_64-rpms \ --enable=fast-datapath-for-rhel-8-x86_64-rpms \ --enable=advanced-virt-for-rhel-8-x86_64-rpms \ --enable=openstack-16.2-cinderlib-for-rhel-8-x86_64-rpms \ --enable=rhceph-4-tools-for-rhel-8-x86_64-rpms \ --enable=rhel-8-for-x86_64-appstream-tus-rpms \ --enable=rhel-8-for-x86_64-baseos-tus-rpms Set the RHEL version to 8.6: # subscription-manager release --set=8.6 Reset the virt module: # dnf module reset virt Note If this module is already enabled in the Advanced Virtualization stream, this step is not necessary, but it has no negative impact. You can see the value of the stream by entering: Enable the virt module in the Advanced Virtualization stream with the following command: For RHV 4.4.2: # dnf module enable virt:8.2 For RHV 4.4.3 to 4.4.5: # dnf module enable virt:8.3 For RHV 4.4.6 to 4.4.10: # dnf module enable virt:av For RHV 4.4 and later: Note Starting with RHEL 8.6 the Advanced virtualization packages will use the standard virt:rhel module. For RHEL 8.4 and 8.5, only one Advanced Virtualization stream is used, rhel:av . Ensure that all packages currently installed are up to date: # dnf upgrade --nobest Reboot the machine. Note If necessary, you can prevent kernel modules from loading automatically . Although the existing storage domains will be migrated from the standalone Manager, you must prepare additional storage for a self-hosted engine storage domain that is dedicated to the Manager virtual machine. | [
"subscription-manager repos --enable=rhvh-4-for-rhel-8-x86_64-rpms",
"rpm -Uvh http://satellite.example.com/pub/katello-ca-consumer-latest.noarch.rpm # subscription-manager register --org=\" org_id \" # subscription-manager list --available # subscription-manager attach --pool= pool_id # subscription-manager repos --disable='*' --enable=rhvh-4-for-rhel-8-x86_64-rpms",
"subscription-manager register",
"subscription-manager list --available",
"subscription-manager attach --pool= poolid",
"subscription-manager list --consumed",
"dnf repolist",
"subscription-manager repos --disable='*' --enable=rhel-8-for-x86_64-baseos-eus-rpms --enable=rhel-8-for-x86_64-appstream-eus-rpms --enable=rhv-4-mgmt-agent-for-rhel-8-x86_64-rpms --enable=fast-datapath-for-rhel-8-x86_64-rpms --enable=advanced-virt-for-rhel-8-x86_64-rpms --enable=openstack-16.2-cinderlib-for-rhel-8-x86_64-rpms --enable=rhceph-4-tools-for-rhel-8-x86_64-rpms --enable=rhel-8-for-x86_64-appstream-tus-rpms --enable=rhel-8-for-x86_64-baseos-tus-rpms",
"subscription-manager release --set=8.6",
"dnf module reset virt",
"dnf module list virt",
"dnf module enable virt:8.2",
"dnf module enable virt:8.3",
"dnf module enable virt:av",
"dnf module enable virt:rhel",
"dnf upgrade --nobest"
]
| https://docs.redhat.com/en/documentation/red_hat_virtualization/4.4/html/migrating_from_a_standalone_manager_to_a_self-hosted_engine/Installing_the_self-hosted_engine_deployment_host_migrating_to_SHE |
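After working through the repository and module steps above, it can help to confirm the host state before moving on to preparing storage for the Manager virtual machine. The following is a hedged verification sketch that uses only commands already referenced in this chapter; adjust the expected virt module stream (rhel, av, or a versioned stream) to the RHV version you enabled.

# Confirm the release lock
subscription-manager release --show

# Confirm the enabled repositories
subscription-manager repos --list-enabled

# Confirm which virt module stream is enabled (marked with [e])
dnf module list virt

# Confirm that no updates are pending (answers "no" automatically)
dnf upgrade --nobest --assumeno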
Part IV. Technology Previews | Part IV. Technology Previews This part provides a list of all Technology Previews available in Red Hat Enterprise Linux 7.6. For information on Red Hat scope of support for Technology Preview features, see https://access.redhat.com/support/offerings/techpreview/ . | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/7.6_release_notes/technology-previews |
function::task_execname | function::task_execname Name function::task_execname - The name of the task Synopsis Arguments task task_struct pointer Description Return the name of the given task. | [
"task_execname:string(task:long)"
]
| https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/systemtap_tapset_reference/api-task-execname |
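A brief usage sketch for task_execname: the probe point below is an illustrative choice rather than part of this reference page, and it requires the matching kernel debuginfo packages; task_current and pid are other documented tapset functions assumed to be available.

stap -e 'probe kernel.function("do_exit") {
    printf("%s (pid %d) exiting\n", task_execname(task_current()), pid())
}'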
C.2. Selection Criteria Operators | C.2. Selection Criteria Operators Table C.2, "Selection Criteria Grouping Operators" describes the selection criteria grouping operators. Table C.2. Selection Criteria Grouping Operators Grouping Operator Description ( ) Used for grouping statements [ ] Used to group strings into a string list (exact match) { } Used to group strings into a string list (subset match) Table C.3, "Selection Criteria Comparison Operators" describes the selection criteria comparison operators and the field types with which they can be used. Table C.3. Selection Criteria Comparison Operators Comparison Operator Description Field Type =~ Matching regular expression regex !~ Not matching regular expression. regex = Equal to number, size, percent, string, string list, time != Not equal to number, size, percent, string, string list, time >= Greater than or equal to number, size, percent, time > Greater than number, size, percent, time <= Less than or equal to number, size, percent, time < Less than number, size, percent, time since Since specified time (same as >=) time after After specified time (same as >) time until Until specified time (same as <=) time before Before specified time (same as <) time Table C.4, "Selection Criteria Logical and Grouping Operators" describes the selection criteria logical and grouping operators. Table C.4. Selection Criteria Logical and Grouping Operators Logical and Grouping Operator Description && All fields must match , All fields must match (same as &&) || At least one field must match # At least one field must match (same as ||) ! Logical negation ( Left parenthesis (grouping operator) ) Right parenthesis (grouping operator) [ List start (grouping operator) ] List end (grouping operator) { List subset start (grouping operator) } List subset end (grouping operator) | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/logical_volume_manager_administration/selection_operators |
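These operators are combined with report fields through the -S/--select option of the LVM reporting commands. Two hedged examples follow; the names, sizes, and tags are placeholders, not values taken from this appendix.

# Logical volumes larger than 100MB whose names start with "data"
lvs -S 'lv_size > 100m && lv_name =~ "^data"'

# Volume groups whose tag list contains the tag "backup" (subset match)
vgs -S 'vg_tags = { "backup" }'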
16.10. virt-win-reg: Reading and Editing the Windows Registry | 16.10. virt-win-reg: Reading and Editing the Windows Registry 16.10.1. Introduction virt-win-reg is a tool that manipulates the Registry in Windows guest virtual machines. It can be used to read out registry keys. You can also use it to make changes to the Registry, but you must never try to do this for live/running guest virtual machines, as it will result in disk corruption. | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/virtualization_administration_guide/sect-virt-win-reg |
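A short illustration of both modes, assuming a shut-down guest named WindowsGuest and a local changes.reg file; both names are placeholders.

# Read a single value from the guest's registry
virt-win-reg WindowsGuest 'HKLM\SOFTWARE\Microsoft\Windows NT\CurrentVersion' ProductName

# Export a whole key in .reg format
virt-win-reg WindowsGuest 'HKLM\SYSTEM\Select'

# Merge changes from a .reg file -- only ever with the guest powered off
virt-win-reg --merge WindowsGuest changes.reg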
Chapter 85. Disruptor Component | Chapter 85. Disruptor Component Available as of Camel version 2.12 The disruptor: component provides asynchronous SEDA behavior much as the standard SEDA Component, but utilizes a Disruptor instead of the BlockingQueue utilized by the standard SEDA. Alternatively, a disruptor-vm: endpoint is supported by this component, providing an alternative to the standard VM. As with the SEDA component, buffers of the disruptor: endpoints are only visible within a single CamelContext and no support is provided for persistence or recovery. The buffers of the disruptor-vm: endpoints also provide support for communication across CamelContext instances, so you can use this mechanism to communicate across web applications (provided that camel-disruptor.jar is on the system/boot classpath). The main advantage of choosing the Disruptor Component over the SEDA or the VM Component is performance in use cases where there is high contention between producer(s) and/or multicasted or concurrent consumers. In those cases, significant increases in throughput and reductions in latency have been observed. Performance in scenarios without contention is comparable to the SEDA and VM Components. The Disruptor is implemented with the intention of mimicking the behaviour and options of the SEDA and VM Components as much as possible. The main differences from them are the following: The buffer used is always bounded in size (default 1024 exchanges). As the buffer is always bounded, the default behaviour for the Disruptor is to block while the buffer is full instead of throwing an exception. This default behaviour may be configured on the component (see options). The Disruptor endpoints don't implement the BrowsableEndpoint interface. As such, the exchanges currently in the Disruptor can't be retrieved, only the number of exchanges. The Disruptor requires its consumers (multicasted or otherwise) to be statically configured. Adding or removing consumers on the fly requires complete flushing of all pending exchanges in the Disruptor. As a result of the reconfiguration: Data sent over a Disruptor is directly processed and 'gone' if there is at least one consumer; late joiners only get new exchanges published after they've joined. The pollTimeout option is not supported by the Disruptor Component. When a producer blocks on a full Disruptor, it does not respond to thread interrupts. Maven users will need to add the following dependency to their pom.xml for this component: <dependency> <groupId>org.apache.camel</groupId> <artifactId>camel-disruptor</artifactId> <version>x.x.x</version> <!-- use the same version as your Camel core version --> </dependency> 85.1. URI format disruptor:someName[?options] or disruptor-vm:someName[?options] Where someName can be any string that uniquely identifies the endpoint within the current CamelContext (or across contexts in the case of disruptor-vm: ). You can append query options to the URI in the following format: ?option=value&option=value&... 85.2. Options All the following options are valid for both the disruptor: and disruptor-vm: components. The Disruptor component supports 8 options, which are listed below.
Name Description Default Type defaultConcurrent Consumers (consumer) To configure the default number of concurrent consumers 1 int defaultMultiple Consumers (consumer) To configure the default value for multiple consumers false boolean defaultProducerType (producer) To configure the default value for DisruptorProducerType The default value is Multi. Multi DisruptorProducerType defaultWaitStrategy (consumer) To configure the default value for DisruptorWaitStrategy The default value is Blocking. Blocking DisruptorWaitStrategy defaultBlockWhenFull (producer) To configure the default value for block when full The default value is true. true boolean queueSize (common) Deprecated To configure the ring buffer size int bufferSize (common) To configure the ring buffer size 1024 int resolveProperty Placeholders (advanced) Whether the component should resolve property placeholders on itself when starting. Only properties which are of String type can use property placeholders. true boolean The Disruptor endpoint is configured using URI syntax: with the following path and query parameters: 85.2.1. Path Parameters (1 parameters): Name Description Default Type name Required Name of queue String 85.2.2. Query Parameters (12 parameters): Name Description Default Type size (common) The maximum capacity of the Disruptors ringbuffer Will be effectively increased to the nearest power of two. Notice: Mind if you use this option, then its the first endpoint being created with the queue name, that determines the size. To make sure all endpoints use same size, then configure the size option on all of them, or the first endpoint being created. 1024 int bridgeErrorHandler (consumer) Allows for bridging the consumer to the Camel routing Error Handler, which mean any exceptions occurred while the consumer is trying to pickup incoming messages, or the likes, will now be processed as a message and handled by the routing Error Handler. By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, that will be logged at WARN or ERROR level and ignored. false boolean concurrentConsumers (consumer) Number of concurrent threads processing exchanges. 1 int multipleConsumers (consumer) Specifies whether multiple consumers are allowed. If enabled, you can use Disruptor for Publish-Subscribe messaging. That is, you can send a message to the queue and have each consumer receive a copy of the message. When enabled, this option should be specified on every consumer endpoint. false boolean waitStrategy (consumer) Defines the strategy used by consumer threads to wait on new exchanges to be published. The options allowed are:Blocking, Sleeping, BusySpin and Yielding. Blocking DisruptorWaitStrategy exceptionHandler (consumer) To let the consumer use a custom ExceptionHandler. Notice if the option bridgeErrorHandler is enabled then this option is not in use. By default the consumer will deal with exceptions, that will be logged at WARN or ERROR level and ignored. ExceptionHandler exchangePattern (consumer) Sets the exchange pattern when the consumer creates an exchange. ExchangePattern blockWhenFull (producer) Whether a thread that sends messages to a full Disruptor will block until the ringbuffer's capacity is no longer exhausted. By default, the calling thread will block and wait until the message can be accepted. By disabling this option, an exception will be thrown stating that the queue is full. false boolean producerType (producer) Defines the producers allowed on the Disruptor. 
The options allowed are: Multi to allow multiple producers and Single to enable certain optimizations only allowed when one concurrent producer (on one thread or otherwise synchronized) is active. Multi DisruptorProducerType timeout (producer) Timeout (in milliseconds) before a producer will stop waiting for an asynchronous task to complete. You can disable timeout by using 0 or a negative value. 30000 long waitForTaskToComplete (producer) Option to specify whether the caller should wait for the async task to complete or not before continuing. The following three options are supported: Always, Never or IfReplyExpected. The first two values are self-explanatory. The last value, IfReplyExpected, will only wait if the message is Request Reply based. IfReplyExpected WaitForTaskToComplete synchronous (advanced) Sets whether synchronous processing should be strictly used, or Camel is allowed to use asynchronous processing (if supported). false boolean 85.3. Spring Boot Auto-Configuration The component supports 18 options, which are listed below. Name Description Default Type camel.component.disruptor-vm.buffer-size To configure the ring buffer size 1024 Integer camel.component.disruptor-vm.default-block-when-full To configure the default value for block when full The default value is true. true Boolean camel.component.disruptor-vm.default-concurrent-consumers To configure the default number of concurrent consumers 1 Integer camel.component.disruptor-vm.default-multiple-consumers To configure the default value for multiple consumers false Boolean camel.component.disruptor-vm.default-producer-type To configure the default value for DisruptorProducerType The default value is Multi. DisruptorProducerType camel.component.disruptor-vm.default-wait-strategy To configure the default value for DisruptorWaitStrategy The default value is Blocking. DisruptorWaitStrategy camel.component.disruptor-vm.enabled Enable disruptor-vm component true Boolean camel.component.disruptor-vm.resolve-property-placeholders Whether the component should resolve property placeholders on itself when starting. Only properties which are of String type can use property placeholders. true Boolean camel.component.disruptor.buffer-size To configure the ring buffer size 1024 Integer camel.component.disruptor.default-block-when-full To configure the default value for block when full The default value is true. true Boolean camel.component.disruptor.default-concurrent-consumers To configure the default number of concurrent consumers 1 Integer camel.component.disruptor.default-multiple-consumers To configure the default value for multiple consumers false Boolean camel.component.disruptor.default-producer-type To configure the default value for DisruptorProducerType The default value is Multi. DisruptorProducerType camel.component.disruptor.default-wait-strategy To configure the default value for DisruptorWaitStrategy The default value is Blocking. DisruptorWaitStrategy camel.component.disruptor.enabled Enable disruptor component true Boolean camel.component.disruptor.resolve-property-placeholders Whether the component should resolve property placeholders on itself when starting. Only properties which are of String type can use property placeholders. true Boolean camel.component.disruptor-vm.queue-size To configure the ring buffer size Integer camel.component.disruptor.queue-size To configure the ring buffer size Integer 85.4. 
Wait strategies The wait strategy affects the type of waiting performed by the consumer threads that are currently waiting for the exchange to be published. The following strategies can be chosen: Name Description Advice Blocking Blocking strategy that uses a lock and condition variable for Consumers waiting on a barrier. This strategy can be used when throughput and low-latency are not as important as CPU resource. Sleeping Sleeping strategy that initially spins, then uses a Thread.yield(), and eventually sleeps for the minimum number of nanos the OS and JVM will allow while the Consumers are waiting on a barrier. This strategy is a good compromise between performance and CPU resource. Latency spikes can occur after quiet periods. BusySpin Busy Spin strategy that uses a busy spin loop for Consumers waiting on a barrier. This strategy will use CPU resource to avoid syscalls which can introduce latency jitter. It is best used when threads can be bound to specific CPU cores. Yielding Yielding strategy that uses a Thread.yield() for Consumers waiting on a barrier after initially spinning. This strategy is a good compromise between performance and CPU resource without incurring significant latency spikes. 85.5. Use of Request Reply The Disruptor component supports using Request Reply, where the caller will wait for the async route to complete. For instance: from("mina:tcp://0.0.0.0:9876?textline=true&sync=true").to("disruptor:input"); from("disruptor:input").to("bean:processInput").to("bean:createResponse"); In the route above, we have a TCP listener on port 9876 that accepts incoming requests. The request is routed to the disruptor:input buffer. As it is a Request Reply message, we wait for the response. When the consumer on the disruptor:input buffer is complete, it copies the response to the original message response. 85.6. Concurrent consumers By default, the Disruptor endpoint uses a single consumer thread, but you can configure it to use concurrent consumer threads. So instead of thread pools you can use: from("disruptor:stageName?concurrentConsumers=5").process(...) As for the difference between the two, note that a thread pool can grow and shrink dynamically at runtime depending on load, whereas the number of concurrent consumers is always fixed and supported by the Disruptor internally, so performance will be higher. 85.7. Thread pools Be aware that adding a thread pool to a Disruptor endpoint by doing something like: from("disruptor:stageName").thread(5).process(...) can wind up adding a normal BlockingQueue to be used in conjunction with the Disruptor, effectively negating part of the performance gains achieved by using the Disruptor. Instead, it is advised to directly configure the number of threads that process messages on a Disruptor endpoint using the concurrentConsumers option. 85.8. Sample In the route below we use the Disruptor to send the request to this async queue to be able to send a fire-and-forget message for further processing in another thread, and return a constant reply in this thread to the original caller. public void configure() throws Exception { from("direct:start") // send it to the disruptor that is async .to("disruptor:next") // return a constant response .transform(constant("OK")); from("disruptor:next").to("mock:result"); } Here we send a Hello World message and expect the reply to be OK.
Object out = template.requestBody("direct:start", "Hello World"); assertEquals("OK", out); The "Hello World" message will be consumed from the Disruptor from another thread for further processing. Since this is from a unit test, it will be sent to a mock endpoint where we can do assertions in the unit test. 85.9. Using multipleConsumers In this example we have defined two consumers and registered them as spring beans. <!-- define the consumers as spring beans --> <bean id="consumer1" class="org.apache.camel.spring.example.FooEventConsumer"/> <bean id="consumer2" class="org.apache.camel.spring.example.AnotherFooEventConsumer"/> <camelContext xmlns="http://camel.apache.org/schema/spring"> <!-- define a shared endpoint which the consumers can refer to instead of using url --> <endpoint id="foo" uri="disruptor:foo?multipleConsumers=true"/> </camelContext> Since we have specified multipleConsumers=true on the Disruptor foo endpoint we can have those two or more consumers receive their own copy of the message as a kind of pub-sub style messaging. As the beans are part of an unit test they simply send the message to a mock endpoint, but notice how we can use @Consume to consume from the Disruptor. public class FooEventConsumer { @EndpointInject(uri = "mock:result") private ProducerTemplate destination; @Consume(ref = "foo") public void doSomething(String body) { destination.sendBody("foo" + body); } } 85.10. Extracting disruptor information If needed, information such as buffer size, etc. can be obtained without using JMX in this fashion: DisruptorEndpoint disruptor = context.getEndpoint("disruptor:xxxx"); int size = disruptor.getBufferSize(); | [
"<dependency> <groupId>org.apache.camel</groupId> <artifactId>camel-disruptor</artifactId> <version>x.x.x</version> <!-- use the same version as your Camel core version --> </dependency>",
"disruptor:someName[?options]",
"disruptor-vm:someName[?options]",
"?option=value&option=value&...",
"disruptor:name",
"from(\"mina:tcp://0.0.0.0:9876?textline=true&sync=true\").to(\"disruptor:input\"); from(\"disruptor:input\").to(\"bean:processInput\").to(\"bean:createResponse\");",
"from(\"disruptor:stageName?concurrentConsumers=5\").process(...)",
"from(\"disruptor:stageName\").thread(5).process(...)",
"public void configure() throws Exception { from(\"direct:start\") // send it to the disruptor that is async .to(\"disruptor:next\") // return a constant response .transform(constant(\"OK\")); from(\"disruptor:next\").to(\"mock:result\"); }",
"Object out = template.requestBody(\"direct:start\", \"Hello World\"); assertEquals(\"OK\", out);",
"<!-- define the consumers as spring beans --> <bean id=\"consumer1\" class=\"org.apache.camel.spring.example.FooEventConsumer\"/> <bean id=\"consumer2\" class=\"org.apache.camel.spring.example.AnotherFooEventConsumer\"/> <camelContext xmlns=\"http://camel.apache.org/schema/spring\"> <!-- define a shared endpoint which the consumers can refer to instead of using url --> <endpoint id=\"foo\" uri=\"disruptor:foo?multipleConsumers=true\"/> </camelContext>",
"public class FooEventConsumer { @EndpointInject(uri = \"mock:result\") private ProducerTemplate destination; @Consume(ref = \"foo\") public void doSomething(String body) { destination.sendBody(\"foo\" + body); } }",
"DisruptorEndpoint disruptor = context.getEndpoint(\"disruptor:xxxx\"); int size = disruptor.getBufferSize();"
]
| https://docs.redhat.com/en/documentation/red_hat_fuse/7.13/html/apache_camel_component_reference/disruptor-component |
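As a further illustration of the options described in this chapter, the sketch below combines a bounded buffer, blocking producers, several concurrent consumers, and a non-default wait strategy. The endpoint and bean names (incomingOrders, orders, orderService) are assumptions for the example, not part of the component documentation.

public void configure() throws Exception {
    from("direct:incomingOrders")
        // bounded ring buffer of 2048 slots; producers block while it is full
        .to("disruptor:orders?size=2048&blockWhenFull=true");

    from("disruptor:orders?concurrentConsumers=4&waitStrategy=Sleeping")
        .to("bean:orderService?method=process");
}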
Installing on IBM Cloud Bare Metal (Classic) | Installing on IBM Cloud Bare Metal (Classic) OpenShift Container Platform 4.12 Installing OpenShift Container Platform on IBM Cloud Bare Metal (Classic) Red Hat OpenShift Documentation Team | [
"<cluster_name>.<domain>",
"test-cluster.example.com",
"ipmi://<IP>:<port>?privilegelevel=OPERATOR",
"ibmcloud sl hardware create --hostname <SERVERNAME> --domain <DOMAIN> --size <SIZE> --os <OS-TYPE> --datacenter <DC-NAME> --port-speed <SPEED> --billing <BILLING>",
"useradd kni",
"passwd kni",
"echo \"kni ALL=(root) NOPASSWD:ALL\" | tee -a /etc/sudoers.d/kni",
"chmod 0440 /etc/sudoers.d/kni",
"su - kni -c \"ssh-keygen -f /home/kni/.ssh/id_rsa -N ''\"",
"su - kni",
"sudo subscription-manager register --username=<user> --password=<pass> --auto-attach",
"sudo subscription-manager repos --enable=rhel-8-for-x86_64-appstream-rpms --enable=rhel-8-for-x86_64-baseos-rpms",
"sudo dnf install -y libvirt qemu-kvm mkisofs python3-devel jq ipmitool",
"sudo usermod --append --groups libvirt kni",
"sudo systemctl start firewalld",
"sudo systemctl enable firewalld",
"sudo firewall-cmd --zone=public --add-service=http --permanent",
"sudo firewall-cmd --reload",
"sudo systemctl enable libvirtd --now",
"PRVN_HOST_ID=<ID>",
"ibmcloud sl hardware list",
"PUBLICSUBNETID=<ID>",
"ibmcloud sl subnet list",
"PRIVSUBNETID=<ID>",
"ibmcloud sl subnet list",
"PRVN_PUB_IP=USD(ibmcloud sl hardware detail USDPRVN_HOST_ID --output JSON | jq .primaryIpAddress -r)",
"PUBLICCIDR=USD(ibmcloud sl subnet detail USDPUBLICSUBNETID --output JSON | jq .cidr)",
"PUB_IP_CIDR=USDPRVN_PUB_IP/USDPUBLICCIDR",
"PUB_GATEWAY=USD(ibmcloud sl subnet detail USDPUBLICSUBNETID --output JSON | jq .gateway -r)",
"PRVN_PRIV_IP=USD(ibmcloud sl hardware detail USDPRVN_HOST_ID --output JSON | jq .primaryBackendIpAddress -r)",
"PRIVCIDR=USD(ibmcloud sl subnet detail USDPRIVSUBNETID --output JSON | jq .cidr)",
"PRIV_IP_CIDR=USDPRVN_PRIV_IP/USDPRIVCIDR",
"PRIV_GATEWAY=USD(ibmcloud sl subnet detail USDPRIVSUBNETID --output JSON | jq .gateway -r)",
"sudo nohup bash -c \" nmcli --get-values UUID con show | xargs -n 1 nmcli con delete nmcli connection add ifname provisioning type bridge con-name provisioning nmcli con add type bridge-slave ifname eth1 master provisioning nmcli connection add ifname baremetal type bridge con-name baremetal nmcli con add type bridge-slave ifname eth2 master baremetal nmcli connection modify baremetal ipv4.addresses USDPUB_IP_CIDR ipv4.method manual ipv4.gateway USDPUB_GATEWAY nmcli connection modify provisioning ipv4.addresses 172.22.0.1/24,USDPRIV_IP_CIDR ipv4.method manual nmcli connection modify provisioning +ipv4.routes \\\"10.0.0.0/8 USDPRIV_GATEWAY\\\" nmcli con down baremetal nmcli con up baremetal nmcli con down provisioning nmcli con up provisioning init 6 \"",
"ssh kni@provisioner.<cluster-name>.<domain>",
"sudo nmcli con show",
"NAME UUID TYPE DEVICE baremetal 4d5133a5-8351-4bb9-bfd4-3af264801530 bridge baremetal provisioning 43942805-017f-4d7d-a2c2-7cb3324482ed bridge provisioning virbr0 d9bca40f-eee1-410b-8879-a2d4bb0465e7 bridge virbr0 bridge-slave-eth1 76a8ed50-c7e5-4999-b4f6-6d9014dd0812 ethernet eth1 bridge-slave-eth2 f31c3353-54b7-48de-893a-02d2b34c4736 ethernet eth2",
"vim pull-secret.txt",
"sudo dnf install dnsmasq",
"sudo vi /etc/dnsmasq.conf",
"interface=baremetal except-interface=lo bind-dynamic log-dhcp dhcp-range=<ip_addr>,<ip_addr>,<pub_cidr> 1 dhcp-option=baremetal,121,0.0.0.0/0,<pub_gateway>,<prvn_priv_ip>,<prvn_pub_ip> 2 dhcp-hostsfile=/var/lib/dnsmasq/dnsmasq.hostsfile",
"ibmcloud sl subnet detail <publicsubnetid> --output JSON | jq .cidr",
"ibmcloud sl subnet detail <publicsubnetid> --output JSON | jq .gateway -r",
"ibmcloud sl hardware detail <id> --output JSON | jq .primaryBackendIpAddress -r",
"ibmcloud sl hardware detail <id> --output JSON | jq .primaryIpAddress -r",
"ibmcloud sl hardware list",
"ibmcloud sl hardware detail <id> --output JSON | jq '.networkComponents[] | \"\\(.primaryIpAddress) \\(.macAddress)\"' | grep -v null",
"\"10.196.130.144 00:e0:ed:6a:ca:b4\" \"141.125.65.215 00:e0:ed:6a:ca:b5\"",
"sudo vim /var/lib/dnsmasq/dnsmasq.hostsfile",
"00:e0:ed:6a:ca:b5,141.125.65.215,master-0 <mac>,<ip>,master-1 <mac>,<ip>,master-2 <mac>,<ip>,worker-0 <mac>,<ip>,worker-1",
"sudo systemctl start dnsmasq",
"sudo systemctl enable dnsmasq",
"sudo systemctl status dnsmasq",
"● dnsmasq.service - DNS caching server. Loaded: loaded (/usr/lib/systemd/system/dnsmasq.service; enabled; vendor preset: disabled) Active: active (running) since Tue 2021-10-05 05:04:14 CDT; 49s ago Main PID: 3101 (dnsmasq) Tasks: 1 (limit: 204038) Memory: 732.0K CGroup: /system.slice/dnsmasq.service └─3101 /usr/sbin/dnsmasq -k",
"sudo firewall-cmd --add-port 53/udp --permanent",
"sudo firewall-cmd --add-port 67/udp --permanent",
"sudo firewall-cmd --change-zone=provisioning --zone=external --permanent",
"sudo firewall-cmd --reload",
"export VERSION=stable-4.12",
"export RELEASE_ARCH=<architecture>",
"export RELEASE_IMAGE=USD(curl -s https://mirror.openshift.com/pub/openshift-v4/USDRELEASE_ARCH/clients/ocp/USDVERSION/release.txt | grep 'Pull From: quay.io' | awk -F ' ' '{print USD3}')",
"export cmd=openshift-baremetal-install",
"export pullsecret_file=~/pull-secret.txt",
"export extract_dir=USD(pwd)",
"curl -s https://mirror.openshift.com/pub/openshift-v4/clients/ocp/USDVERSION/openshift-client-linux.tar.gz | tar zxvf - oc",
"sudo cp oc /usr/local/bin",
"oc adm release extract --registry-config \"USD{pullsecret_file}\" --command=USDcmd --to \"USD{extract_dir}\" USD{RELEASE_IMAGE}",
"sudo cp openshift-baremetal-install /usr/local/bin",
"apiVersion: v1 baseDomain: <domain> metadata: name: <cluster_name> networking: machineNetwork: - cidr: <public-cidr> networkType: OVNKubernetes compute: - name: worker replicas: 2 controlPlane: name: master replicas: 3 platform: baremetal: {} platform: baremetal: apiVIP: <api_ip> ingressVIP: <wildcard_ip> provisioningNetworkInterface: <NIC1> provisioningNetworkCIDR: <CIDR> hosts: - name: openshift-master-0 role: master bmc: address: ipmi://10.196.130.145?privilegelevel=OPERATOR 1 username: root password: <password> bootMACAddress: 00:e0:ed:6a:ca:b4 2 rootDeviceHints: deviceName: \"/dev/sda\" - name: openshift-worker-0 role: worker bmc: address: ipmi://<out-of-band-ip>?privilegelevel=OPERATOR 3 username: <user> password: <password> bootMACAddress: <NIC1_mac_address> 4 rootDeviceHints: deviceName: \"/dev/sda\" pullSecret: '<pull_secret>' sshKey: '<ssh_pub_key>'",
"ibmcloud sl hardware detail <id> --output JSON | jq '\"(.networkManagementIpAddress) (.remoteManagementAccounts[0].password)\"'",
"mkdir ~/clusterconfigs",
"cp install-config.yaml ~/clusterconfigs",
"ipmitool -I lanplus -U <user> -P <password> -H <management_server_ip> power off",
"for i in USD(sudo virsh list | tail -n +3 | grep bootstrap | awk {'print USD2'}); do sudo virsh destroy USDi; sudo virsh undefine USDi; sudo virsh vol-delete USDi --pool USDi; sudo virsh vol-delete USDi.ign --pool USDi; sudo virsh pool-destroy USDi; sudo virsh pool-undefine USDi; done",
"metadata: name:",
"networking: machineNetwork: - cidr:",
"compute: - name: worker",
"compute: replicas: 2",
"controlPlane: name: master",
"controlPlane: replicas: 3",
"- name: master-0 role: master bmc: address: ipmi://10.10.0.3:6203 username: admin password: redhat bootMACAddress: de:ad:be:ef:00:40 rootDeviceHints: deviceName: \"/dev/sda\"",
"./openshift-baremetal-install --dir ~/clusterconfigs create manifests",
"INFO Consuming Install Config from target directory WARNING Making control-plane schedulable by setting MastersSchedulable to true for Scheduler cluster settings WARNING Discarding the OpenShift Manifest that was provided in the target directory because its dependencies are dirty and it needs to be regenerated",
"./openshift-baremetal-install --dir ~/clusterconfigs --log-level debug create cluster",
"tail -f /path/to/install-dir/.openshift_install.log"
]
| https://docs.redhat.com/en/documentation/openshift_container_platform/4.12/html-single/installing_on_ibm_cloud_bare_metal_classic/index |
Preface | Preface Making open source more inclusive Red Hat is committed to replacing problematic language in our code, documentation, and web properties. We are beginning with these four terms: master, slave, blacklist, and whitelist. Because of the enormity of this endeavor, these changes will be implemented gradually over several upcoming releases. For more details, see our CTO Chris Wright's message . | null | https://docs.redhat.com/en/documentation/red_hat_build_of_apache_camel/4.0/html/getting_started_with_red_hat_build_of_apache_camel_for_quarkus/pr01 |
Part V. Deprecated Functionality | Part V. Deprecated Functionality This part provides an overview of functionality that has been deprecated in all minor releases up to Red Hat Enterprise Linux 7.4. Deprecated functionality continues to be supported until the end of life of Red Hat Enterprise Linux 7. Deprecated functionality will likely not be supported in future major releases of this product and is not recommended for new deployments. For the most recent list of deprecated functionality within a particular major release, refer to the latest version of release documentation. Deprecated hardware components are not recommended for new deployments on the current or future major releases. Hardware driver updates are limited to security and critical fixes only. Red Hat recommends replacing this hardware as soon as reasonably feasible. A package can be deprecated and not recommended for further use. Under certain circumstances, a package can be removed from a product. Product documentation then identifies more recent packages that offer functionality similar, identical, or more advanced to the one deprecated, and provides further recommendations. | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/7.4_release_notes/part-Red_Hat_Enterprise_Linux-7.4_Release_Notes-Deprecated_Functionality |
Chapter 9. Error handling | Chapter 9. Error handling Errors in Red Hat build of Rhea can be handled by intercepting named events corresponding to AMQP protocol or connection errors. 9.1. Handling connection and protocol errors You can handle protocol-level errors by intercepting the following events: connection_error session_error sender_error receiver_error protocol_error error These events are fired whenever there is an error condition with the specific object that is in the event. After calling the error handler, the corresponding <object> _close handler is also called. The event argument has an error attribute for accessing the error object. Example: Handling errors container.on("error", function (event) { console.log("An error!", event.error); }); Note Because the close handlers are called in the event of any error, only the error itself needs to be handled within the error handler. Resource cleanup can be managed by close handlers. If there is no error handling that is specific to a particular object, it is typical to handle the general error event and not have a more specific handler. Note When reconnect is enabled and the remote server closes a connection with the amqp:connection:forced condition, the client does not treat it as an error and thus does not fire the connection_error event. The client instead begins the reconnection process. | [
"container.on(\"error\", function (event) { console.log(\"An error!\", event.error); });"
]
| https://docs.redhat.com/en/documentation/red_hat_build_of_rhea/3.0/html/using_rhea/error_handling |
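Building on the example above, the following hedged sketch handles a more specific error event and performs cleanup in the matching close handler, which, as noted, is always called after the error handler; the log messages are illustrative only.

container.on("sender_error", function (event) {
    console.log("Sender error:", event.error);
});
container.on("sender_close", function (event) {
    // Called after sender_error; release any resources tied to this sender here.
});
container.on("connection_error", function (event) {
    console.log("Connection error:", event.error);
});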
Chapter 23. Cron | Chapter 23. Cron Only consumer is supported The Cron component is a generic interface component that allows triggering events at specific time interval specified using the Unix cron syntax (e.g. 0/2 * * * * ? to trigger an event every two seconds). Being an interface component, the Cron component does not contain a default implementation, instead it requires that the users plug the implementation of their choice. The following standard Camel components support the Cron endpoints: Camel-quartz Camel-spring The Cron component is also supported in Camel K , which can use the Kubernetes scheduler to trigger the routes when required by the cron expression. Camel K does not require additional libraries to be plugged when using cron expressions compatible with Kubernetes cron syntax. 23.1. Dependencies When using cron with Red Hat build of Camel Spring Boot make sure to use the following Maven dependency to have support for auto configuration: <dependency> <groupId>org.apache.camel.springboot</groupId> <artifactId>camel-cron-starter</artifactId> </dependency> Additional libraries may be needed in order to plug a specific implementation. 23.2. Configuring Options Camel components are configured on two levels: Component level Endpoint level 23.2.1. Component Level Options The component level is the highest level. The configurations you define at this level are inherited by all the endpoints. For example, a component can have security settings, credentials for authentication, urls for network connection, and so on. Since components typically have pre-configured defaults for the most common cases, you may need to only configure a few component options, or maybe none at all. You can configure components with Component DSL in a configuration file (application.properties|yaml), or directly with Java code. 23.2.2. Endpoint Level Options At the Endpoint level you have many options, which you can use to configure what you want the endpoint to do. The options are categorized according to whether the endpoint is used as a consumer (from) or as a producer (to) or used for both. You can configure endpoints directly in the endpoint URI as path and query parameters. You can also use Endpoint DSL and DataFormat DSL as type safe ways of configuring endpoints and data formats in Java. When configuring options, use Property Placeholders for urls, port numbers, sensitive information, and other settings. Placeholders allows you to externalize the configuration from your code, giving you more flexible and reusable code. 23.3. Component Options The Cron component supports 3 options, which are listed below. Name Description Default Type bridgeErrorHandler (consumer) Allows for bridging the consumer to the Camel routing Error Handler, which mean any exceptions occurred while the consumer is trying to pickup incoming messages, or the likes, will now be processed as a message and handled by the routing Error Handler. By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, that will be logged at WARN or ERROR level and ignored. false boolean autowiredEnabled (advanced) Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc. 
true boolean cronService (advanced) The id of the CamelCronService to use when multiple implementations are provided. String 23.4. Endpoint Options The Cron endpoint is configured using URI syntax: with the following path and query parameters: 23.4.1. Path Parameters (1 parameter) Name Description Default Type name (consumer) Required The name of the cron trigger. String 23.4.2. Query Parameters (4 parameters) Name Description Default Type bridgeErrorHandler (consumer) Allows for bridging the consumer to the Camel routing Error Handler, which mean any exceptions occurred while the consumer is trying to pickup incoming messages, or the likes, will now be processed as a message and handled by the routing Error Handler. By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, that will be logged at WARN or ERROR level and ignored. false boolean schedule (consumer) Required A cron expression that will be used to generate events. String exceptionHandler (consumer (advanced)) To let the consumer use a custom ExceptionHandler. Notice if the option bridgeErrorHandler is enabled then this option is not in use. By default the consumer will deal with exceptions, that will be logged at WARN or ERROR level and ignored. ExceptionHandler exchangePattern (consumer (advanced)) Sets the exchange pattern when the consumer creates an exchange. Enum values: InOnly InOut InOptionalOut ExchangePattern 23.5. Usage The component can be used to trigger events at specified times, as in the following example: from("cron:tab?schedule=0/1+*+*+*+*+?") .setBody().constant("event") .log("USD{body}"); The schedule expression 0/3+10+*+*+*+? can also be written as 0/3 10 * * * ? and triggers an event every three seconds only in the tenth minute of each hour. Parts in the schedule expression mean (in order): Seconds (optional) Minutes Hours Day of month Month Day of week Year (optional) Schedule expressions can be made of 5 to 7 parts. When expressions are composed of 6 parts, the first item is the "seconds" part (and year is considered missing). Other valid examples of schedule expressions are: 0/2 * * * ? (5 parts, an event every two minutes) 0 0/2 * * * MON-FRI 2030 (7 parts, an event every two minutes only in year 2030) Routes can also be written using the XML DSL. <route> <from uri="cron:tab?schedule=0/1+*+*+*+*+?"/> <setBody> <constant>event</constant> </setBody> <to uri="log:info"/> </route> 23.6. Spring Boot Auto-Configuration The component supports 4 options, which are listed below. Name Description Default Type camel.component.cron.autowired-enabled Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc. true Boolean camel.component.cron.bridge-error-handler Allows for bridging the consumer to the Camel routing Error Handler, which mean any exceptions occurred while the consumer is trying to pickup incoming messages, or the likes, will now be processed as a message and handled by the routing Error Handler. By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, that will be logged at WARN or ERROR level and ignored.
false Boolean camel.component.cron.cron-service The id of the CamelCronService to use when multiple implementations are provided. String camel.component.cron.enabled Whether to enable auto configuration of the cron component. This is enabled by default. Boolean | [
"<dependency> <groupId>org.apache.camel.springboot</groupId> <artifactId>camel-cron-starter</artifactId> </dependency>",
"cron:name",
"from(\"cron:tab?schedule=0/1+*+*+*+*+?\") .setBody().constant(\"event\") .log(\"USD{body}\");",
"<route> <from uri=\"cron:tab?schedule=0/1+*+*+*+*+?\"/> <setBody> <constant>event</constant> </setBody> <to uri=\"log:info\"/> </route>"
]
| https://docs.redhat.com/en/documentation/red_hat_build_of_apache_camel/4.4/html/red_hat_build_of_apache_camel_for_spring_boot_reference/csb-camel-cron-component-starter |
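One more hedged example of the schedule syntax in a route: the six-part expression below fires at 09:00 on weekdays (so its first field is the seconds part); the endpoint name and log text are illustrative only.

from("cron:weekdayReport?schedule=0+0+9+?+*+MON-FRI")
    .setBody().constant("event")
    .log("weekday 09:00 trigger fired");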
Chapter 1. OpenShift Container Platform storage overview | Chapter 1. OpenShift Container Platform storage overview OpenShift Container Platform supports multiple types of storage, both for on-premise and cloud providers. You can manage container storage for persistent and non-persistent data in an OpenShift Container Platform cluster. 1.1. Glossary of common terms for OpenShift Container Platform storage This glossary defines common terms that are used in the storage content. Access modes Volume access modes describe volume capabilities. You can use access modes to match persistent volume claim (PVC) and persistent volume (PV). The following are the examples of access modes: ReadWriteOnce (RWO) ReadOnlyMany (ROX) ReadWriteMany (RWX) ReadWriteOncePod (RWOP) Cinder The Block Storage service for Red Hat OpenStack Platform (RHOSP) which manages the administration, security, and scheduling of all volumes. Config map A config map provides a way to inject configuration data into pods. You can reference the data stored in a config map in a volume of type ConfigMap . Applications running in a pod can use this data. Container Storage Interface (CSI) An API specification for the management of container storage across different container orchestration (CO) systems. Dynamic Provisioning The framework allows you to create storage volumes on-demand, eliminating the need for cluster administrators to pre-provision persistent storage. Ephemeral storage Pods and containers can require temporary or transient local storage for their operation. The lifetime of this ephemeral storage does not extend beyond the life of the individual pod, and this ephemeral storage cannot be shared across pods. Fiber channel A networking technology that is used to transfer data among data centers, computer servers, switches and storage. FlexVolume FlexVolume is an out-of-tree plugin interface that uses an exec-based model to interface with storage drivers. You must install the FlexVolume driver binaries in a pre-defined volume plugin path on each node and in some cases the control plane nodes. fsGroup The fsGroup defines a file system group ID of a pod. iSCSI Internet Small Computer Systems Interface (iSCSI) is an Internet Protocol-based storage networking standard for linking data storage facilities. An iSCSI volume allows an existing iSCSI (SCSI over IP) volume to be mounted into your Pod. hostPath A hostPath volume in an OpenShift Container Platform cluster mounts a file or directory from the host node's filesystem into your pod. KMS key The Key Management Service (KMS) helps you achieve the required level of encryption of your data across different services. you can use the KMS key to encrypt, decrypt, and re-encrypt data. Local volumes A local volume represents a mounted local storage device such as a disk, partition or directory. NFS A Network File System (NFS) that allows remote hosts to mount file systems over a network and interact with those file systems as though they are mounted locally. This enables system administrators to consolidate resources onto centralized servers on the network. OpenShift Data Foundation A provider of agnostic persistent storage for OpenShift Container Platform supporting file, block, and object storage, either in-house or in hybrid clouds Persistent storage Pods and containers can require permanent storage for their operation. OpenShift Container Platform uses the Kubernetes persistent volume (PV) framework to allow cluster administrators to provision persistent storage for a cluster. 
Developers can use PVC to request PV resources without having specific knowledge of the underlying storage infrastructure. Persistent volumes (PV) OpenShift Container Platform uses the Kubernetes persistent volume (PV) framework to allow cluster administrators to provision persistent storage for a cluster. Developers can use PVC to request PV resources without having specific knowledge of the underlying storage infrastructure. Persistent volume claims (PVCs) You can use a PVC to mount a PersistentVolume into a Pod. You can access the storage without knowing the details of the cloud environment. Pod One or more containers with shared resources, such as volume and IP addresses, running in your OpenShift Container Platform cluster. A pod is the smallest compute unit defined, deployed, and managed. Reclaim policy A policy that tells the cluster what to do with the volume after it is released. A volume's reclaim policy can be Retain , Recycle , or Delete . Role-based access control (RBAC) Role-based access control (RBAC) is a method of regulating access to computer or network resources based on the roles of individual users within your organization. Stateless applications A stateless application is an application program that does not save client data generated in one session for use in the session with that client. Stateful applications A stateful application is an application program that saves data to persistent disk storage. A server, client, and applications can use a persistent disk storage. You can use the Statefulset object in OpenShift Container Platform to manage the deployment and scaling of a set of Pods, and provides guarantee about the ordering and uniqueness of these Pods. Static provisioning A cluster administrator creates a number of PVs. PVs contain the details of storage. PVs exist in the Kubernetes API and are available for consumption. Storage OpenShift Container Platform supports many types of storage, both for on-premise and cloud providers. You can manage container storage for persistent and non-persistent data in an OpenShift Container Platform cluster. Storage class A storage class provides a way for administrators to describe the classes of storage they offer. Different classes might map to quality of service levels, backup policies, arbitrary policies determined by the cluster administrators. VMware vSphere's Virtual Machine Disk (VMDK) volumes Virtual Machine Disk (VMDK) is a file format that describes containers for virtual hard disk drives that is used in virtual machines. 1.2. Storage types OpenShift Container Platform storage is broadly classified into two categories, namely ephemeral storage and persistent storage. 1.2.1. Ephemeral storage Pods and containers are ephemeral or transient in nature and designed for stateless applications. Ephemeral storage allows administrators and developers to better manage the local storage for some of their operations. For more information about ephemeral storage overview, types, and management, see Understanding ephemeral storage . 1.2.2. Persistent storage Stateful applications deployed in containers require persistent storage. OpenShift Container Platform uses a pre-provisioned storage framework called persistent volumes (PV) to allow cluster administrators to provision persistent storage. The data inside these volumes can exist beyond the lifecycle of an individual pod. Developers can use persistent volume claims (PVCs) to request storage requirements. 
For more information about persistent storage overview, configuration, and lifecycle, see Understanding persistent storage . 1.3. Container Storage Interface (CSI) CSI is an API specification for the management of container storage across different container orchestration (CO) systems. You can manage the storage volumes within the container native environments, without having specific knowledge of the underlying storage infrastructure. With the CSI, storage works uniformly across different container orchestration systems, regardless of the storage vendors you are using. For more information about CSI, see Using Container Storage Interface (CSI) . 1.4. Dynamic Provisioning Dynamic Provisioning allows you to create storage volumes on-demand, eliminating the need for cluster administrators to pre-provision storage. For more information about dynamic provisioning, see Dynamic provisioning . | null | https://docs.redhat.com/en/documentation/openshift_container_platform/4.16/html/storage/storage-overview |
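To make the persistent volume, claim, and dynamic provisioning concepts above concrete, the following is a minimal persistent volume claim sketch; the claim name, requested size, and storage class name are assumptions and must be adapted to a class that actually exists in your cluster.

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: app-data
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 10Gi
  storageClassName: standard-csi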
Chapter 42. Kernel | Chapter 42. Kernel Heterogeneous memory management included as a Technology Preview Red Hat Enterprise Linux 7.3 offers the heterogeneous memory management (HMM) feature as a Technology Preview. This feature has been added to the kernel as a helper layer for devices that want to mirror a process address space into their own memory management unit (MMU). Thus a non-CPU device processor is able to read system memory using the unified system address space. To enable this feature, add experimental_hmm=enable to the kernel command line. (BZ#1230959) User namespace This feature provides additional security to servers running Linux containers by providing better isolation between the host and the containers. Administrators of a container are no longer able to perform administrative operations on the host, which increases security. (BZ#1138782) libocrdma RoCE support on Oce141xx cards As a Technology Preview, the ocrdma module and the libocrdma package support the Remote Direct Memory Access over Converged Ethernet (RoCE) functionality on all network adapters in the Oce141xx family. (BZ#1334675) No-IOMMU mode for VFIO drivers As a Technology Preview, this update adds No-IOMMU mode for virtual function I/O (VFIO) drivers. The No-IOMMU mode provides the user with full user-space I/O (UIO) access to a direct memory access (DMA)-capable device without an I/O memory management unit (IOMMU). Note that in addition to not being supported, using this mode is not secure due to the lack of I/O management provided by IOMMU. (BZ#1299662) criu rebased to version 2.3 Red Hat Enterprise Linux 7.2 introduced the criu tool as a Technology Preview. This tool implements Checkpoint/Restore in User-space (CRIU), which can be used to freeze a running application and store it as a collection of files. Later, the application can be restored from its frozen state. Note that the criu tool depends on Protocol Buffers, a language-neutral, platform-neutral extensible mechanism for serializing structured data. The protobuf and protobuf-c packages, which provide this dependency, were also introduced in Red Hat Enterprise Linux 7.2 as a Technology Preview. With Red Hat Enterprise Linux 7.3, the criu packages have been upgraded to upstream version 2.3, which provides a number of bug fixes and enhancements over the previous version. Notably, criu is now also available on Red Hat Enterprise Linux for POWER, little endian. Additionally, criu can now be used for the following applications running in a Red Hat Enterprise Linux 7 runc container: vsftpd apache httpd sendmail postgresql mongodb mariadb mysql tomcat dnsmasq (BZ#1296578) The ibmvnic Device Driver has been added The ibmvnic Device Driver has been introduced as a Technology Preview in Red Hat Enterprise Linux 7.3 for IBM POWER architectures. vNIC (Virtual Network Interface Controller) is a new PowerVM virtual networking technology that delivers enterprise capabilities and simplifies network management. It is a high-performance, efficient technology that, when combined with an SR-IOV NIC, provides bandwidth control Quality of Service (QoS) capabilities at the virtual NIC level. vNIC significantly reduces virtualization overhead, resulting in lower latencies and fewer server resources, including CPU and memory, required for network virtualization. (BZ#947163) Kexec as a Technology Preview The kexec system call has been provided as a Technology Preview.
This system call enables loading and booting into another kernel from the currently running kernel, thus performing the function of the boot loader from within the kernel. Hardware initialization, which is normally done during a standard system boot, is not performed during a kexec boot, which significantly reduces the time required for a reboot. (BZ#1460849) | null | https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/7.3_release_notes/technology_previews_kernel |
Chapter 12. Multiple branches in Business Central | Chapter 12. Multiple branches in Business Central Multiple branches support in Business Central provides the ability to create a new branch based on an existing one, including all of its assets. All new, imported, and sample projects open in the default master branch. You can create as many branches as you need and can work on multiple branches interchangeably without impacting the original project on the master branch. Red Hat Decision Manager 7.13 includes support for persisting branches, which means that Business Central remembers the last branch used and will open in that branch when you log back in. 12.1. Creating branches You can create new branches in Business Central and name them whatever you like. Initially, you will only have the default master branch. When you create a new branch for a project, you are making a copy of the selected branch. You can make changes to the project on the new branch without impacting the original master branch version. Procedure In Business Central, go to Menu Design Projects . Click the project to create the new branch, for example the Mortgages sample project. Click master Add Branch . Figure 12.1. Create the new branch menu Type testBranch1 in the Name field and select master from the Add Branch window. Where testBranch1 is any name that you want to name the new branch. Select the branch that will be the base for the new branch from the Add Branch window. This can be any existing branch. Click Add . Figure 12.2. Add the new branch window After adding the new branch, you will be redirected to it, and it will contain all of the assets that you had in your project in the master branch. 12.2. Selecting branches You can switch between branches to make modifications to project assets and test the revised functionality. Procedure Click the current branch name and select the desired project branch from the drop-down list. Figure 12.3. Select a branch menu After selecting the branch, you are redirected to that branch containing the project and all of the assets that you had defined. 12.3. Deleting branches You can delete any branch except for the master branch. Business Central does not allow you to delete the master branch to avoid corrupting your environment. You must be in any branch other than master for the following procedure to work. Procedure Click in the upper-right corner of the screen and select Delete Branch . Figure 12.4. Delete a branch In the Delete Branch window, enter the name of the branch you want to delete. Click Delete Branch . The branch is deleted and the project branch switches to the master branch. 12.4. Building and deploying projects After your project is developed, you can build the project from the specified branch in Business Central and deploy it to the configured KIE Server. Procedure In Business Central, go to Menu Design Projects and click the project name. In the upper-right corner, click Deploy to build the project and deploy it to KIE Server. Note You can also select the Build & Install option to build the project and publish the KJAR file to the configured Maven repository without deploying to a KIE Server. In a development environment, you can click Deploy to deploy the built KJAR file to a KIE Server without stopping any running instances (if applicable), or click Redeploy to deploy the built KJAR file and replace all instances. The time you deploy or redeploy the built KJAR, the deployment unit (KIE container) is automatically updated in the same target KIE Server. 
In a production environment, the Redeploy option is disabled and you can click Deploy only to deploy the built KJAR file to a new deployment unit (KIE container) on a KIE Server. To configure the KIE Server environment mode, set the org.kie.server.mode system property to org.kie.server.mode=development or org.kie.server.mode=production . To configure the deployment behavior for a corresponding project in Business Central, go to project Settings General Settings Version and toggle the Development Mode option. By default, KIE Server and all new projects in Business Central are in development mode. You cannot deploy a project with Development Mode turned on or with a manually added SNAPSHOT version suffix to a KIE Server that is in production mode. If the build fails, address any problems described in the Alerts panel at the bottom of the screen. To review project deployment details, click View deployment details in the deployment banner at the top of the screen or in the Deploy drop-down menu. This option directs you to the Menu Deploy Execution Servers page. For more information about project deployment options, see Packaging and deploying an Red Hat Decision Manager project . | null | https://docs.redhat.com/en/documentation/red_hat_decision_manager/7.13/html/deploying_and_managing_red_hat_decision_manager_services/multiple-branches-con |
Chapter 9. Consoles and logging during installation | Chapter 9. Consoles and logging during installation The Red Hat Enterprise Linux installer uses the tmux terminal multiplexer to display and control several windows in addition to the main interface. Each of these windows serve a different purpose; they display several different logs, which can be used to troubleshoot issues during the installation process. One of the windows provides an interactive shell prompt with root privileges, unless this prompt was specifically disabled using a boot option or a Kickstart command. The terminal multiplexer is running in virtual console 1. To switch from the actual installation environment to tmux , press Ctrl + Alt + F1 . To go back to the main installation interface which runs in virtual console 6, press Ctrl + Alt + F6 . During the text mode installation, start in virtual console 1 ( tmux ), and switching to console 6 will open a shell prompt instead of a graphical interface. The console running tmux has five available windows; their contents are described in the following table, along with keyboard shortcuts. Note that the keyboard shortcuts are two-part: first press Ctrl + b , then release both keys, and press the number key for the window you want to use. You can also use Ctrl + b n , Alt+ Tab , and Ctrl + b p to switch to the or tmux window, respectively. Table 9.1. Available tmux windows Shortcut Contents Ctrl + b 1 Main installation program window. Contains text-based prompts (during text mode installation or if you use VNC direct mode), and also some debugging information. Ctrl + b 2 Interactive shell prompt with root privileges. Ctrl + b 3 Installation log; displays messages stored in /tmp/anaconda.log . Ctrl + b 4 Storage log; displays messages related to storage devices and configuration, stored in /tmp/storage.log . Ctrl + b 5 Program log; displays messages from utilities executed during the installation process, stored in /tmp/program.log . | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/9/html/interactively_installing_rhel_from_installation_media/consoles-logging-during-install_rhel-installer |
4.5. Migrating with virt-manager | 4.5. Migrating with virt-manager This section covers migrating a KVM guest virtual machine with virt-manager from one host physical machine to another. Open virt-manager Open virt-manager . Choose Applications System Tools Virtual Machine Manager from the main menu bar to launch virt-manager . Figure 4.1. Virt-Manager main menu Connect to the target host physical machine Connect to the target host physical machine by clicking on the File menu, then click Add Connection . Figure 4.2. Open Add Connection window Add connection The Add Connection window appears. Figure 4.3. Adding a connection to the target host physical machine Enter the following details: Hypervisor : Select QEMU/KVM . Method : Select the connection method. Username : Enter the user name for the remote host physical machine. Hostname : Enter the host name for the remote host physical machine. Click the Connect button. An SSH connection is used in this example, so the specified user's password must be entered in the step. Figure 4.4. Enter password Migrate guest virtual machines Open the list of guests inside the source host physical machine (click the small triangle on the left of the host name) and right click on the guest that is to be migrated ( guest1-rhel6-64 in this example) and click Migrate . Figure 4.5. Choosing the guest to be migrated In the New Host field, use the drop-down list to select the host physical machine you wish to migrate the guest virtual machine to and click Migrate . Figure 4.6. Choosing the destination host physical machine and starting the migration process A progress window will appear. Figure 4.7. Progress window virt-manager now displays the newly migrated guest virtual machine running in the destination host. The guest virtual machine that was running in the source host physical machine is now listed inthe Shutoff state. Figure 4.8. Migrated guest virtual machine running in the destination host physical machine Optional - View the storage details for the host physical machine In the Edit menu, click Connection Details , the Connection Details window appears. Click the Storage tab. The iSCSI target details for the destination host physical machine is shown. Note that the migrated guest virtual machine is listed as using the storage Figure 4.9. Storage details This host was defined by the following XML configuration: <pool type='iscsi'> <name>iscsirhel6guest</name> <source> <host name='virtlab22.example.com.'/> <device path='iqn.2001-05.com.iscsivendor:0-8a0906-fbab74a06-a700000017a4cc89-rhevh'/> </source> <target> <path>/dev/disk/by-path</path> </target> </pool> ... Figure 4.10. XML configuration for the destination host physical machine | [
"<pool type='iscsi'> <name>iscsirhel6guest</name> <source> <host name='virtlab22.example.com.'/> <device path='iqn.2001-05.com.iscsivendor:0-8a0906-fbab74a06-a700000017a4cc89-rhevh'/> </source> <target> <path>/dev/disk/by-path</path> </target> </pool>"
]
| https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/virtualization_administration_guide/sect-virtualization-kvm_live_migration-migrating_with_virt_manager |
Making open source more inclusive | Making open source more inclusive Red Hat is committed to replacing problematic language in our code, documentation, and web properties. We are beginning with these four terms: master, slave, blacklist, and whitelist. Because of the enormity of this endeavor, these changes will be implemented gradually over several upcoming releases. For more details, see our CTO Chris Wright's message . | null | https://docs.redhat.com/en/documentation/red_hat_openstack_platform/17.1/html/deploying_an_overcloud_in_a_red_hat_openshift_container_platform_cluster_with_director_operator/making-open-source-more-inclusive |
31.8. Specific Kernel Module Capabilities | 31.8. Specific Kernel Module Capabilities This section explains how to enable specific kernel capabilities using various kernel modules. 31.8.1. Using Channel Bonding Red Hat Enterprise Linux allows administrators to bind NICs together into a single channel using the bonding kernel module and a special network interface, called a channel bonding interface . Channel bonding enables two or more network interfaces to act as one, simultaneously increasing the bandwidth and providing redundancy. To channel bond multiple network interfaces, the administrator must perform the following steps: Configure a channel bonding interface as outlined in Section 11.2.4, "Channel Bonding Interfaces" . To enhance performance, adjust available module options to ascertain what combination works best. Pay particular attention to the miimon or arp_interval and the arp_ip_target parameters. See Section 31.8.1.1, "Bonding Module Directives" for a list of available options and how to quickly determine the best ones for your bonded interface. 31.8.1.1. Bonding Module Directives It is a good idea to test which channel bonding module parameters work best for your bonded interfaces before adding them to the BONDING_OPTS=" <bonding parameters> " directive in your bonding interface configuration file ( ifcfg-bond0 for example). Parameters to bonded interfaces can be configured without unloading (and reloading) the bonding module by manipulating files in the sysfs file system. sysfs is a virtual file system that represents kernel objects as directories, files and symbolic links. sysfs can be used to query for information about kernel objects, and can also manipulate those objects through the use of normal file system commands. The sysfs virtual file system has a line in /etc/fstab , and is mounted under the /sys/ directory. All bonding interfaces can be configured dynamically by interacting with and manipulating files under the /sys/class/net/ directory. In order to determine the best parameters for your bonding interface, create a channel bonding interface file such as ifcfg-bond0 by following the instructions in Section 11.2.4, "Channel Bonding Interfaces" . Insert the SLAVE=yes and MASTER=bond0 directives in the configuration files for each interface bonded to bond0. Once this is completed, you can proceed to testing the parameters. First, bring up the bond you created by running ifconfig bond <N> up as root: If you have correctly created the ifcfg-bond0 bonding interface file, you will be able to see bond0 listed in the output of running ifconfig (without any options): To view all existing bonds, even if they are not up, run: You can configure each bond individually by manipulating the files located in the /sys/class/net/bond <N> /bonding/ directory. First, the bond you are configuring must be taken down: As an example, to enable MII monitoring on bond0 with a 1 second interval, you could run (as root): To configure bond0 for balance-alb mode, you could run either: ...or, using the name of the mode: After configuring options for the bond in question, you can bring it up and test it by running ifconfig bond <N> up . If you decide to change the options, take the interface down, modify its parameters using sysfs , bring it back up, and re-test. 
Once you have determined the best set of parameters for your bond, add those parameters as a space-separated list to the BONDING_OPTS= directive of the /etc/sysconfig/network-scripts/ifcfg-bond <N> file for the bonding interface you are configuring. Whenever that bond is brought up (for example, by the system during the boot sequence if the ONBOOT=yes directive is set), the bonding options specified in the BONDING_OPTS will take effect for that bond. For more information on configuring bonding interfaces (and BONDING_OPTS ), see Section 11.2.4, "Channel Bonding Interfaces" . The following list provides the names of many of the more common channel bonding parameters, along with a descriptions of what they do. For more information, see the brief descriptions for each parm in modinfo bonding output, or the exhaustive descriptions in the bonding.txt file in the kernel-doc package (see Section 31.9, "Additional Resources" ). Bonding Interface Parameters arp_interval= <time_in_milliseconds> Specifies (in milliseconds) how often ARP monitoring occurs. When configuring this setting, a good starting point for this parameter is 1000 . Important It is essential that both arp_interval and arp_ip_target parameters are specified, or, alternatively, the miimon parameter is specified. Failure to do so can cause degradation of network performance in the event that a link fails. If using this setting while in mode=0 or mode=2 (the two load-balancing modes), the network switch must be configured to distribute packets evenly across the NICs. For more information on how to accomplish this, see the bonding.txt file in the kernel-doc package (see Section 31.9, "Additional Resources" ). The value is set to 0 by default, which disables it. arp_ip_target= <ip_address> \ufeff[ , <ip_address_2> ,... <ip_address_16> \ufeff ] Specifies the target IP address of ARP requests when the arp_interval parameter is enabled. Up to 16 IP addresses can be specified in a comma separated list. arp_validate= <value> Validate source/distribution of ARP probes; default is none . Other valid values are active , backup , and all . downdelay= <time_in_milliseconds> Specifies (in milliseconds) how long to wait after link failure before disabling the link. The value must be a multiple of the value specified in the miimon parameter. The value is set to 0 by default, which disables it. lacp_rate= <value> Specifies the rate at which link partners should transmit LACPDU packets in 802.3ad mode. Possible values are: slow or 0 - Default setting. This specifies that partners should transmit LACPDUs every 30 seconds. fast or 1 - Specifies that partners should transmit LACPDUs every 1 second. miimon= <time_in_milliseconds> Specifies (in milliseconds) how often MII link monitoring occurs. This is useful if high availability is required because MII is used to verify that the NIC is active. To verify that the driver for a particular NIC supports the MII tool, type the following command as root: In this command, replace <interface_name > with the name of the device interface, such as eth0 , not the bond interface. If MII is supported, the command returns: If using a bonded interface for high availability, the module for each NIC must support MII. Setting the value to 0 (the default), turns this feature off. When configuring this setting, a good starting point for this parameter is 100 . Important It is essential that both arp_interval and arp_ip_target parameters are specified, or, alternatively, the miimon parameter is specified. 
Failure to do so can cause degradation of network performance in the event that a link fails. mode= <value> Allows you to specify the bonding policy. The <value> can be one of: balance-rr or 0 - Sets a round-robin policy for fault tolerance and load balancing. Transmissions are received and sent out sequentially on each bonded slave interface beginning with the first one available. active-backup or 1 - Sets an active-backup policy for fault tolerance. Transmissions are received and sent out via the first available bonded slave interface. Another bonded slave interface is only used if the active bonded slave interface fails. balance-xor or 2 - Sets an XOR (exclusive-or) policy for fault tolerance and load balancing. Using this method, the interface matches up the incoming request's MAC address with the MAC address for one of the slave NICs. Once this link is established, transmissions are sent out sequentially beginning with the first available interface. broadcast or 3 - Sets a broadcast policy for fault tolerance. All transmissions are sent on all slave interfaces. 802.3ad or 4 - Sets an IEEE 802.3ad dynamic link aggregation policy. Creates aggregation groups that share the same speed and duplex settings. Transmits and receives on all slaves in the active aggregator. Requires a switch that is 802.3ad compliant. balance-tlb or 5 - Sets a Transmit Load Balancing (TLB) policy for fault tolerance and load balancing. The outgoing traffic is distributed according to the current load on each slave interface. Incoming traffic is received by the current slave. If the receiving slave fails, another slave takes over the MAC address of the failed slave. This mode is only suitable for local addresses known to the kernel bonding module and therefore cannot be used behind a bridge with virtual machines. balance-alb or 6 - Sets an Adaptive Load Balancing (ALB) policy for fault tolerance and load balancing. Includes transmit and receive load balancing for IPv4 traffic. Receive load balancing is achieved through ARP negotiation. This mode is only suitable for local addresses known to the kernel bonding module and therefore cannot be used behind a bridge with virtual machines. num_unsol_na= <number> Specifies the number of unsolicited IPv6 Neighbor Advertisements to be issued after a failover event. One unsolicited NA is issued immediately after the failover. The valid range is 0 - 255 ; the default value is 1 . This parameter affects only the active-backup mode. primary= <interface_name> Specifies the interface name, such as eth0 , of the primary device. The primary device is the first of the bonding interfaces to be used and is not abandoned unless it fails. This setting is particularly useful when one NIC in the bonding interface is faster and, therefore, able to handle a bigger load. This setting is only valid when the bonding interface is in active-backup mode. See the bonding.txt file in the kernel-doc package (see Section 31.9, "Additional Resources" ). primary_reselect= <value> Specifies the reselection policy for the primary slave. This affects how the primary slave is chosen to become the active slave when failure of the active slave or recovery of the primary slave occurs. This parameter is designed to prevent flip-flopping between the primary slave and other slaves. Possible values are: always or 0 (default) - The primary slave becomes the active slave whenever it comes back up. 
better or 1 - The primary slave becomes the active slave when it comes back up, if the speed and duplex of the primary slave is better than the speed and duplex of the current active slave. failure or 2 - The primary slave becomes the active slave only if the current active slave fails and the primary slave is up. The primary_reselect setting is ignored in two cases: If no slaves are active, the first slave to recover is made the active slave. When initially enslaved, the primary slave is always made the active slave. Changing the primary_reselect policy via sysfs will cause an immediate selection of the best active slave according to the new policy. This may or may not result in a change of the active slave, depending upon the circumstances updelay= <time_in_milliseconds> Specifies (in milliseconds) how long to wait before enabling a link. The value must be a multiple of the value specified in the miimon parameter. The value is set to 0 by default, which disables it. use_carrier= <number> Specifies whether or not miimon should use MII/ETHTOOL ioctls or netif_carrier_ok() to determine the link state. The netif_carrier_ok() function relies on the device driver to maintains its state with netif_carrier_ on/off ; most device drivers support this function. The MII/ETHROOL ioctls tools utilize a deprecated calling sequence within the kernel. However, this is still configurable in case your device driver does not support netif_carrier_ on/off . Valid values are: 1 - Default setting. Enables the use of netif_carrier_ok() . 0 - Enables the use of MII/ETHTOOL ioctls. Note If the bonding interface insists that the link is up when it should not be, it is possible that your network device driver does not support netif_carrier_ on/off . xmit_hash_policy= <value> Selects the transmit hash policy used for slave selection in balance-xor and 802.3ad modes. Possible values are: 0 or layer2 - Default setting. This parameter uses the XOR of hardware MAC addresses to generate the hash. The formula used is: This algorithm will place all traffic to a particular network peer on the same slave, and is 802.3ad compliant. 1 or layer3+4 - Uses upper layer protocol information (when available) to generate the hash. This allows for traffic to a particular network peer to span multiple slaves, although a single connection will not span multiple slaves. The formula for unfragmented TCP and UDP packets used is: For fragmented TCP or UDP packets and all other IP protocol traffic, the source and destination port information is omitted. For non-IP traffic, the formula is the same as the layer2 transmit hash policy. This policy intends to mimic the behavior of certain switches; particularly, Cisco switches with PFC2 as well as some Foundry and IBM products. The algorithm used by this policy is not 802.3ad compliant. 2 or layer2+3 - Uses a combination of layer2 and layer3 protocol information to generate the hash. Uses XOR of hardware MAC addresses and IP addresses to generate the hash. The formula is: This algorithm will place all traffic to a particular network peer on the same slave. For non-IP traffic, the formula is the same as for the layer2 transmit hash policy. This policy is intended to provide a more balanced distribution of traffic than layer2 alone, especially in environments where a layer3 gateway device is required to reach most destinations. This algorithm is 802.3ad compliant. | [
"~]# ifconfig bond0 up",
"~]# ifconfig bond0 Link encap:Ethernet HWaddr 00:00:00:00:00:00 UP BROADCAST RUNNING MASTER MULTICAST MTU:1500 Metric:1 RX packets:0 errors:0 dropped:0 overruns:0 frame:0 TX packets:0 errors:0 dropped:0 overruns:0 carrier:0 collisions:0 txqueuelen:0 RX bytes:0 (0.0 b) TX bytes:0 (0.0 b) eth0 Link encap:Ethernet HWaddr 52:54:00:26:9E:F1 inet addr:192.168.122.251 Bcast:192.168.122.255 Mask:255.255.255.0 inet6 addr: fe80::5054:ff:fe26:9ef1/64 Scope:Link UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1 RX packets:207 errors:0 dropped:0 overruns:0 frame:0 TX packets:205 errors:0 dropped:0 overruns:0 carrier:0 collisions:0 txqueuelen:1000 RX bytes:70374 (68.7 KiB) TX bytes:25298 (24.7 KiB) [output truncated]",
"~]# cat /sys/class/net/bonding_masters bond0",
"~]# ifconfig bond0 down",
"~]# echo 1000 > /sys/class/net/bond0/bonding/miimon",
"~]# echo 6 > /sys/class/net/bond0/bonding/mode",
"~]# echo balance-alb > /sys/class/net/bond0/bonding/mode",
"~]# ethtool <interface_name> | grep \"Link detected:\"",
"Link detected: yes",
"( <source_MAC_address> XOR <destination_MAC> ) MODULO <slave_count>",
"(( <source_port> XOR <dest_port> ) XOR (( <source_IP> XOR <dest_IP> ) AND 0xffff ) MODULO <slave_count>",
"((( <source_IP> XOR <dest_IP> ) AND 0xffff ) XOR ( <source_MAC> XOR <destination_MAC> )) MODULO <slave_count>"
]
| https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/deployment_guide/sec-Specific_Kernel_Module_Capabilities |
Chapter 40. Migrating to IdM on RHEL 7 from FreeIPA on non-RHEL Linux distributions | Chapter 40. Migrating to IdM on RHEL 7 from FreeIPA on non-RHEL Linux distributions To migrate a FreeIPA deployment on a non-RHEL Linux distribution to an Identity Management (IdM) deployment on RHEL 7 servers, you must first add a new RHEL 7 IdM Certificate Authority (CA) replica to your existing FreeIPA environment, transfer certificate-related roles to it, and then retire the non-RHEL FreeIPA servers. Important Performing an in-place conversion of a non-RHEL FreeIPA server to a RHEL 7 IdM server using the Convert2RHEL tool is not supported. Prerequisites You have determined the domain level of your non-RHEL FreeIPA certificate authority (CA) renewal server. For more information, see Displaying the Current Domain Level . You have installed RHEL 7.9 on the system that you want to become the new CA renewal server. Procedure To perform the migration, follow the same procedure as Migrating Identity Management from Red Hat Enterprise Linux 6 to Version 7 , with your non-RHEL FreeIPA CA server acting as the RHEL 6 server: If the original non-RHEL CA renewal server is running FreeIPA version 3.1 or older, Update the Identity Management Schema . To display the installed FreeIPA version, use the ipa --version command. Configure a RHEL 7 server and add it as an IdM replica to your current FreeIPA environment on the non-RHEL Linux distribution. If the domain level for your domain is 0, see Installing the RHEL 7 Replica . If the domain level is 1, follow the steps described in Creating the Replica: Introduction . Make the RHEL 7 replica the CA renewal server, stop generating the certificate revocation list (CRL) on the non-RHEL server and redirect CRL requests to the RHEL 7 replica. For details, see Transitioning the CA Services to the Red Hat Enterprise Linux 7 Server . Stop the original non-RHEL FreeIPA CA renewal server to force domain discovery to the new RHEL 7 server. For details, see Stop the Red Hat Enterprise Linux 6 Server . Install new replicas on other RHEL 7 systems and decommission the non-RHEL server. For details, see steps after migrating the master CA server . Important Red Hat recommends having IdM replicas of only one major RHEL version in your topology. For this reason, do not delay decommissioning the old server. Additional resources Migrating Identity Management from Red Hat Enterprise Linux 6 to Version 7 | null | https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/linux_domain_identity_authentication_and_policy_guide/migrating_to_idm_on_rhel_7_from_freeipa_on_non-rhel_linux_distributions |
Chapter 2. Logging 6.0 | Chapter 2. Logging 6.0 2.1. Release notes 2.1.1. Logging 6.0.3 This release includes RHBA-2024:10991 . 2.1.1.1. New features and enhancements With this update, the Loki Operator supports the configuring of the workload identity federation on the Google Cloud Platform (GCP) by using the Cluster Credential Operator (CCO) in OpenShift Container Platform 4.17 or later. ( LOG-6421 ) 2.1.1.2. Bug fixes Before this update, the collector used the default settings to collect audit logs, which did not account for back pressure from output receivers. With this update, the audit log collection is optimized for file handling and log reading. ( LOG-6034 ) Before this update, any namespace containing openshift or kube was treated as an infrastructure namespace. With this update, only the following namespaces are treated as infrastructure namespaces: default , kube , openshift , and namespaces that begin with openshift- or kube- . ( LOG-6204 ) Before this update, an input receiver service was repeatedly created and deleted, causing issues with mounting the TLS secrets. With this update, the service is created once and only deleted if it is not defined in the ClusterLogForwarder custom resource. ( LOG-6343 ) Before this update, pipeline validation might enter an infinite loop if a name was a substring of another name. With this update, stricter name equality checks prevent the infinite loop. ( LOG-6352 ) Before this update, the collector alerting rules included the summary and message fields. With this update, the collector alerting rules include the summary and description fields. ( LOG-6406 ) Before this update, setting up the custom audit inputs in the ClusterLogForwarder custom resource with configured LokiStack output caused errors due to the nil pointer dereference. With this update, the Operator performs the nil checks, preventing such errors. ( LOG-6441 ) Before this update, the collector did not correctly mount the /var/log/oauth-server/ path, which prevented the collection of the audit logs. With this update, the volume mount is added, and the audit logs are collected as expected. ( LOG-6486 ) Before this update, the collector did not correctly mount the oauth-apiserver audit log file. As a result, such audit logs were not collected. With this update, the volume mount is correctly mounted, and the logs are collected as expected. ( LOG-6543 ) 2.1.1.3. CVEs CVE-2019-12900 CVE-2024-2511 CVE-2024-3596 CVE-2024-4603 CVE-2024-4741 CVE-2024-5535 CVE-2024-10963 CVE-2024-50602 2.1.2. Logging 6.0.2 This release includes RHBA-2024:10051 . 2.1.2.1. Bug fixes Before this update, Loki did not correctly load some configurations, which caused issues when using Alibaba Cloud or IBM Cloud object storage. This update fixes the configuration-loading code in Loki, resolving the issue. ( LOG-5325 ) Before this update, the collector would discard audit log messages that exceeded the configured threshold. This modifies the audit configuration thresholds for the maximum line size as well as the number of bytes read during a read cycle. ( LOG-5998 ) Before this update, the Cluster Logging Operator did not watch and reconcile resources associated with an instance of a ClusterLogForwarder like it did in prior releases. This update modifies the operator to watch and reconcile all resources it owns and creates. 
( LOG-6264 ) Before this update, log events with an unknown severity level sent to Google Cloud Logging would trigger a warning in the vector collector, which would then default the severity to 'DEFAULT'. With this update, log severity levels are now standardized to match Google Cloud Logging specifications, and audit logs are assigned a severity of 'INFO'. ( LOG-6296 ) Before this update, when infrastructure namespaces were included in application inputs, the log_type was set as application . With this update, the log_type of infrastructure namespaces included in application inputs is set to infrastructure . ( LOG-6354 ) Before this update, specifying a value for the syslog.enrichment field of the ClusterLogForwarder added namespace_name , container_name , and pod_name to the messages of non-container logs. With this update, only container logs include namespace_name , container_name , and pod_name in their messages when syslog.enrichment is set. ( LOG-6402 ) 2.1.2.2. CVEs CVE-2024-6119 CVE-2024-6232 2.1.3. Logging 6.0.1 This release includes OpenShift Logging Bug Fix Release 6.0.1 . 2.1.3.1. Bug fixes With this update, the default memory limit for the collector has been increased from 1024 Mi to 2024 Mi. However, users should always adjust their resource limits according to their cluster specifications and needs. ( LOG-6180 ) Before this update, the Loki Operator failed to add the default namespace label to all AlertingRule resources, which caused the User-Workload-Monitoring Alertmanager to skip routing these alerts. This update adds the rule namespace as a label to all alerting and recording rules, resolving the issue and restoring proper alert routing in Alertmanager. ( LOG-6151 ) Before this update, the LokiStack ruler component view did not initialize properly, causing an invalid field error when the ruler component was disabled. This update ensures that the component view initializes with an empty value, resolving the issue. ( LOG-6129 ) Before this update, it was possible to set log_source in the prune filter, which could lead to inconsistent log data. With this update, the configuration is validated before being applied, and any configuration that includes log_source in the prune filter is rejected. ( LOG-6202 ) 2.1.3.2. CVEs CVE-2024-24791 CVE-2024-34155 CVE-2024-34156 CVE-2024-34158 CVE-2024-6104 CVE-2024-6119 CVE-2024-45490 CVE-2024-45491 CVE-2024-45492 2.1.4. Logging 6.0.0 This release includes Logging for Red Hat OpenShift Bug Fix Release 6.0.0 Note Logging is provided as an installable component, with a distinct release cycle from the core OpenShift Container Platform. The Red Hat OpenShift Container Platform Life Cycle Policy outlines release compatibility. Table 2.1. Upstream component versions logging Version Component Version Operator eventrouter logfilemetricexporter loki lokistack-gateway opa-openshift vector 6.0 0.4 1.1 3.1.0 0.1 0.1 0.37.1 2.1.5. Removal notice With this release, logging no longer supports the ClusterLogging.logging.openshift.io and ClusterLogForwarder.logging.openshift.io custom resources. Refer to the product documentation for details on the replacement features. ( LOG-5803 ) With this release, logging no longer manages or deploys log storage (such as Elasticsearch), visualization (such as Kibana), or Fluentd-based log collectors. ( LOG-5368 ) Note In order to continue to use Elasticsearch and Kibana managed by the elasticsearch-operator, the administrator must modify those object's ownerRefs before deleting the ClusterLogging resource. 2.1.6. 
New features and enhancements This feature introduces a new architecture for logging for Red Hat OpenShift by shifting component responsibilities to their relevant Operators, such as for storage, visualization, and collection. It introduces the ClusterLogForwarder.observability.openshift.io API for log collection and forwarding. Support for the ClusterLogging.logging.openshift.io and ClusterLogForwarder.logging.openshift.io APIs, along with the Red Hat managed Elastic stack (Elasticsearch and Kibana), is removed. Users are encouraged to migrate to the Red Hat LokiStack for log storage. Existing managed Elasticsearch deployments can be used for a limited time. Automated migration for log collection is not provided, so administrators need to create a new ClusterLogForwarder.observability.openshift.io specification to replace their custom resources. Refer to the official product documentation for more details. ( LOG-3493 ) With this release, the responsibility for deploying the logging view plugin shifts from the Red Hat OpenShift Logging Operator to the Cluster Observability Operator (COO). For new log storage installations that need visualization, the Cluster Observability Operator and the associated UIPlugin resource must be deployed. Refer to the Cluster Observability Operator Overview product documentation for more details. ( LOG-5461 ) This enhancement sets default requests and limits for Vector collector deployments' memory and CPU usage based on Vector documentation recommendations. ( LOG-4745 ) This enhancement updates Vector to align with the upstream version v0.37.1. ( LOG-5296 ) This enhancement introduces an alert that triggers when log collectors buffer logs to a node's file system and use over 15% of the available space, indicating potential back pressure issues. ( LOG-5381 ) This enhancement updates the selectors for all components to use common Kubernetes labels. ( LOG-5906 ) This enhancement changes the collector configuration to deploy as a ConfigMap instead of a secret, allowing users to view and edit the configuration when the ClusterLogForwarder is set to Unmanaged. ( LOG-5599 ) This enhancement adds the ability to configure the Vector collector log level using an annotation on the ClusterLogForwarder, with options including trace, debug, info, warn, error, or off. ( LOG-5372 ) This enhancement adds validation to reject configurations where Amazon CloudWatch outputs use multiple AWS roles, preventing incorrect log routing. ( LOG-5640 ) This enhancement removes the Log Bytes Collected and Log Bytes Sent graphs from the metrics dashboard. ( LOG-5964 ) This enhancement updates the must-gather functionality to only capture information for inspecting Logging 6.0 components, including Vector deployments from ClusterLogForwarder.observability.openshift.io resources and the Red Hat managed LokiStack. ( LOG-5949 ) This enhancement improves Azure storage secret validation by providing early warnings for specific error conditions. ( LOG-4571 ) This enhancement updates the ClusterLogForwarder API to follow the Kubernetes standards. ( LOG-5977 ) Example of a new configuration in the ClusterLogForwarder custom resource for the updated API apiVersion: observability.openshift.io/v1 kind: ClusterLogForwarder metadata: name: <name> spec: outputs: - name: <output_name> type: <output_type> <output_type>: tuning: deliveryMode: AtMostOnce 2.1.7. Technology Preview features This release introduces a Technology Preview feature for log forwarding using OpenTelemetry. 
A new output type,` OTLP`, allows sending JSON-encoded log records using the OpenTelemetry data model and resource semantic conventions. ( LOG-4225 ) 2.1.8. Bug fixes Before this update, the CollectorHighErrorRate and CollectorVeryHighErrorRate alerts were still present. With this update, both alerts are removed in the logging 6.0 release but might return in a future release. ( LOG-3432 ) 2.1.9. CVEs CVE-2024-34397 2.2. Logging 6.0 The ClusterLogForwarder custom resource (CR) is the central configuration point for log collection and forwarding. 2.2.1. Inputs and Outputs Inputs specify the sources of logs to be forwarded. Logging provides built-in input types: application , infrastructure , and audit , which select logs from different parts of your cluster. You can also define custom inputs based on namespaces or pod labels to fine-tune log selection. Outputs define the destinations where logs are sent. Each output type has its own set of configuration options, allowing you to customize the behavior and authentication settings. 2.2.2. Receiver Input Type The receiver input type enables the Logging system to accept logs from external sources. It supports two formats for receiving logs: http and syslog . The ReceiverSpec defines the configuration for a receiver input. 2.2.3. Pipelines and Filters Pipelines determine the flow of logs from inputs to outputs. A pipeline consists of one or more input refs, output refs, and optional filter refs. Filters can be used to transform or drop log messages within a pipeline. The order of filters matters, as they are applied sequentially, and earlier filters can prevent log messages from reaching later stages. 2.2.4. Operator Behavior The Cluster Logging Operator manages the deployment and configuration of the collector based on the managementState field: When set to Managed (default), the operator actively manages the logging resources to match the configuration defined in the spec. When set to Unmanaged , the operator does not take any action, allowing you to manually manage the logging components. 2.2.5. Validation Logging includes extensive validation rules and default values to ensure a smooth and error-free configuration experience. The ClusterLogForwarder resource enforces validation checks on required fields, dependencies between fields, and the format of input values. Default values are provided for certain fields, reducing the need for explicit configuration in common scenarios. 2.2.6. Quick Start Prerequisites You have access to an OpenShift Container Platform cluster with cluster-admin permissions. You installed the OpenShift CLI ( oc ). You have access to a supported object store. For example, AWS S3, Google Cloud Storage, Azure, Swift, Minio, or OpenShift Data Foundation. Procedure Install the Red Hat OpenShift Logging Operator , Loki Operator , and Cluster Observability Operator (COO) from OperatorHub. 
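Optional: before creating the object storage secret in the next step, you can confirm from the CLI that all three Operators installed successfully by checking their ClusterServiceVersion (CSV) status. The following is a minimal sketch, not part of the official procedure; the namespaces are assumptions based on the typical installation targets (the Red Hat OpenShift Logging Operator in openshift-logging , the Loki Operator in openshift-operators-redhat , and the Cluster Observability Operator in the namespace you selected during installation) and might differ in your cluster, so adjust them as needed.
# Red Hat OpenShift Logging Operator (assumed namespace: openshift-logging)
oc get csv -n openshift-logging
# Loki Operator (assumed namespace: openshift-operators-redhat)
oc get csv -n openshift-operators-redhat
# Alternatively, search all namespaces for the three Operators
oc get csv -A | grep -Ei 'logging|loki|observability'
Each Operator is ready when its CSV reports the Succeeded phase.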
Create a secret to access an existing object storage bucket: Example command for AWS USD oc create secret generic logging-loki-s3 \ --from-literal=bucketnames="<bucket_name>" \ --from-literal=endpoint="<aws_bucket_endpoint>" \ --from-literal=access_key_id="<aws_access_key_id>" \ --from-literal=access_key_secret="<aws_access_key_secret>" \ --from-literal=region="<aws_region_of_your_bucket>" \ -n openshift-logging Create a LokiStack custom resource (CR) in the openshift-logging namespace: apiVersion: loki.grafana.com/v1 kind: LokiStack metadata: name: logging-loki namespace: openshift-logging spec: managementState: Managed size: 1x.extra-small storage: schemas: - effectiveDate: '2022-06-01' version: v13 secret: name: logging-loki-s3 type: s3 storageClassName: gp3-csi tenants: mode: openshift-logging Create a service account for the collector: USD oc create sa collector -n openshift-logging Bind the ClusterRole to the service account: USD oc adm policy add-cluster-role-to-user logging-collector-logs-writer -z collector -n openshift-logging Create a UIPlugin to enable the Log section in the Observe tab: apiVersion: observability.openshift.io/v1alpha1 kind: UIPlugin metadata: name: logging spec: type: Logging logging: lokiStack: name: logging-loki Add additional roles to the collector service account: USD oc adm policy add-cluster-role-to-user collect-application-logs -z collector -n openshift-logging USD oc adm policy add-cluster-role-to-user collect-audit-logs -z collector -n openshift-logging USD oc adm policy add-cluster-role-to-user collect-infrastructure-logs -z collector -n openshift-logging Create a ClusterLogForwarder CR to configure log forwarding: apiVersion: observability.openshift.io/v1 kind: ClusterLogForwarder metadata: name: collector namespace: openshift-logging spec: serviceAccount: name: collector outputs: - name: default-lokistack type: lokiStack lokiStack: target: name: logging-loki namespace: openshift-logging authentication: token: from: serviceAccount tls: ca: key: service-ca.crt configMapName: openshift-service-ca.crt pipelines: - name: default-logstore inputRefs: - application - infrastructure outputRefs: - default-lokistack Verification Verify that logs are visible in the Log section of the Observe tab in the OpenShift Container Platform web console. 2.3. Upgrading to Logging 6.0 Logging v6.0 is a significant upgrade from releases, achieving several longstanding goals of Cluster Logging: Introduction of distinct operators to manage logging components (e.g., collectors, storage, visualization). Removal of support for managed log storage and visualization based on Elastic products (i.e., Elasticsearch, Kibana). Deprecation of the Fluentd log collector implementation. Removal of support for ClusterLogging.logging.openshift.io and ClusterLogForwarder.logging.openshift.io resources. Note The cluster-logging-operator does not provide an automated upgrade process. Given the various configurations for log collection, forwarding, and storage, no automated upgrade is provided by the cluster-logging-operator . This documentation assists administrators in converting existing ClusterLogging.logging.openshift.io and ClusterLogForwarder.logging.openshift.io specifications to the new API. Examples of migrated ClusterLogForwarder.observability.openshift.io resources for common use cases are included. 2.3.1. Using the oc explain command The oc explain command is an essential tool in the OpenShift CLI oc that provides detailed descriptions of the fields within Custom Resources (CRs). 
This command is invaluable for administrators and developers who are configuring or troubleshooting resources in an OpenShift cluster. 2.3.1.1. Resource Descriptions oc explain offers in-depth explanations of all fields associated with a specific object. This includes standard resources like pods and services, as well as more complex entities like statefulsets and custom resources defined by Operators. To view the documentation for the outputs field of the ClusterLogForwarder custom resource, you can use: USD oc explain clusterlogforwarders.observability.openshift.io.spec.outputs Note In place of clusterlogforwarder the short form obsclf can be used. This will display detailed information about these fields, including their types, default values, and any associated sub-fields. 2.3.1.2. Hierarchical Structure The command displays the structure of resource fields in a hierarchical format, clarifying the relationships between different configuration options. For instance, here's how you can drill down into the storage configuration for a LokiStack custom resource: USD oc explain lokistacks.loki.grafana.com USD oc explain lokistacks.loki.grafana.com.spec USD oc explain lokistacks.loki.grafana.com.spec.storage USD oc explain lokistacks.loki.grafana.com.spec.storage.schemas Each command reveals a deeper level of the resource specification, making the structure clear. 2.3.1.3. Type Information oc explain also indicates the type of each field (such as string, integer, or boolean), allowing you to verify that resource definitions use the correct data types. For example: USD oc explain lokistacks.loki.grafana.com.spec.size This will show that size should be defined using an integer value. 2.3.1.4. Default Values When applicable, the command shows the default values for fields, providing insights into what values will be used if none are explicitly specified. Again using lokistacks.loki.grafana.com as an example: USD oc explain lokistacks.spec.template.distributor.replicas Example output GROUP: loki.grafana.com KIND: LokiStack VERSION: v1 FIELD: replicas <integer> DESCRIPTION: Replicas defines the number of replica pods of the component. 2.3.2. Log Storage The only managed log storage solution available in this release is a Lokistack, managed by the loki-operator . This solution, previously available as the preferred alternative to the managed Elasticsearch offering, remains unchanged in its deployment process. Important To continue using an existing Red Hat managed Elasticsearch or Kibana deployment provided by the elasticsearch-operator , remove the owner references from the Elasticsearch resource named elasticsearch , and the Kibana resource named kibana in the openshift-logging namespace before removing the ClusterLogging resource named instance in the same namespace. Temporarily set ClusterLogging to state Unmanaged USD oc -n openshift-logging patch clusterlogging/instance -p '{"spec":{"managementState": "Unmanaged"}}' --type=merge Remove ClusterLogging ownerReferences from the Elasticsearch resource The following command ensures that ClusterLogging no longer owns the Elasticsearch resource. Updates to the ClusterLogging resource's logStore field will no longer affect the Elasticsearch resource. USD oc -n openshift-logging patch elasticsearch/elasticsearch -p '{"metadata":{"ownerReferences": []}}' --type=merge Remove ClusterLogging ownerReferences from the Kibana resource The following command ensures that ClusterLogging no longer owns the Kibana resource. 
Updates to the ClusterLogging resource's visualization field will no longer affect the Kibana resource. USD oc -n openshift-logging patch kibana/kibana -p '{"metadata":{"ownerReferences": []}}' --type=merge Set ClusterLogging to state Managed USD oc -n openshift-logging patch clusterlogging/instance -p '{"spec":{"managementState": "Managed"}}' --type=merge 2.3.3. Log Visualization The OpenShift console UI plugin for log visualization has been moved to the cluster-observability-operator from the cluster-logging-operator . 2.3.4. Log Collection and Forwarding Log collection and forwarding configurations are now specified under the new API , part of the observability.openshift.io API group. The following sections highlight the differences from the old API resources. Note Vector is the only supported collector implementation. 2.3.5. Management, Resource Allocation, and Workload Scheduling Configuration for management state (e.g., Managed, Unmanaged), resource requests and limits, tolerations, and node selection is now part of the new ClusterLogForwarder API. Configuration apiVersion: "logging.openshift.io/v1" kind: "ClusterLogging" spec: managementState: "Managed" collection: resources: limits: {} requests: {} nodeSelector: {} tolerations: {} Current Configuration apiVersion: "observability.openshift.io/v1" kind: ClusterLogForwarder spec: managementState: Managed collector: resources: limits: {} requests: {} nodeSelector: {} tolerations: {} 2.3.6. Input Specifications The input specification is an optional part of the ClusterLogForwarder specification. Administrators can continue to use the predefined values of application , infrastructure , and audit to collect these sources. 2.3.6.1. Application Inputs Namespace and container inclusions and exclusions have been consolidated into a single field. 5.9 Application Input with Namespace and Container Includes and Excludes apiVersion: "logging.openshift.io/v1" kind: ClusterLogForwarder spec: inputs: - name: application-logs type: application application: namespaces: - foo - bar includes: - namespace: my-important container: main excludes: - container: too-verbose 6.0 Application Input with Namespace and Container Includes and Excludes apiVersion: "observability.openshift.io/v1" kind: ClusterLogForwarder spec: inputs: - name: application-logs type: application application: includes: - namespace: foo - namespace: bar - namespace: my-important container: main excludes: - container: too-verbose Note application , infrastructure , and audit are reserved words and cannot be used as names when defining an input. 2.3.6.2. Input Receivers Changes to input receivers include: Explicit configuration of the type at the receiver level. Port settings moved to the receiver level. 5.9 Input Receivers apiVersion: "logging.openshift.io/v1" kind: ClusterLogForwarder spec: inputs: - name: an-http receiver: http: port: 8443 format: kubeAPIAudit - name: a-syslog receiver: type: syslog syslog: port: 9442 6.0 Input Receivers apiVersion: "observability.openshift.io/v1" kind: ClusterLogForwarder spec: inputs: - name: an-http type: receiver receiver: type: http port: 8443 http: format: kubeAPIAudit - name: a-syslog type: receiver receiver: type: syslog port: 9442 2.3.7. Output Specifications High-level changes to output specifications include: URL settings moved to each output type specification. Tuning parameters moved to each output type specification. Separation of TLS configuration from authentication. 
Explicit configuration of keys and secret/configmap for TLS and authentication. 2.3.8. Secrets and TLS Configuration Secrets and TLS configurations are now separated into authentication and TLS configuration for each output. They must be explicitly defined in the specification rather than relying on administrators to define secrets with recognized keys. Upgrading TLS and authorization configurations requires administrators to understand previously recognized keys to continue using existing secrets. Examples in the following sections provide details on how to configure ClusterLogForwarder secrets to forward to existing Red Hat managed log storage solutions. 2.3.9. Red Hat Managed Elasticsearch v5.9 Forwarding to Red Hat Managed Elasticsearch apiVersion: logging.openshift.io/v1 kind: ClusterLogging metadata: name: instance namespace: openshift-logging spec: logStore: type: elasticsearch v6.0 Forwarding to Red Hat Managed Elasticsearch apiVersion: observability.openshift.io/v1 kind: ClusterLogForwarder metadata: name: instance namespace: openshift-logging spec: serviceAccount: name: <service_account_name> managementState: Managed outputs: - name: audit-elasticsearch type: elasticsearch elasticsearch: url: https://elasticsearch:9200 version: 6 index: audit-write tls: ca: key: ca-bundle.crt secretName: collector certificate: key: tls.crt secretName: collector key: key: tls.key secretName: collector - name: app-elasticsearch type: elasticsearch elasticsearch: url: https://elasticsearch:9200 version: 6 index: app-write tls: ca: key: ca-bundle.crt secretName: collector certificate: key: tls.crt secretName: collector key: key: tls.key secretName: collector - name: infra-elasticsearch type: elasticsearch elasticsearch: url: https://elasticsearch:9200 version: 6 index: infra-write tls: ca: key: ca-bundle.crt secretName: collector certificate: key: tls.crt secretName: collector key: key: tls.key secretName: collector pipelines: - name: app inputRefs: - application outputRefs: - app-elasticsearch - name: audit inputRefs: - audit outputRefs: - audit-elasticsearch - name: infra inputRefs: - infrastructure outputRefs: - infra-elasticsearch 2.3.10. Red Hat Managed LokiStack v5.9 Forwarding to Red Hat Managed LokiStack apiVersion: logging.openshift.io/v1 kind: ClusterLogging metadata: name: instance namespace: openshift-logging spec: logStore: type: lokistack lokistack: name: lokistack-dev v6.0 Forwarding to Red Hat Managed LokiStack apiVersion: observability.openshift.io/v1 kind: ClusterLogForwarder metadata: name: instance namespace: openshift-logging spec: outputs: - name: default-lokistack type: lokiStack lokiStack: target: name: lokistack-dev namespace: openshift-logging authentication: token: from: serviceAccount tls: ca: key: service-ca.crt configMapName: openshift-service-ca.crt pipelines: - outputRefs: - default-lokistack - inputRefs: - application - infrastructure 2.3.11. Filters and Pipeline Configuration Pipeline configurations now define only the routing of input sources to their output destinations, with any required transformations configured separately as filters. All attributes of pipelines from releases have been converted to filters in this release. Individual filters are defined in the filters specification and referenced by a pipeline. 
5.9 Filters apiVersion: logging.openshift.io/v1 kind: ClusterLogForwarder spec: pipelines: - name: application-logs parse: json labels: foo: bar detectMultilineErrors: true 6.0 Filter Configuration apiVersion: observability.openshift.io/v1 kind: ClusterLogForwarder spec: filters: - name: detectexception type: detectMultilineException - name: parse-json type: parse - name: labels type: openshiftLabels openshiftLabels: foo: bar pipelines: - name: application-logs filterRefs: - detectexception - labels - parse-json 2.3.12. Validation and Status Most validations are enforced when a resource is created or updated, providing immediate feedback. This is a departure from releases, where validation occurred post-creation and required inspecting the resource status. Some validation still occurs post-creation for cases where it is not possible to validate at creation or update time. Instances of the ClusterLogForwarder.observability.openshift.io must satisfy the following conditions before the operator will deploy the log collector: Authorized, Valid, Ready. An example of these conditions is: 6.0 Status Conditions apiVersion: observability.openshift.io/v1 kind: ClusterLogForwarder status: conditions: - lastTransitionTime: "2024-09-13T03:28:44Z" message: 'permitted to collect log types: [application]' reason: ClusterRolesExist status: "True" type: observability.openshift.io/Authorized - lastTransitionTime: "2024-09-13T12:16:45Z" message: "" reason: ValidationSuccess status: "True" type: observability.openshift.io/Valid - lastTransitionTime: "2024-09-13T12:16:45Z" message: "" reason: ReconciliationComplete status: "True" type: Ready filterConditions: - lastTransitionTime: "2024-09-13T13:02:59Z" message: filter "detectexception" is valid reason: ValidationSuccess status: "True" type: observability.openshift.io/ValidFilter-detectexception - lastTransitionTime: "2024-09-13T13:02:59Z" message: filter "parse-json" is valid reason: ValidationSuccess status: "True" type: observability.openshift.io/ValidFilter-parse-json inputConditions: - lastTransitionTime: "2024-09-13T12:23:03Z" message: input "application1" is valid reason: ValidationSuccess status: "True" type: observability.openshift.io/ValidInput-application1 outputConditions: - lastTransitionTime: "2024-09-13T13:02:59Z" message: output "default-lokistack-application1" is valid reason: ValidationSuccess status: "True" type: observability.openshift.io/ValidOutput-default-lokistack-application1 pipelineConditions: - lastTransitionTime: "2024-09-13T03:28:44Z" message: pipeline "default-before" is valid reason: ValidationSuccess status: "True" type: observability.openshift.io/ValidPipeline-default-before Note Conditions that are satisfied and applicable have a "status" value of "True". Conditions with a status other than "True" provide a reason and a message explaining the issue. 2.4. Configuring log forwarding The ClusterLogForwarder (CLF) allows users to configure forwarding of logs to various destinations. It provides a flexible way to select log messages from different sources, send them through a pipeline that can transform or filter them, and forward them to one or more outputs. Key Functions of the ClusterLogForwarder Selects log messages using inputs Forwards logs to external destinations using outputs Filters, transforms, and drops log messages using filters Defines log forwarding pipelines connecting inputs, filters and outputs 2.4.1. 
Setting up log collection This release of Cluster Logging requires administrators to explicitly grant log collection permissions to the service account associated with ClusterLogForwarder . This was not required in releases for the legacy logging scenario consisting of a ClusterLogging and, optionally, a ClusterLogForwarder.logging.openshift.io resource. The Red Hat OpenShift Logging Operator provides collect-audit-logs , collect-application-logs , and collect-infrastructure-logs cluster roles, which enable the collector to collect audit logs, application logs, and infrastructure logs respectively. Setup log collection by binding the required cluster roles to your service account. 2.4.1.1. Legacy service accounts To use the existing legacy service account logcollector , create the following ClusterRoleBinding : USD oc adm policy add-cluster-role-to-user collect-application-logs system:serviceaccount:openshift-logging:logcollector USD oc adm policy add-cluster-role-to-user collect-infrastructure-logs system:serviceaccount:openshift-logging:logcollector Additionally, create the following ClusterRoleBinding if collecting audit logs: USD oc adm policy add-cluster-role-to-user collect-audit-logs system:serviceaccount:openshift-logging:logcollector 2.4.1.2. Creating service accounts Prerequisites The Red Hat OpenShift Logging Operator is installed in the openshift-logging namespace. You have administrator permissions. Procedure Create a service account for the collector. If you want to write logs to storage that requires a token for authentication, you must include a token in the service account. Bind the appropriate cluster roles to the service account: Example binding command USD oc adm policy add-cluster-role-to-user <cluster_role_name> system:serviceaccount:<namespace_name>:<service_account_name> 2.4.1.2.1. Cluster Role Binding for your Service Account The role_binding.yaml file binds the ClusterLogging operator's ClusterRole to a specific ServiceAccount, allowing it to manage Kubernetes resources cluster-wide. apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRoleBinding metadata: name: manager-rolebinding roleRef: 1 apiGroup: rbac.authorization.k8s.io 2 kind: ClusterRole 3 name: cluster-logging-operator 4 subjects: 5 - kind: ServiceAccount 6 name: cluster-logging-operator 7 namespace: openshift-logging 8 1 roleRef: References the ClusterRole to which the binding applies. 2 apiGroup: Indicates the RBAC API group, specifying that the ClusterRole is part of Kubernetes' RBAC system. 3 kind: Specifies that the referenced role is a ClusterRole, which applies cluster-wide. 4 name: The name of the ClusterRole being bound to the ServiceAccount, here cluster-logging-operator. 5 subjects: Defines the entities (users or service accounts) that are being granted the permissions from the ClusterRole. 6 kind: Specifies that the subject is a ServiceAccount. 7 Name: The name of the ServiceAccount being granted the permissions. 8 namespace: Indicates the namespace where the ServiceAccount is located. 2.4.1.2.2. Writing application logs The write-application-logs-clusterrole.yaml file defines a ClusterRole that grants permissions to write application logs to the Loki logging application. apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRole metadata: name: cluster-logging-write-application-logs rules: 1 - apiGroups: 2 - loki.grafana.com 3 resources: 4 - application 5 resourceNames: 6 - logs 7 verbs: 8 - create 9 Annotations <1> rules: Specifies the permissions granted by this ClusterRole. 
<2> apiGroups: Refers to the API group loki.grafana.com, which relates to the Loki logging system. <3> loki.grafana.com: The API group for managing Loki-related resources. <4> resources: The resource type that the ClusterRole grants permission to interact with. <5> application: Refers to the application resources within the Loki logging system. <6> resourceNames: Specifies the names of resources that this role can manage. <7> logs: Refers to the log resources that can be created. <8> verbs: The actions allowed on the resources. <9> create: Grants permission to create new logs in the Loki system. 2.4.1.2.3. Writing audit logs The write-audit-logs-clusterrole.yaml file defines a ClusterRole that grants permissions to create audit logs in the Loki logging system. apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRole metadata: name: cluster-logging-write-audit-logs rules: 1 - apiGroups: 2 - loki.grafana.com 3 resources: 4 - audit 5 resourceNames: 6 - logs 7 verbs: 8 - create 9 1 rules: Defines the permissions granted by this ClusterRole. 2 apiGroups: Specifies the API group loki.grafana.com. 3 loki.grafana.com: The API group responsible for Loki logging resources. 4 resources: Refers to the resource type this role manages, in this case, audit. 5 audit: Specifies that the role manages audit logs within Loki. 6 resourceNames: Defines the specific resources that the role can access. 7 logs: Refers to the logs that can be managed under this role. 8 verbs: The actions allowed on the resources. 9 create: Grants permission to create new audit logs. 2.4.1.2.4. Writing infrastructure logs The write-infrastructure-logs-clusterrole.yaml file defines a ClusterRole that grants permission to create infrastructure logs in the Loki logging system. Sample YAML apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRole metadata: name: cluster-logging-write-infrastructure-logs rules: 1 - apiGroups: 2 - loki.grafana.com 3 resources: 4 - infrastructure 5 resourceNames: 6 - logs 7 verbs: 8 - create 9 1 rules: Specifies the permissions this ClusterRole grants. 2 apiGroups: Specifies the API group for Loki-related resources. 3 loki.grafana.com: The API group managing the Loki logging system. 4 resources: Defines the resource type that this role can interact with. 5 infrastructure: Refers to infrastructure-related resources that this role manages. 6 resourceNames: Specifies the names of resources this role can manage. 7 logs: Refers to the log resources related to infrastructure. 8 verbs: The actions permitted by this role. 9 create: Grants permission to create infrastructure logs in the Loki system. 2.4.1.2.5. ClusterLogForwarder editor role The clusterlogforwarder-editor-role.yaml file defines a ClusterRole that allows users to manage ClusterLogForwarders in OpenShift. apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRole metadata: name: clusterlogforwarder-editor-role rules: 1 - apiGroups: 2 - observability.openshift.io 3 resources: 4 - clusterlogforwarders 5 verbs: 6 - create 7 - delete 8 - get 9 - list 10 - patch 11 - update 12 - watch 13 1 rules: Specifies the permissions this ClusterRole grants. 2 apiGroups: Refers to the OpenShift-specific API group. 3 observability.openshift.io: The API group for managing observability resources, like logging. 4 resources: Specifies the resources this role can manage. 5 clusterlogforwarders: Refers to the log forwarding resources in OpenShift. 6 verbs: Specifies the actions allowed on the ClusterLogForwarders.
7 create: Grants permission to create new ClusterLogForwarders. 8 delete: Grants permission to delete existing ClusterLogForwarders. 9 get: Grants permission to retrieve information about specific ClusterLogForwarders. 10 list: Allows listing all ClusterLogForwarders. 11 patch: Grants permission to partially modify ClusterLogForwarders. 12 update: Grants permission to update existing ClusterLogForwarders. 13 watch: Grants permission to monitor changes to ClusterLogForwarders. 2.4.2. Modifying log level in collector To modify the log level in the collector, you can set the observability.openshift.io/log-level annotation to trace , debug , info , warn , error , and off . Example log level annotation apiVersion: observability.openshift.io/v1 kind: ClusterLogForwarder metadata: name: collector annotations: observability.openshift.io/log-level: debug # ... 2.4.3. Managing the Operator The ClusterLogForwarder resource has a managementState field that controls whether the operator actively manages its resources or leaves them Unmanaged: Managed (default) The operator will drive the logging resources to match the desired state in the CLF spec. Unmanaged The operator will not take any action related to the logging components. This allows administrators to temporarily pause log forwarding by setting managementState to Unmanaged . 2.4.4. Structure of the ClusterLogForwarder The CLF has a spec section that contains the following key components: Inputs Select log messages to be forwarded. Built-in input types application , infrastructure and audit forward logs from different parts of the cluster. You can also define custom inputs. Outputs Define destinations to forward logs to. Each output has a unique name and type-specific configuration. Pipelines Define the path logs take from inputs, through filters, to outputs. Pipelines have a unique name and consist of a list of input, output and filter names. Filters Transform or drop log messages in the pipeline. Users can define filters that match certain log fields and drop or modify the messages. Filters are applied in the order specified in the pipeline. 2.4.4.1. Inputs Inputs are configured in an array under spec.inputs . There are three built-in input types: application Selects logs from all application containers, excluding those in infrastructure namespaces. infrastructure Selects logs from nodes and from infrastructure components running in the following namespaces: default kube openshift Containing the kube- or openshift- prefix audit Selects logs from the OpenShift API server audit logs, Kubernetes API server audit logs, ovn audit logs, and node audit logs from auditd. Users can define custom inputs of type application that select logs from specific namespaces or using pod labels. 2.4.4.2. Outputs Outputs are configured in an array under spec.outputs . Each output must have a unique name and a type. Supported types are: azureMonitor Forwards logs to Azure Monitor. cloudwatch Forwards logs to AWS CloudWatch. elasticsearch Forwards logs to an external Elasticsearch instance. googleCloudLogging Forwards logs to Google Cloud Logging. http Forwards logs to a generic HTTP endpoint. kafka Forwards logs to a Kafka broker. loki Forwards logs to a Loki logging backend. lokistack Forwards logs to the logging supported combination of Loki and web proxy with OpenShift Container Platform authentication integration. LokiStack's proxy uses OpenShift Container Platform authentication to enforce multi-tenancy otlp Forwards logs using the OpenTelemetry Protocol. 
splunk Forwards logs to Splunk. syslog Forwards logs to an external syslog server. Each output type has its own configuration fields. 2.4.4.3. Pipelines Pipelines are configured in an array under spec.pipelines . Each pipeline must have a unique name and consists of: inputRefs Names of inputs whose logs should be forwarded to this pipeline. outputRefs Names of outputs to send logs to. filterRefs (optional) Names of filters to apply. The order of filterRefs matters, as they are applied sequentially. Earlier filters can drop messages that will not be processed by later filters. 2.4.4.4. Filters Filters are configured in an array under spec.filters . They can match incoming log messages based on the value of structured fields and modify or drop them. Administrators can configure the following types of filters: 2.4.4.5. Enabling multi-line exception detection Enables multi-line error detection of container logs. Warning Enabling this feature could have performance implications and may require additional computing resources or alternate logging solutions. Log parsers often incorrectly identify separate lines of the same exception as separate exceptions. This leads to extra log entries and an incomplete or inaccurate view of the traced information. Example java exception java.lang.NullPointerException: Cannot invoke "String.toString()" because "<param1>" is null at testjava.Main.handle(Main.java:47) at testjava.Main.printMe(Main.java:19) at testjava.Main.main(Main.java:10) To enable logging to detect multi-line exceptions and reassemble them into a single log entry, ensure that the ClusterLogForwarder Custom Resource (CR) contains a detectMultilineErrors field under the .spec.filters . Example ClusterLogForwarder CR apiVersion: "observability.openshift.io/v1" kind: ClusterLogForwarder metadata: name: <log_forwarder_name> namespace: <log_forwarder_namespace> spec: serviceAccount: name: <service_account_name> filters: - name: <name> type: detectMultilineException pipelines: - inputRefs: - <input-name> name: <pipeline-name> filterRefs: - <filter-name> outputRefs: - <output-name> 2.4.4.5.1. Details When log messages appear as a consecutive sequence forming an exception stack trace, they are combined into a single, unified log record. The first log message's content is replaced with the concatenated content of all the message fields in the sequence. The collector supports the following languages: Java JS Ruby Python Golang PHP Dart 2.4.4.6. Configuring content filters to drop unwanted log records When the drop filter is configured, the log collector evaluates log streams according to the filters before forwarding. The collector drops unwanted log records that match the specified configuration. Procedure Add a configuration for a filter to the filters spec in the ClusterLogForwarder CR. The following example shows how to configure the ClusterLogForwarder CR to drop log records based on regular expressions: Example ClusterLogForwarder CR apiVersion: observability.openshift.io/v1 kind: ClusterLogForwarder metadata: # ... spec: serviceAccount: name: <service_account_name> filters: - name: <filter_name> type: drop 1 drop: 2 - test: 3 - field: .kubernetes.labels."foo-bar/baz" 4 matches: .+ 5 - field: .kubernetes.pod_name notMatches: "my-pod" 6 pipelines: - name: <pipeline_name> 7 filterRefs: ["<filter_name>"] # ... 1 Specifies the type of filter. The drop filter drops log records that match the filter configuration. 2 Specifies configuration options for applying the drop filter. 
3 Specifies the configuration for tests that are used to evaluate whether a log record is dropped. If all the conditions specified for a test are true, the test passes and the log record is dropped. When multiple tests are specified for the drop filter configuration, if any of the tests pass, the record is dropped. If there is an error evaluating a condition, for example, the field is missing from the log record being evaluated, that condition evaluates to false. 4 Specifies a dot-delimited field path, which is a path to a field in the log record. The path can contain alpha-numeric characters and underscores ( a-zA-Z0-9_ ), for example, .kubernetes.namespace_name . If segments contain characters outside of this range, the segment must be in quotes, for example, .kubernetes.labels."foo.bar-bar/baz" . You can include multiple field paths in a single test configuration, but they must all evaluate to true for the test to pass and the drop filter to be applied. 5 Specifies a regular expression. If log records match this regular expression, they are dropped. You can set either the matches or notMatches condition for a single field path, but not both. 6 Specifies a regular expression. If log records do not match this regular expression, they are dropped. You can set either the matches or notMatches condition for a single field path, but not both. 7 Specifies the pipeline that the drop filter is applied to. Apply the ClusterLogForwarder CR by running the following command: USD oc apply -f <filename>.yaml Additional examples The following additional example shows how you can configure the drop filter to only keep higher priority log records: apiVersion: observability.openshift.io/v1 kind: ClusterLogForwarder metadata: # ... spec: serviceAccount: name: <service_account_name> filters: - name: important type: drop drop: - test: - field: .message notMatches: "(?i)critical|error" - field: .level matches: "info|warning" # ... In addition to including multiple field paths in a single test configuration, you can also include additional tests that are treated as OR checks. In the following example, records are dropped if either test configuration evaluates to true. However, for the second test configuration, both field specs must be true for it to be evaluated to true: apiVersion: observability.openshift.io/v1 kind: ClusterLogForwarder metadata: # ... spec: serviceAccount: name: <service_account_name> filters: - name: important type: drop drop: - test: - field: .kubernetes.namespace_name matches: "^open" - test: - field: .log_type matches: "application" - field: .kubernetes.pod_name notMatches: "my-pod" # ... 2.4.4.7. Overview of API audit filter OpenShift API servers generate audit events for each API call, detailing the request, response, and the identity of the requester, leading to large volumes of data. The API Audit filter uses rules to enable the exclusion of non-essential events and the reduction of event size, facilitating a more manageable audit trail. Rules are checked in order, and checking stops at the first match. The amount of data that is included in an event is determined by the value of the level field: None : The event is dropped. Metadata : Audit metadata is included, request and response bodies are removed. Request : Audit metadata and the request body are included, the response body is removed. RequestResponse : All data is included: metadata, request body and response body. The response body can be very large. 
For example, oc get pods -A generates a response body containing the YAML description of every pod in the cluster. The ClusterLogForwarder custom resource (CR) uses the same format as the standard Kubernetes audit policy , while providing the following additional functions: Wildcards Names of users, groups, namespaces, and resources can have a leading or trailing * asterisk character. For example, the namespace openshift-\* matches openshift-apiserver or openshift-authentication . Resource \*/status matches Pod/status or Deployment/status . Default Rules Events that do not match any rule in the policy are filtered as follows: Read-only system events such as get , list , and watch are dropped. Service account write events that occur within the same namespace as the service account are dropped. All other events are forwarded, subject to any configured rate limits. To disable these defaults, either end your rules list with a rule that has only a level field or add an empty rule. Omit Response Codes A list of integer status codes to omit. You can drop events based on the HTTP status code in the response by using the OmitResponseCodes field, which lists HTTP status codes for which no events are created. The default value is [404, 409, 422, 429] . If the value is an empty list, [] , then no status codes are omitted. The ClusterLogForwarder CR audit policy acts in addition to the OpenShift Container Platform audit policy. The ClusterLogForwarder CR audit filter changes what the log collector forwards and provides the ability to filter by verb, user, group, namespace, or resource. You can create multiple filters to send different summaries of the same audit stream to different places. For example, you can send a detailed stream to the local cluster log store and a less detailed stream to a remote site. Note You must have a cluster role collect-audit-logs to collect the audit logs. The following example provided is intended to illustrate the range of rules possible in an audit policy and is not a recommended configuration. Example audit policy apiVersion: observability.openshift.io/v1 kind: ClusterLogForwarder metadata: name: <log_forwarder_name> namespace: <log_forwarder_namespace> spec: serviceAccount: name: <service_account_name> pipelines: - name: my-pipeline inputRefs: audit 1 filterRefs: my-policy 2 filters: - name: my-policy type: kubeAPIAudit kubeAPIAudit: # Don't generate audit events for all requests in RequestReceived stage. omitStages: - "RequestReceived" rules: # Log pod changes at RequestResponse level - level: RequestResponse resources: - group: "" resources: ["pods"] # Log "pods/log", "pods/status" at Metadata level - level: Metadata resources: - group: "" resources: ["pods/log", "pods/status"] # Don't log requests to a configmap called "controller-leader" - level: None resources: - group: "" resources: ["configmaps"] resourceNames: ["controller-leader"] # Don't log watch requests by the "system:kube-proxy" on endpoints or services - level: None users: ["system:kube-proxy"] verbs: ["watch"] resources: - group: "" # core API group resources: ["endpoints", "services"] # Don't log authenticated requests to certain non-resource URL paths. - level: None userGroups: ["system:authenticated"] nonResourceURLs: - "/api*" # Wildcard matching. - "/version" # Log the request body of configmap changes in kube-system. - level: Request resources: - group: "" # core API group resources: ["configmaps"] # This rule only applies to resources in the "kube-system" namespace. 
# The empty string "" can be used to select non-namespaced resources. namespaces: ["kube-system"] # Log configmap and secret changes in all other namespaces at the Metadata level. - level: Metadata resources: - group: "" # core API group resources: ["secrets", "configmaps"] # Log all other resources in core and extensions at the Request level. - level: Request resources: - group: "" # core API group - group: "extensions" # Version of group should NOT be included. # A catch-all rule to log all other requests at the Metadata level. - level: Metadata 1 The log types that are collected. The value for this field can be audit for audit logs, application for application logs, infrastructure for infrastructure logs, or a named input that has been defined for your application. 2 The name of your audit policy. 2.4.4.8. Filtering application logs at input by including the label expressions or a matching label key and values You can include the application logs based on the label expressions or a matching label key and its values by using the input selector. Procedure Add a configuration for a filter to the input spec in the ClusterLogForwarder CR. The following example shows how to configure the ClusterLogForwarder CR to include logs based on label expressions or matched label key/values: Example ClusterLogForwarder CR apiVersion: observability.openshift.io/v1 kind: ClusterLogForwarder # ... spec: serviceAccount: name: <service_account_name> inputs: - name: mylogs application: selector: matchExpressions: - key: env 1 operator: In 2 values: ["prod", "qa"] 3 - key: zone operator: NotIn values: ["east", "west"] matchLabels: 4 app: one name: app1 type: application # ... 1 Specifies the label key to match. 2 Specifies the operator. Valid values include: In , NotIn , Exists , and DoesNotExist . 3 Specifies an array of string values. If the operator value is either Exists or DoesNotExist , the value array must be empty. 4 Specifies an exact key or value mapping. Apply the ClusterLogForwarder CR by running the following command: USD oc apply -f <filename>.yaml 2.4.4.9. Configuring content filters to prune log records When the prune filter is configured, the log collector evaluates log streams according to the filters before forwarding. The collector prunes log records by removing low value fields such as pod annotations. Procedure Add a configuration for a filter to the prune spec in the ClusterLogForwarder CR. The following example shows how to configure the ClusterLogForwarder CR to prune log records based on field paths: Important If both are specified, records are pruned based on the notIn array first, which takes precedence over the in array. After records have been pruned by using the notIn array, they are then pruned by using the in array. Example ClusterLogForwarder CR apiVersion: observability.openshift.io/v1 kind: ClusterLogForwarder metadata: # ... spec: serviceAccount: name: <service_account_name> filters: - name: <filter_name> type: prune 1 prune: 2 in: [.kubernetes.annotations, .kubernetes.namespace_id] 3 notIn: [.kubernetes,.log_type,.message,."@timestamp"] 4 pipelines: - name: <pipeline_name> 5 filterRefs: ["<filter_name>"] # ... 1 Specify the type of filter. The prune filter prunes log records by configured fields. 2 Specify configuration options for applying the prune filter. The in and notIn fields are specified as arrays of dot-delimited field paths, which are paths to fields in log records. 
These paths can contain alpha-numeric characters and underscores ( a-zA-Z0-9_ ), for example, .kubernetes.namespace_name . If segments contain characters outside of this range, the segment must be in quotes, for example, .kubernetes.labels."foo.bar-bar/baz" . 3 Optional: Any fields that are specified in this array are removed from the log record. 4 Optional: Any fields that are not specified in this array are removed from the log record. 5 Specify the pipeline that the prune filter is applied to. Note The filters exempts the log_type , .log_source , and .message fields. Apply the ClusterLogForwarder CR by running the following command: USD oc apply -f <filename>.yaml 2.4.5. Filtering the audit and infrastructure log inputs by source You can define the list of audit and infrastructure sources to collect the logs by using the input selector. Procedure Add a configuration to define the audit and infrastructure sources in the ClusterLogForwarder CR. The following example shows how to configure the ClusterLogForwarder CR to define audit and infrastructure sources: Example ClusterLogForwarder CR apiVersion: observability.openshift.io/v1 kind: ClusterLogForwarder # ... spec: serviceAccount: name: <service_account_name> inputs: - name: mylogs1 type: infrastructure infrastructure: sources: 1 - node - name: mylogs2 type: audit audit: sources: 2 - kubeAPI - openshiftAPI - ovn # ... 1 Specifies the list of infrastructure sources to collect. The valid sources include: node : Journal log from the node container : Logs from the workloads deployed in the namespaces 2 Specifies the list of audit sources to collect. The valid sources include: kubeAPI : Logs from the Kubernetes API servers openshiftAPI : Logs from the OpenShift API servers auditd : Logs from a node auditd service ovn : Logs from an open virtual network service Apply the ClusterLogForwarder CR by running the following command: USD oc apply -f <filename>.yaml 2.4.6. Filtering application logs at input by including or excluding the namespace or container name You can include or exclude the application logs based on the namespace and container name by using the input selector. Procedure Add a configuration to include or exclude the namespace and container names in the ClusterLogForwarder CR. The following example shows how to configure the ClusterLogForwarder CR to include or exclude namespaces and container names: Example ClusterLogForwarder CR apiVersion: observability.openshift.io/v1 kind: ClusterLogForwarder # ... spec: serviceAccount: name: <service_account_name> inputs: - name: mylogs application: includes: - namespace: "my-project" 1 container: "my-container" 2 excludes: - container: "other-container*" 3 namespace: "other-namespace" 4 type: application # ... 1 Specifies that the logs are only collected from these namespaces. 2 Specifies that the logs are only collected from these containers. 3 Specifies the pattern of namespaces to ignore when collecting the logs. 4 Specifies the set of containers to ignore when collecting the logs. Note The excludes field takes precedence over the includes field. Apply the ClusterLogForwarder CR by running the following command: USD oc apply -f <filename>.yaml 2.5. Storing logs with LokiStack You can configure a LokiStack CR to store application, audit, and infrastructure-related logs. Loki is a horizontally scalable, highly available, multi-tenant log aggregation system offered as a GA log store for logging for Red Hat OpenShift that can be visualized with the OpenShift Observability UI. 
The Loki configuration provided by OpenShift Logging is a short-term log store designed to enable users to perform fast troubleshooting with the collected logs. For that purpose, the logging for Red Hat OpenShift configuration of Loki has short-term storage, and is optimized for very recent queries. Important For long-term storage or queries over a long time period, users should look to log stores external to their cluster. Loki sizing is only tested and supported for short term storage, for a maximum of 30 days. 2.5.1. Prerequisites You have installed the Loki Operator by using the CLI or web console. You have a serviceAccount in the same namespace in which you create the ClusterLogForwarder . The serviceAccount is assigned collect-audit-logs , collect-application-logs , and collect-infrastructure-logs cluster roles. 2.5.2. Core Setup and Configuration Role-based access controls, basic monitoring, and pod placement to deploy Loki. 2.5.3. Loki deployment sizing Sizing for Loki follows the format of 1x.<size> where the value 1x is number of instances and <size> specifies performance capabilities. The 1x.pico configuration defines a single Loki deployment with minimal resource and limit requirements, offering high availability (HA) support for all Loki components. This configuration is suited for deployments that do not require a single replication factor or auto-compaction. Disk requests are similar across size configurations, allowing customers to test different sizes to determine the best fit for their deployment needs. Important It is not possible to change the number 1x for the deployment size. Table 2.2. Loki sizing 1x.demo 1x.pico [6.1+ only] 1x.extra-small 1x.small 1x.medium Data transfer Demo use only 50GB/day 100GB/day 500GB/day 2TB/day Queries per second (QPS) Demo use only 1-25 QPS at 200ms 1-25 QPS at 200ms 25-50 QPS at 200ms 25-75 QPS at 200ms Replication factor None 2 2 2 2 Total CPU requests None 7 vCPUs 14 vCPUs 34 vCPUs 54 vCPUs Total CPU requests if using the ruler None 8 vCPUs 16 vCPUs 42 vCPUs 70 vCPUs Total memory requests None 17Gi 31Gi 67Gi 139Gi Total memory requests if using the ruler None 18Gi 35Gi 83Gi 171Gi Total disk requests 40Gi 590Gi 430Gi 430Gi 590Gi Total disk requests if using the ruler 80Gi 910Gi 750Gi 750Gi 910Gi 2.5.4. Authorizing LokiStack rules RBAC permissions Administrators can allow users to create and manage their own alerting and recording rules by binding cluster roles to usernames. Cluster roles are defined as ClusterRole objects that contain necessary role-based access control (RBAC) permissions for users. The following cluster roles for alerting and recording rules are available for LokiStack: Rule name Description alertingrules.loki.grafana.com-v1-admin Users with this role have administrative-level access to manage alerting rules. This cluster role grants permissions to create, read, update, delete, list, and watch AlertingRule resources within the loki.grafana.com/v1 API group. alertingrules.loki.grafana.com-v1-crdview Users with this role can view the definitions of Custom Resource Definitions (CRDs) related to AlertingRule resources within the loki.grafana.com/v1 API group, but do not have permissions for modifying or managing these resources. alertingrules.loki.grafana.com-v1-edit Users with this role have permission to create, update, and delete AlertingRule resources. alertingrules.loki.grafana.com-v1-view Users with this role can read AlertingRule resources within the loki.grafana.com/v1 API group. 
They can inspect configurations, labels, and annotations for existing alerting rules but cannot make any modifications to them. recordingrules.loki.grafana.com-v1-admin Users with this role have administrative-level access to manage recording rules. This cluster role grants permissions to create, read, update, delete, list, and watch RecordingRule resources within the loki.grafana.com/v1 API group. recordingrules.loki.grafana.com-v1-crdview Users with this role can view the definitions of Custom Resource Definitions (CRDs) related to RecordingRule resources within the loki.grafana.com/v1 API group, but do not have permissions for modifying or managing these resources. recordingrules.loki.grafana.com-v1-edit Users with this role have permission to create, update, and delete RecordingRule resources. recordingrules.loki.grafana.com-v1-view Users with this role can read RecordingRule resources within the loki.grafana.com/v1 API group. They can inspect configurations, labels, and annotations for existing alerting rules but cannot make any modifications to them. 2.5.4.1. Examples To apply cluster roles for a user, you must bind an existing cluster role to a specific username. Cluster roles can be cluster or namespace scoped, depending on which type of role binding you use. When a RoleBinding object is used, as when using the oc adm policy add-role-to-user command, the cluster role only applies to the specified namespace. When a ClusterRoleBinding object is used, as when using the oc adm policy add-cluster-role-to-user command, the cluster role applies to all namespaces in the cluster. The following example command gives the specified user create, read, update and delete (CRUD) permissions for alerting rules in a specific namespace in the cluster: Example cluster role binding command for alerting rule CRUD permissions in a specific namespace USD oc adm policy add-role-to-user alertingrules.loki.grafana.com-v1-admin -n <namespace> <username> The following command gives the specified user administrator permissions for alerting rules in all namespaces: Example cluster role binding command for administrator permissions USD oc adm policy add-cluster-role-to-user alertingrules.loki.grafana.com-v1-admin <username> 2.5.5. Creating a log-based alerting rule with Loki The AlertingRule CR contains a set of specifications and webhook validation definitions to declare groups of alerting rules for a single LokiStack instance. In addition, the webhook validation definition provides support for rule validation conditions: If an AlertingRule CR includes an invalid interval period, it is an invalid alerting rule If an AlertingRule CR includes an invalid for period, it is an invalid alerting rule. If an AlertingRule CR includes an invalid LogQL expr , it is an invalid alerting rule. If an AlertingRule CR includes two groups with the same name, it is an invalid alerting rule. If none of the above applies, an alerting rule is considered valid. Table 2.3. 
AlertingRule definitions Tenant type Valid namespaces for AlertingRule CRs application <your_application_namespace> audit openshift-logging infrastructure openshift-/* , kube-/\* , default Procedure Create an AlertingRule custom resource (CR): Example infrastructure AlertingRule CR apiVersion: loki.grafana.com/v1 kind: AlertingRule metadata: name: loki-operator-alerts namespace: openshift-operators-redhat 1 labels: 2 openshift.io/<label_name>: "true" spec: tenantID: "infrastructure" 3 groups: - name: LokiOperatorHighReconciliationError rules: - alert: HighPercentageError expr: | 4 sum(rate({kubernetes_namespace_name="openshift-operators-redhat", kubernetes_pod_name=~"loki-operator-controller-manager.*"} |= "error" [1m])) by (job) / sum(rate({kubernetes_namespace_name="openshift-operators-redhat", kubernetes_pod_name=~"loki-operator-controller-manager.*"}[1m])) by (job) > 0.01 for: 10s labels: severity: critical 5 annotations: summary: High Loki Operator Reconciliation Errors 6 description: High Loki Operator Reconciliation Errors 7 1 The namespace where this AlertingRule CR is created must have a label matching the LokiStack spec.rules.namespaceSelector definition. 2 The labels block must match the LokiStack spec.rules.selector definition. 3 AlertingRule CRs for infrastructure tenants are only supported in the openshift-* , kube-\* , or default namespaces. 4 The value for kubernetes_namespace_name: must match the value for metadata.namespace . 5 The value of this mandatory field must be critical , warning , or info . 6 This field is mandatory. 7 This field is mandatory. Example application AlertingRule CR apiVersion: loki.grafana.com/v1 kind: AlertingRule metadata: name: app-user-workload namespace: app-ns 1 labels: 2 openshift.io/<label_name>: "true" spec: tenantID: "application" groups: - name: AppUserWorkloadHighError rules: - alert: expr: | 3 sum(rate({kubernetes_namespace_name="app-ns", kubernetes_pod_name=~"podName.*"} |= "error" [1m])) by (job) for: 10s labels: severity: critical 4 annotations: summary: 5 description: 6 1 The namespace where this AlertingRule CR is created must have a label matching the LokiStack spec.rules.namespaceSelector definition. 2 The labels block must match the LokiStack spec.rules.selector definition. 3 Value for kubernetes_namespace_name: must match the value for metadata.namespace . 4 The value of this mandatory field must be critical , warning , or info . 5 The value of this mandatory field is a summary of the rule. 6 The value of this mandatory field is a detailed description of the rule. Apply the AlertingRule CR: USD oc apply -f <filename>.yaml 2.5.6. Configuring Loki to tolerate memberlist creation failure In an OpenShift Container Platform cluster, administrators generally use a non-private IP network range. As a result, the LokiStack memberlist configuration fails because, by default, it only uses private IP networks. As an administrator, you can select the pod network for the memberlist configuration. You can modify the LokiStack custom resource (CR) to use the podIP address in the hashRing spec. To configure the LokiStack CR, use the following command: USD oc patch LokiStack logging-loki -n openshift-logging --type=merge -p '{"spec": {"hashRing":{"memberlist":{"instanceAddrType":"podIP"},"type":"memberlist"}}}' Example LokiStack to include podIP apiVersion: loki.grafana.com/v1 kind: LokiStack metadata: name: logging-loki namespace: openshift-logging spec: # ... hashRing: type: memberlist memberlist: instanceAddrType: podIP # ... 2.5.7. 
Enabling stream-based retention with Loki You can configure retention policies based on log streams. Rules for these may be set globally, per-tenant, or both. If you configure both, tenant rules apply before global rules. Important If there is no retention period defined on the s3 bucket or in the LokiStack custom resource (CR), then the logs are not pruned and they stay in the s3 bucket forever, which might fill up the s3 storage. Note Schema v13 is recommended. Procedure Create a LokiStack CR: Enable stream-based retention globally as shown in the following example: Example global stream-based retention for AWS apiVersion: loki.grafana.com/v1 kind: LokiStack metadata: name: logging-loki namespace: openshift-logging spec: limits: global: 1 retention: 2 days: 20 streams: - days: 4 priority: 1 selector: '{kubernetes_namespace_name=~"test.+"}' 3 - days: 1 priority: 1 selector: '{log_type="infrastructure"}' managementState: Managed replicationFactor: 1 size: 1x.small storage: schemas: - effectiveDate: "2020-10-11" version: v13 secret: name: logging-loki-s3 type: aws storageClassName: gp3-csi tenants: mode: openshift-logging 1 Sets retention policy for all log streams. Note: This field does not impact the retention period for stored logs in object storage. 2 Retention is enabled in the cluster when this block is added to the CR. 3 Contains the LogQL query used to define the log stream.spec: limits: Enable stream-based retention per-tenant basis as shown in the following example: Example per-tenant stream-based retention for AWS apiVersion: loki.grafana.com/v1 kind: LokiStack metadata: name: logging-loki namespace: openshift-logging spec: limits: global: retention: days: 20 tenants: 1 application: retention: days: 1 streams: - days: 4 selector: '{kubernetes_namespace_name=~"test.+"}' 2 infrastructure: retention: days: 5 streams: - days: 1 selector: '{kubernetes_namespace_name=~"openshift-cluster.+"}' managementState: Managed replicationFactor: 1 size: 1x.small storage: schemas: - effectiveDate: "2020-10-11" version: v13 secret: name: logging-loki-s3 type: aws storageClassName: gp3-csi tenants: mode: openshift-logging 1 Sets retention policy by tenant. Valid tenant types are application , audit , and infrastructure . 2 Contains the LogQL query used to define the log stream. Apply the LokiStack CR: USD oc apply -f <filename>.yaml 2.5.8. Loki pod placement You can control which nodes the Loki pods run on, and prevent other workloads from using those nodes, by using tolerations or node selectors on the pods. You can apply tolerations to the log store pods with the LokiStack custom resource (CR) and apply taints to a node with the node specification. A taint on a node is a key:value pair that instructs the node to repel all pods that do not allow the taint. Using a specific key:value pair that is not on other pods ensures that only the log store pods can run on that node. Example LokiStack with node selectors apiVersion: loki.grafana.com/v1 kind: LokiStack metadata: name: logging-loki namespace: openshift-logging spec: # ... 
template: compactor: 1 nodeSelector: node-role.kubernetes.io/infra: "" 2 distributor: nodeSelector: node-role.kubernetes.io/infra: "" gateway: nodeSelector: node-role.kubernetes.io/infra: "" indexGateway: nodeSelector: node-role.kubernetes.io/infra: "" ingester: nodeSelector: node-role.kubernetes.io/infra: "" querier: nodeSelector: node-role.kubernetes.io/infra: "" queryFrontend: nodeSelector: node-role.kubernetes.io/infra: "" ruler: nodeSelector: node-role.kubernetes.io/infra: "" # ... 1 Specifies the component pod type that applies to the node selector. 2 Specifies the pods that are moved to nodes containing the defined label. Example LokiStack CR with node selectors and tolerations apiVersion: loki.grafana.com/v1 kind: LokiStack metadata: name: logging-loki namespace: openshift-logging spec: # ... template: compactor: nodeSelector: node-role.kubernetes.io/infra: "" tolerations: - effect: NoSchedule key: node-role.kubernetes.io/infra value: reserved - effect: NoExecute key: node-role.kubernetes.io/infra value: reserved distributor: nodeSelector: node-role.kubernetes.io/infra: "" tolerations: - effect: NoSchedule key: node-role.kubernetes.io/infra value: reserved - effect: NoExecute key: node-role.kubernetes.io/infra value: reserved indexGateway: nodeSelector: node-role.kubernetes.io/infra: "" tolerations: - effect: NoSchedule key: node-role.kubernetes.io/infra value: reserved - effect: NoExecute key: node-role.kubernetes.io/infra value: reserved ingester: nodeSelector: node-role.kubernetes.io/infra: "" tolerations: - effect: NoSchedule key: node-role.kubernetes.io/infra value: reserved - effect: NoExecute key: node-role.kubernetes.io/infra value: reserved querier: nodeSelector: node-role.kubernetes.io/infra: "" tolerations: - effect: NoSchedule key: node-role.kubernetes.io/infra value: reserved - effect: NoExecute key: node-role.kubernetes.io/infra value: reserved queryFrontend: nodeSelector: node-role.kubernetes.io/infra: "" tolerations: - effect: NoSchedule key: node-role.kubernetes.io/infra value: reserved - effect: NoExecute key: node-role.kubernetes.io/infra value: reserved ruler: nodeSelector: node-role.kubernetes.io/infra: "" tolerations: - effect: NoSchedule key: node-role.kubernetes.io/infra value: reserved - effect: NoExecute key: node-role.kubernetes.io/infra value: reserved gateway: nodeSelector: node-role.kubernetes.io/infra: "" tolerations: - effect: NoSchedule key: node-role.kubernetes.io/infra value: reserved - effect: NoExecute key: node-role.kubernetes.io/infra value: reserved # ... To configure the nodeSelector and tolerations fields of the LokiStack (CR), you can use the oc explain command to view the description and fields for a particular resource: USD oc explain lokistack.spec.template Example output KIND: LokiStack VERSION: loki.grafana.com/v1 RESOURCE: template <Object> DESCRIPTION: Template defines the resource/limits/tolerations/nodeselectors per component FIELDS: compactor <Object> Compactor defines the compaction component spec. distributor <Object> Distributor defines the distributor component spec. ... For more detailed information, you can add a specific field: USD oc explain lokistack.spec.template.compactor Example output KIND: LokiStack VERSION: loki.grafana.com/v1 RESOURCE: compactor <Object> DESCRIPTION: Compactor defines the compaction component spec. FIELDS: nodeSelector <map[string]string> NodeSelector defines the labels required by a node to schedule the component onto it. ... 2.5.8.1. 
Enhanced Reliability and Performance Configurations to ensure Loki's reliability and efficiency in production. 2.5.8.2. Enabling authentication to cloud-based log stores using short-lived tokens Workload identity federation enables authentication to cloud-based log stores using short-lived tokens. Procedure Use one of the following options to enable authentication: If you use the OpenShift Container Platform web console to install the Loki Operator, clusters that use short-lived tokens are automatically detected. You are prompted to create roles and supply the data required for the Loki Operator to create a CredentialsRequest object, which populates a secret. If you use the OpenShift CLI ( oc ) to install the Loki Operator, you must manually create a Subscription object using the appropriate template for your storage provider, as shown in the following examples. This authentication strategy is only supported for the storage providers indicated. Example Azure sample subscription apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: loki-operator namespace: openshift-operators-redhat spec: channel: "stable-6.0" installPlanApproval: Manual name: loki-operator source: redhat-operators sourceNamespace: openshift-marketplace config: env: - name: CLIENTID value: <your_client_id> - name: TENANTID value: <your_tenant_id> - name: SUBSCRIPTIONID value: <your_subscription_id> - name: REGION value: <your_region> Example AWS sample subscription apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: loki-operator namespace: openshift-operators-redhat spec: channel: "stable-6.0" installPlanApproval: Manual name: loki-operator source: redhat-operators sourceNamespace: openshift-marketplace config: env: - name: ROLEARN value: <role_ARN> 2.5.8.3. Configuring Loki to tolerate node failure The Loki Operator supports setting pod anti-affinity rules to request that pods of the same component are scheduled on different available nodes in the cluster. Affinity is a property of pods that controls the nodes on which they prefer to be scheduled. Anti-affinity is a property of pods that prevents a pod from being scheduled on a node. In OpenShift Container Platform, pod affinity and pod anti-affinity allow you to constrain which nodes your pod is eligible to be scheduled on based on the key-value labels on other pods. The Operator sets default, preferred podAntiAffinity rules for all Loki components, which includes the compactor , distributor , gateway , indexGateway , ingester , querier , queryFrontend , and ruler components. You can override the preferred podAntiAffinity settings for Loki components by configuring required settings in the requiredDuringSchedulingIgnoredDuringExecution field: Example user settings for the ingester component apiVersion: loki.grafana.com/v1 kind: LokiStack metadata: name: logging-loki namespace: openshift-logging spec: # ... template: ingester: podAntiAffinity: # ... requiredDuringSchedulingIgnoredDuringExecution: 1 - labelSelector: matchLabels: 2 app.kubernetes.io/component: ingester topologyKey: kubernetes.io/hostname # ... 1 The stanza to define a required rule. 2 The key-value pair (label) that must be matched to apply the rule. 2.5.8.4. LokiStack behavior during cluster restarts When an OpenShift Container Platform cluster is restarted, LokiStack ingestion and the query path continue to operate within the available CPU and memory resources available for the node. 
This means that there is no downtime for the LokiStack during OpenShift Container Platform cluster updates. This behavior is achieved by using PodDisruptionBudget resources. The Loki Operator provisions PodDisruptionBudget resources for Loki, which determine the minimum number of pods that must be available per component to ensure normal operations under certain conditions. 2.5.8.5. Advanced Deployment and Scalability Specialized configurations for high availability, scalability, and error handling. 2.5.8.6. Zone aware data replication The Loki Operator offers support for zone-aware data replication through pod topology spread constraints. Enabling this feature enhances reliability and safeguards against log loss in the event of a single zone failure. When configuring the deployment size as 1x.extra-small , 1x.small , or 1x.medium , the replication.factor field is automatically set to 2. To ensure proper replication, you need to have at least as many availability zones as the replication factor specifies. While it is possible to have more availability zones than the replication factor, having fewer zones can lead to write failures. Each zone should host an equal number of instances for optimal operation. Example LokiStack CR with zone replication enabled apiVersion: loki.grafana.com/v1 kind: LokiStack metadata: name: logging-loki namespace: openshift-logging spec: replicationFactor: 2 1 replication: factor: 2 2 zones: - maxSkew: 1 3 topologyKey: topology.kubernetes.io/zone 4 1 Deprecated field, values entered are overwritten by replication.factor . 2 This value is automatically set when deployment size is selected at setup. 3 The maximum difference in number of pods between any two topology domains. The default is 1, and you cannot specify a value of 0. 4 Defines zones in the form of a topology key that corresponds to a node label. 2.5.8.7. Recovering Loki pods from failed zones In OpenShift Container Platform a zone failure happens when specific availability zone resources become inaccessible. Availability zones are isolated areas within a cloud provider's data center, aimed at enhancing redundancy and fault tolerance. If your OpenShift Container Platform cluster is not configured to handle this, a zone failure can lead to service or data loss. Loki pods are part of a StatefulSet , and they come with Persistent Volume Claims (PVCs) provisioned by a StorageClass object. Each Loki pod and its PVCs reside in the same zone. When a zone failure occurs in a cluster, the StatefulSet controller automatically attempts to recover the affected pods in the failed zone. Warning The following procedure will delete the PVCs in the failed zone, and all data contained therein. To avoid complete data loss the replication factor field of the LokiStack CR should always be set to a value greater than 1 to ensure that Loki is replicating. Prerequisites Verify your LokiStack CR has a replication factor greater than 1. Zone failure detected by the control plane, and nodes in the failed zone are marked by cloud provider integration. The StatefulSet controller automatically attempts to reschedule pods in a failed zone. Because the associated PVCs are also in the failed zone, automatic rescheduling to a different zone does not work. You must manually delete the PVCs in the failed zone to allow successful re-creation of the stateful Loki Pod and its provisioned PVC in the new zone. 
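Before starting the recovery procedure, it can help to confirm which nodes belong to the failed zone. The following command is a sketch; it assumes your nodes carry the standard topology.kubernetes.io/zone label, although your cloud provider integration might use additional labels or taints to mark the failure:

Example command

oc get nodes -L topology.kubernetes.io/zone

The output lists each node together with its zone, so you can identify the zone whose Loki pods and PVCs need to be recovered.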
Procedure List the pods in Pending status by running the following command: USD oc get pods --field-selector status.phase==Pending -n openshift-logging Example oc get pods output NAME READY STATUS RESTARTS AGE 1 logging-loki-index-gateway-1 0/1 Pending 0 17m logging-loki-ingester-1 0/1 Pending 0 16m logging-loki-ruler-1 0/1 Pending 0 16m 1 These pods are in Pending status because their corresponding PVCs are in the failed zone. List the PVCs in Pending status by running the following command: USD oc get pvc -o=json -n openshift-logging | jq '.items[] | select(.status.phase == "Pending") | .metadata.name' -r Example oc get pvc output storage-logging-loki-index-gateway-1 storage-logging-loki-ingester-1 wal-logging-loki-ingester-1 storage-logging-loki-ruler-1 wal-logging-loki-ruler-1 Delete the PVC(s) for a pod by running the following command: USD oc delete pvc <pvc_name> -n openshift-logging Delete the pod(s) by running the following command: USD oc delete pod <pod_name> -n openshift-logging Once these objects have been successfully deleted, they should automatically be rescheduled in an available zone. 2.5.8.7.1. Troubleshooting PVC in a terminating state The PVCs might hang in the terminating state without being deleted, if PVC metadata finalizers are set to kubernetes.io/pv-protection . Removing the finalizers should allow the PVCs to delete successfully. Remove the finalizer for each PVC by running the command below, then retry deletion. USD oc patch pvc <pvc_name> -p '{"metadata":{"finalizers":null}}' -n openshift-logging 2.5.8.8. Troubleshooting Loki rate limit errors If the Log Forwarder API forwards a large block of messages that exceeds the rate limit to Loki, Loki generates rate limit ( 429 ) errors. These errors can occur during normal operation. For example, when adding the logging to a cluster that already has some logs, rate limit errors might occur while the logging tries to ingest all of the existing log entries. In this case, if the rate of addition of new logs is less than the total rate limit, the historical data is eventually ingested, and the rate limit errors are resolved without requiring user intervention. In cases where the rate limit errors continue to occur, you can fix the issue by modifying the LokiStack custom resource (CR). Important The LokiStack CR is not available on Grafana-hosted Loki. This topic does not apply to Grafana-hosted Loki servers. Conditions The Log Forwarder API is configured to forward logs to Loki. Your system sends a block of messages that is larger than 2 MB to Loki. For example: "values":[["1630410392689800468","{\"kind\":\"Event\",\"apiVersion\":\ ....... ...... ...... ...... \"received_at\":\"2021-08-31T11:46:32.800278+00:00\",\"version\":\"1.7.4 1.6.0\"}},\"@timestamp\":\"2021-08-31T11:46:32.799692+00:00\",\"viaq_index_name\":\"audit-write\",\"viaq_msg_id\":\"MzFjYjJkZjItNjY0MC00YWU4LWIwMTEtNGNmM2E5ZmViMGU4\",\"log_type\":\"audit\"}"]]}]} After you enter oc logs -n openshift-logging -l component=collector , the collector logs in your cluster show a line containing one of the following error messages: 429 Too Many Requests Ingestion rate limit exceeded Example Vector error message 2023-08-25T16:08:49.301780Z WARN sink{component_kind="sink" component_id=default_loki_infra component_type=loki component_name=default_loki_infra}: vector::sinks::util::retries: Retrying after error. 
error=Server responded with an error: 429 Too Many Requests internal_log_rate_limit=true Example Fluentd error message 2023-08-30 14:52:15 +0000 [warn]: [default_loki_infra] failed to flush the buffer. retry_times=2 next_retry_time=2023-08-30 14:52:19 +0000 chunk="604251225bf5378ed1567231a1c03b8b" error_class=Fluent::Plugin::LokiOutput::LogPostError error="429 Too Many Requests Ingestion rate limit exceeded for user infrastructure (limit: 4194304 bytes/sec) while attempting to ingest '4082' lines totaling '7820025' bytes, reduce log volume or contact your Loki administrator to see if the limit can be increased\n" The error is also visible on the receiving end. For example, in the LokiStack ingester pod: Example Loki ingester error message level=warn ts=2023-08-30T14:57:34.155592243Z caller=grpc_logging.go:43 duration=1.434942ms method=/logproto.Pusher/Push err="rpc error: code = Code(429) desc = entry with timestamp 2023-08-30 14:57:32.012778399 +0000 UTC ignored, reason: 'Per stream rate limit exceeded (limit: 3MB/sec) while attempting to ingest for stream Procedure Update the ingestionBurstSize and ingestionRate fields in the LokiStack CR: apiVersion: loki.grafana.com/v1 kind: LokiStack metadata: name: logging-loki namespace: openshift-logging spec: limits: global: ingestion: ingestionBurstSize: 16 1 ingestionRate: 8 2 # ... 1 The ingestionBurstSize field defines the maximum local rate-limited sample size per distributor replica in MB. This value is a hard limit. Set this value to at least the maximum logs size expected in a single push request. Single requests that are larger than the ingestionBurstSize value are not permitted. 2 The ingestionRate field is a soft limit on the maximum amount of ingested samples per second in MB. Rate limit errors occur if the rate of logs exceeds the limit, but the collector retries sending the logs. As long as the total average is lower than the limit, the system recovers and errors are resolved without user intervention. 2.6. Visualization for logging Visualization for logging is provided by deploying the Logging UI Plugin of the Cluster Observability Operator , which requires Operator installation. Important Until the approaching General Availability (GA) release of the Cluster Observability Operator (COO), which is currently in Technology Preview (TP), Red Hat provides support to customers who are using Logging 6.0 or later with the COO for its Logging UI Plugin on OpenShift Container Platform 4.14 or later. This support exception is temporary as the COO includes several independent features, some of which are still TP features, but the Logging UI Plugin is ready for GA.
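As a minimal sketch, after the Cluster Observability Operator is installed, you can enable the Logging UI Plugin by creating a UIPlugin resource similar to the following. The LokiStack name logging-loki is an example and must match the LokiStack instance deployed in your cluster:

Example UIPlugin resource

apiVersion: observability.openshift.io/v1alpha1
kind: UIPlugin
metadata:
  name: logging
spec:
  type: Logging
  logging:
    lokiStack:
      name: logging-loki

Apply the resource by running oc apply -f <filename>.yaml ; the logging views then become available in the OpenShift Container Platform web console.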
"apiVersion: observability.openshift.io/v1 kind: ClusterLogForwarder metadata: name: <name> spec: outputs: - name: <output_name> type: <output_type> <output_type>: tuning: deliveryMode: AtMostOnce",
"oc create secret generic logging-loki-s3 --from-literal=bucketnames=\"<bucket_name>\" --from-literal=endpoint=\"<aws_bucket_endpoint>\" --from-literal=access_key_id=\"<aws_access_key_id>\" --from-literal=access_key_secret=\"<aws_access_key_secret>\" --from-literal=region=\"<aws_region_of_your_bucket>\" -n openshift-logging",
"apiVersion: loki.grafana.com/v1 kind: LokiStack metadata: name: logging-loki namespace: openshift-logging spec: managementState: Managed size: 1x.extra-small storage: schemas: - effectiveDate: '2022-06-01' version: v13 secret: name: logging-loki-s3 type: s3 storageClassName: gp3-csi tenants: mode: openshift-logging",
"oc create sa collector -n openshift-logging",
"oc adm policy add-cluster-role-to-user logging-collector-logs-writer -z collector -n openshift-logging",
"apiVersion: observability.openshift.io/v1alpha1 kind: UIPlugin metadata: name: logging spec: type: Logging logging: lokiStack: name: logging-loki",
"oc adm policy add-cluster-role-to-user collect-application-logs -z collector -n openshift-logging oc adm policy add-cluster-role-to-user collect-audit-logs -z collector -n openshift-logging oc adm policy add-cluster-role-to-user collect-infrastructure-logs -z collector -n openshift-logging",
"apiVersion: observability.openshift.io/v1 kind: ClusterLogForwarder metadata: name: collector namespace: openshift-logging spec: serviceAccount: name: collector outputs: - name: default-lokistack type: lokiStack lokiStack: target: name: logging-loki namespace: openshift-logging authentication: token: from: serviceAccount tls: ca: key: service-ca.crt configMapName: openshift-service-ca.crt pipelines: - name: default-logstore inputRefs: - application - infrastructure outputRefs: - default-lokistack",
"oc explain clusterlogforwarders.observability.openshift.io.spec.outputs",
"oc explain lokistacks.loki.grafana.com oc explain lokistacks.loki.grafana.com.spec oc explain lokistacks.loki.grafana.com.spec.storage oc explain lokistacks.loki.grafana.com.spec.storage.schemas",
"oc explain lokistacks.loki.grafana.com.spec.size",
"oc explain lokistacks.spec.template.distributor.replicas",
"GROUP: loki.grafana.com KIND: LokiStack VERSION: v1 FIELD: replicas <integer> DESCRIPTION: Replicas defines the number of replica pods of the component.",
"oc -n openshift-logging patch clusterlogging/instance -p '{\"spec\":{\"managementState\": \"Unmanaged\"}}' --type=merge",
"oc -n openshift-logging patch elasticsearch/elasticsearch -p '{\"metadata\":{\"ownerReferences\": []}}' --type=merge",
"oc -n openshift-logging patch kibana/kibana -p '{\"metadata\":{\"ownerReferences\": []}}' --type=merge",
"oc -n openshift-logging patch clusterlogging/instance -p '{\"spec\":{\"managementState\": \"Managed\"}}' --type=merge",
"apiVersion: \"logging.openshift.io/v1\" kind: \"ClusterLogging\" spec: managementState: \"Managed\" collection: resources: limits: {} requests: {} nodeSelector: {} tolerations: {}",
"apiVersion: \"observability.openshift.io/v1\" kind: ClusterLogForwarder spec: managementState: Managed collector: resources: limits: {} requests: {} nodeSelector: {} tolerations: {}",
"apiVersion: \"logging.openshift.io/v1\" kind: ClusterLogForwarder spec: inputs: - name: application-logs type: application application: namespaces: - foo - bar includes: - namespace: my-important container: main excludes: - container: too-verbose",
"apiVersion: \"observability.openshift.io/v1\" kind: ClusterLogForwarder spec: inputs: - name: application-logs type: application application: includes: - namespace: foo - namespace: bar - namespace: my-important container: main excludes: - container: too-verbose",
"apiVersion: \"logging.openshift.io/v1\" kind: ClusterLogForwarder spec: inputs: - name: an-http receiver: http: port: 8443 format: kubeAPIAudit - name: a-syslog receiver: type: syslog syslog: port: 9442",
"apiVersion: \"observability.openshift.io/v1\" kind: ClusterLogForwarder spec: inputs: - name: an-http type: receiver receiver: type: http port: 8443 http: format: kubeAPIAudit - name: a-syslog type: receiver receiver: type: syslog port: 9442",
"apiVersion: logging.openshift.io/v1 kind: ClusterLogging metadata: name: instance namespace: openshift-logging spec: logStore: type: elasticsearch",
"apiVersion: observability.openshift.io/v1 kind: ClusterLogForwarder metadata: name: instance namespace: openshift-logging spec: serviceAccount: name: <service_account_name> managementState: Managed outputs: - name: audit-elasticsearch type: elasticsearch elasticsearch: url: https://elasticsearch:9200 version: 6 index: audit-write tls: ca: key: ca-bundle.crt secretName: collector certificate: key: tls.crt secretName: collector key: key: tls.key secretName: collector - name: app-elasticsearch type: elasticsearch elasticsearch: url: https://elasticsearch:9200 version: 6 index: app-write tls: ca: key: ca-bundle.crt secretName: collector certificate: key: tls.crt secretName: collector key: key: tls.key secretName: collector - name: infra-elasticsearch type: elasticsearch elasticsearch: url: https://elasticsearch:9200 version: 6 index: infra-write tls: ca: key: ca-bundle.crt secretName: collector certificate: key: tls.crt secretName: collector key: key: tls.key secretName: collector pipelines: - name: app inputRefs: - application outputRefs: - app-elasticsearch - name: audit inputRefs: - audit outputRefs: - audit-elasticsearch - name: infra inputRefs: - infrastructure outputRefs: - infra-elasticsearch",
"apiVersion: logging.openshift.io/v1 kind: ClusterLogging metadata: name: instance namespace: openshift-logging spec: logStore: type: lokistack lokistack: name: lokistack-dev",
"apiVersion: observability.openshift.io/v1 kind: ClusterLogForwarder metadata: name: instance namespace: openshift-logging spec: outputs: - name: default-lokistack type: lokiStack lokiStack: target: name: lokistack-dev namespace: openshift-logging authentication: token: from: serviceAccount tls: ca: key: service-ca.crt configMapName: openshift-service-ca.crt pipelines: - outputRefs: - default-lokistack - inputRefs: - application - infrastructure",
"apiVersion: logging.openshift.io/v1 kind: ClusterLogForwarder spec: pipelines: - name: application-logs parse: json labels: foo: bar detectMultilineErrors: true",
"apiVersion: observability.openshift.io/v1 kind: ClusterLogForwarder spec: filters: - name: detectexception type: detectMultilineException - name: parse-json type: parse - name: labels type: openshiftLabels openshiftLabels: foo: bar pipelines: - name: application-logs filterRefs: - detectexception - labels - parse-json",
"apiVersion: observability.openshift.io/v1 kind: ClusterLogForwarder status: conditions: - lastTransitionTime: \"2024-09-13T03:28:44Z\" message: 'permitted to collect log types: [application]' reason: ClusterRolesExist status: \"True\" type: observability.openshift.io/Authorized - lastTransitionTime: \"2024-09-13T12:16:45Z\" message: \"\" reason: ValidationSuccess status: \"True\" type: observability.openshift.io/Valid - lastTransitionTime: \"2024-09-13T12:16:45Z\" message: \"\" reason: ReconciliationComplete status: \"True\" type: Ready filterConditions: - lastTransitionTime: \"2024-09-13T13:02:59Z\" message: filter \"detectexception\" is valid reason: ValidationSuccess status: \"True\" type: observability.openshift.io/ValidFilter-detectexception - lastTransitionTime: \"2024-09-13T13:02:59Z\" message: filter \"parse-json\" is valid reason: ValidationSuccess status: \"True\" type: observability.openshift.io/ValidFilter-parse-json inputConditions: - lastTransitionTime: \"2024-09-13T12:23:03Z\" message: input \"application1\" is valid reason: ValidationSuccess status: \"True\" type: observability.openshift.io/ValidInput-application1 outputConditions: - lastTransitionTime: \"2024-09-13T13:02:59Z\" message: output \"default-lokistack-application1\" is valid reason: ValidationSuccess status: \"True\" type: observability.openshift.io/ValidOutput-default-lokistack-application1 pipelineConditions: - lastTransitionTime: \"2024-09-13T03:28:44Z\" message: pipeline \"default-before\" is valid reason: ValidationSuccess status: \"True\" type: observability.openshift.io/ValidPipeline-default-before",
"oc adm policy add-cluster-role-to-user collect-application-logs system:serviceaccount:openshift-logging:logcollector",
"oc adm policy add-cluster-role-to-user collect-infrastructure-logs system:serviceaccount:openshift-logging:logcollector",
"oc adm policy add-cluster-role-to-user collect-audit-logs system:serviceaccount:openshift-logging:logcollector",
"oc adm policy add-cluster-role-to-user <cluster_role_name> system:serviceaccount:<namespace_name>:<service_account_name>",
"apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRoleBinding metadata: name: manager-rolebinding roleRef: 1 apiGroup: rbac.authorization.k8s.io 2 kind: ClusterRole 3 name: cluster-logging-operator 4 subjects: 5 - kind: ServiceAccount 6 name: cluster-logging-operator 7 namespace: openshift-logging 8",
"apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRole metadata: name: cluster-logging-write-application-logs rules: 1 - apiGroups: 2 - loki.grafana.com 3 resources: 4 - application 5 resourceNames: 6 - logs 7 verbs: 8 - create 9 Annotations <1> rules: Specifies the permissions granted by this ClusterRole. <2> apiGroups: Refers to the API group loki.grafana.com, which relates to the Loki logging system. <3> loki.grafana.com: The API group for managing Loki-related resources. <4> resources: The resource type that the ClusterRole grants permission to interact with. <5> application: Refers to the application resources within the Loki logging system. <6> resourceNames: Specifies the names of resources that this role can manage. <7> logs: Refers to the log resources that can be created. <8> verbs: The actions allowed on the resources. <9> create: Grants permission to create new logs in the Loki system.",
"apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRole metadata: name: cluster-logging-write-audit-logs rules: 1 - apiGroups: 2 - loki.grafana.com 3 resources: 4 - audit 5 resourceNames: 6 - logs 7 verbs: 8 - create 9",
"apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRole metadata: name: cluster-logging-write-infrastructure-logs rules: 1 - apiGroups: 2 - loki.grafana.com 3 resources: 4 - infrastructure 5 resourceNames: 6 - logs 7 verbs: 8 - create 9",
"apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRole metadata: name: clusterlogforwarder-editor-role rules: 1 - apiGroups: 2 - observability.openshift.io 3 resources: 4 - clusterlogforwarders 5 verbs: 6 - create 7 - delete 8 - get 9 - list 10 - patch 11 - update 12 - watch 13",
"apiVersion: observability.openshift.io/v1 kind: ClusterLogForwarder metadata: name: collector annotations: observability.openshift.io/log-level: debug",
"java.lang.NullPointerException: Cannot invoke \"String.toString()\" because \"<param1>\" is null at testjava.Main.handle(Main.java:47) at testjava.Main.printMe(Main.java:19) at testjava.Main.main(Main.java:10)",
"apiVersion: \"observability.openshift.io/v1\" kind: ClusterLogForwarder metadata: name: <log_forwarder_name> namespace: <log_forwarder_namespace> spec: serviceAccount: name: <service_account_name> filters: - name: <name> type: detectMultilineException pipelines: - inputRefs: - <input-name> name: <pipeline-name> filterRefs: - <filter-name> outputRefs: - <output-name>",
"apiVersion: observability.openshift.io/v1 kind: ClusterLogForwarder metadata: spec: serviceAccount: name: <service_account_name> filters: - name: <filter_name> type: drop 1 drop: 2 - test: 3 - field: .kubernetes.labels.\"foo-bar/baz\" 4 matches: .+ 5 - field: .kubernetes.pod_name notMatches: \"my-pod\" 6 pipelines: - name: <pipeline_name> 7 filterRefs: [\"<filter_name>\"]",
"oc apply -f <filename>.yaml",
"apiVersion: observability.openshift.io/v1 kind: ClusterLogForwarder metadata: spec: serviceAccount: name: <service_account_name> filters: - name: important type: drop drop: - test: - field: .message notMatches: \"(?i)critical|error\" - field: .level matches: \"info|warning\"",
"apiVersion: observability.openshift.io/v1 kind: ClusterLogForwarder metadata: spec: serviceAccount: name: <service_account_name> filters: - name: important type: drop drop: - test: - field: .kubernetes.namespace_name matches: \"^open\" - test: - field: .log_type matches: \"application\" - field: .kubernetes.pod_name notMatches: \"my-pod\"",
"apiVersion: observability.openshift.io/v1 kind: ClusterLogForwarder metadata: name: <log_forwarder_name> namespace: <log_forwarder_namespace> spec: serviceAccount: name: <service_account_name> pipelines: - name: my-pipeline inputRefs: audit 1 filterRefs: my-policy 2 filters: - name: my-policy type: kubeAPIAudit kubeAPIAudit: # Don't generate audit events for all requests in RequestReceived stage. omitStages: - \"RequestReceived\" rules: # Log pod changes at RequestResponse level - level: RequestResponse resources: - group: \"\" resources: [\"pods\"] # Log \"pods/log\", \"pods/status\" at Metadata level - level: Metadata resources: - group: \"\" resources: [\"pods/log\", \"pods/status\"] # Don't log requests to a configmap called \"controller-leader\" - level: None resources: - group: \"\" resources: [\"configmaps\"] resourceNames: [\"controller-leader\"] # Don't log watch requests by the \"system:kube-proxy\" on endpoints or services - level: None users: [\"system:kube-proxy\"] verbs: [\"watch\"] resources: - group: \"\" # core API group resources: [\"endpoints\", \"services\"] # Don't log authenticated requests to certain non-resource URL paths. - level: None userGroups: [\"system:authenticated\"] nonResourceURLs: - \"/api*\" # Wildcard matching. - \"/version\" # Log the request body of configmap changes in kube-system. - level: Request resources: - group: \"\" # core API group resources: [\"configmaps\"] # This rule only applies to resources in the \"kube-system\" namespace. # The empty string \"\" can be used to select non-namespaced resources. namespaces: [\"kube-system\"] # Log configmap and secret changes in all other namespaces at the Metadata level. - level: Metadata resources: - group: \"\" # core API group resources: [\"secrets\", \"configmaps\"] # Log all other resources in core and extensions at the Request level. - level: Request resources: - group: \"\" # core API group - group: \"extensions\" # Version of group should NOT be included. # A catch-all rule to log all other requests at the Metadata level. - level: Metadata",
"apiVersion: observability.openshift.io/v1 kind: ClusterLogForwarder spec: serviceAccount: name: <service_account_name> inputs: - name: mylogs application: selector: matchExpressions: - key: env 1 operator: In 2 values: [\"prod\", \"qa\"] 3 - key: zone operator: NotIn values: [\"east\", \"west\"] matchLabels: 4 app: one name: app1 type: application",
"oc apply -f <filename>.yaml",
"apiVersion: observability.openshift.io/v1 kind: ClusterLogForwarder metadata: spec: serviceAccount: name: <service_account_name> filters: - name: <filter_name> type: prune 1 prune: 2 in: [.kubernetes.annotations, .kubernetes.namespace_id] 3 notIn: [.kubernetes,.log_type,.message,.\"@timestamp\"] 4 pipelines: - name: <pipeline_name> 5 filterRefs: [\"<filter_name>\"]",
"oc apply -f <filename>.yaml",
"apiVersion: observability.openshift.io/v1 kind: ClusterLogForwarder spec: serviceAccount: name: <service_account_name> inputs: - name: mylogs1 type: infrastructure infrastructure: sources: 1 - node - name: mylogs2 type: audit audit: sources: 2 - kubeAPI - openshiftAPI - ovn",
"oc apply -f <filename>.yaml",
"apiVersion: observability.openshift.io/v1 kind: ClusterLogForwarder spec: serviceAccount: name: <service_account_name> inputs: - name: mylogs application: includes: - namespace: \"my-project\" 1 container: \"my-container\" 2 excludes: - container: \"other-container*\" 3 namespace: \"other-namespace\" 4 type: application",
"oc apply -f <filename>.yaml",
"oc adm policy add-role-to-user alertingrules.loki.grafana.com-v1-admin -n <namespace> <username>",
"oc adm policy add-cluster-role-to-user alertingrules.loki.grafana.com-v1-admin <username>",
"apiVersion: loki.grafana.com/v1 kind: AlertingRule metadata: name: loki-operator-alerts namespace: openshift-operators-redhat 1 labels: 2 openshift.io/<label_name>: \"true\" spec: tenantID: \"infrastructure\" 3 groups: - name: LokiOperatorHighReconciliationError rules: - alert: HighPercentageError expr: | 4 sum(rate({kubernetes_namespace_name=\"openshift-operators-redhat\", kubernetes_pod_name=~\"loki-operator-controller-manager.*\"} |= \"error\" [1m])) by (job) / sum(rate({kubernetes_namespace_name=\"openshift-operators-redhat\", kubernetes_pod_name=~\"loki-operator-controller-manager.*\"}[1m])) by (job) > 0.01 for: 10s labels: severity: critical 5 annotations: summary: High Loki Operator Reconciliation Errors 6 description: High Loki Operator Reconciliation Errors 7",
"apiVersion: loki.grafana.com/v1 kind: AlertingRule metadata: name: app-user-workload namespace: app-ns 1 labels: 2 openshift.io/<label_name>: \"true\" spec: tenantID: \"application\" groups: - name: AppUserWorkloadHighError rules: - alert: expr: | 3 sum(rate({kubernetes_namespace_name=\"app-ns\", kubernetes_pod_name=~\"podName.*\"} |= \"error\" [1m])) by (job) for: 10s labels: severity: critical 4 annotations: summary: 5 description: 6",
"oc apply -f <filename>.yaml",
"oc patch LokiStack logging-loki -n openshift-logging --type=merge -p '{\"spec\": {\"hashRing\":{\"memberlist\":{\"instanceAddrType\":\"podIP\"},\"type\":\"memberlist\"}}}'",
"apiVersion: loki.grafana.com/v1 kind: LokiStack metadata: name: logging-loki namespace: openshift-logging spec: hashRing: type: memberlist memberlist: instanceAddrType: podIP",
"apiVersion: loki.grafana.com/v1 kind: LokiStack metadata: name: logging-loki namespace: openshift-logging spec: limits: global: 1 retention: 2 days: 20 streams: - days: 4 priority: 1 selector: '{kubernetes_namespace_name=~\"test.+\"}' 3 - days: 1 priority: 1 selector: '{log_type=\"infrastructure\"}' managementState: Managed replicationFactor: 1 size: 1x.small storage: schemas: - effectiveDate: \"2020-10-11\" version: v13 secret: name: logging-loki-s3 type: aws storageClassName: gp3-csi tenants: mode: openshift-logging",
"apiVersion: loki.grafana.com/v1 kind: LokiStack metadata: name: logging-loki namespace: openshift-logging spec: limits: global: retention: days: 20 tenants: 1 application: retention: days: 1 streams: - days: 4 selector: '{kubernetes_namespace_name=~\"test.+\"}' 2 infrastructure: retention: days: 5 streams: - days: 1 selector: '{kubernetes_namespace_name=~\"openshift-cluster.+\"}' managementState: Managed replicationFactor: 1 size: 1x.small storage: schemas: - effectiveDate: \"2020-10-11\" version: v13 secret: name: logging-loki-s3 type: aws storageClassName: gp3-csi tenants: mode: openshift-logging",
"oc apply -f <filename>.yaml",
"apiVersion: loki.grafana.com/v1 kind: LokiStack metadata: name: logging-loki namespace: openshift-logging spec: template: compactor: 1 nodeSelector: node-role.kubernetes.io/infra: \"\" 2 distributor: nodeSelector: node-role.kubernetes.io/infra: \"\" gateway: nodeSelector: node-role.kubernetes.io/infra: \"\" indexGateway: nodeSelector: node-role.kubernetes.io/infra: \"\" ingester: nodeSelector: node-role.kubernetes.io/infra: \"\" querier: nodeSelector: node-role.kubernetes.io/infra: \"\" queryFrontend: nodeSelector: node-role.kubernetes.io/infra: \"\" ruler: nodeSelector: node-role.kubernetes.io/infra: \"\"",
"apiVersion: loki.grafana.com/v1 kind: LokiStack metadata: name: logging-loki namespace: openshift-logging spec: template: compactor: nodeSelector: node-role.kubernetes.io/infra: \"\" tolerations: - effect: NoSchedule key: node-role.kubernetes.io/infra value: reserved - effect: NoExecute key: node-role.kubernetes.io/infra value: reserved distributor: nodeSelector: node-role.kubernetes.io/infra: \"\" tolerations: - effect: NoSchedule key: node-role.kubernetes.io/infra value: reserved - effect: NoExecute key: node-role.kubernetes.io/infra value: reserved indexGateway: nodeSelector: node-role.kubernetes.io/infra: \"\" tolerations: - effect: NoSchedule key: node-role.kubernetes.io/infra value: reserved - effect: NoExecute key: node-role.kubernetes.io/infra value: reserved ingester: nodeSelector: node-role.kubernetes.io/infra: \"\" tolerations: - effect: NoSchedule key: node-role.kubernetes.io/infra value: reserved - effect: NoExecute key: node-role.kubernetes.io/infra value: reserved querier: nodeSelector: node-role.kubernetes.io/infra: \"\" tolerations: - effect: NoSchedule key: node-role.kubernetes.io/infra value: reserved - effect: NoExecute key: node-role.kubernetes.io/infra value: reserved queryFrontend: nodeSelector: node-role.kubernetes.io/infra: \"\" tolerations: - effect: NoSchedule key: node-role.kubernetes.io/infra value: reserved - effect: NoExecute key: node-role.kubernetes.io/infra value: reserved ruler: nodeSelector: node-role.kubernetes.io/infra: \"\" tolerations: - effect: NoSchedule key: node-role.kubernetes.io/infra value: reserved - effect: NoExecute key: node-role.kubernetes.io/infra value: reserved gateway: nodeSelector: node-role.kubernetes.io/infra: \"\" tolerations: - effect: NoSchedule key: node-role.kubernetes.io/infra value: reserved - effect: NoExecute key: node-role.kubernetes.io/infra value: reserved",
"oc explain lokistack.spec.template",
"KIND: LokiStack VERSION: loki.grafana.com/v1 RESOURCE: template <Object> DESCRIPTION: Template defines the resource/limits/tolerations/nodeselectors per component FIELDS: compactor <Object> Compactor defines the compaction component spec. distributor <Object> Distributor defines the distributor component spec.",
"oc explain lokistack.spec.template.compactor",
"KIND: LokiStack VERSION: loki.grafana.com/v1 RESOURCE: compactor <Object> DESCRIPTION: Compactor defines the compaction component spec. FIELDS: nodeSelector <map[string]string> NodeSelector defines the labels required by a node to schedule the component onto it.",
"apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: loki-operator namespace: openshift-operators-redhat spec: channel: \"stable-6.0\" installPlanApproval: Manual name: loki-operator source: redhat-operators sourceNamespace: openshift-marketplace config: env: - name: CLIENTID value: <your_client_id> - name: TENANTID value: <your_tenant_id> - name: SUBSCRIPTIONID value: <your_subscription_id> - name: REGION value: <your_region>",
"apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: loki-operator namespace: openshift-operators-redhat spec: channel: \"stable-6.0\" installPlanApproval: Manual name: loki-operator source: redhat-operators sourceNamespace: openshift-marketplace config: env: - name: ROLEARN value: <role_ARN>",
"apiVersion: loki.grafana.com/v1 kind: LokiStack metadata: name: logging-loki namespace: openshift-logging spec: template: ingester: podAntiAffinity: # requiredDuringSchedulingIgnoredDuringExecution: 1 - labelSelector: matchLabels: 2 app.kubernetes.io/component: ingester topologyKey: kubernetes.io/hostname",
"apiVersion: loki.grafana.com/v1 kind: LokiStack metadata: name: logging-loki namespace: openshift-logging spec: replicationFactor: 2 1 replication: factor: 2 2 zones: - maxSkew: 1 3 topologyKey: topology.kubernetes.io/zone 4",
"oc get pods --field-selector status.phase==Pending -n openshift-logging",
"NAME READY STATUS RESTARTS AGE 1 logging-loki-index-gateway-1 0/1 Pending 0 17m logging-loki-ingester-1 0/1 Pending 0 16m logging-loki-ruler-1 0/1 Pending 0 16m",
"oc get pvc -o=json -n openshift-logging | jq '.items[] | select(.status.phase == \"Pending\") | .metadata.name' -r",
"storage-logging-loki-index-gateway-1 storage-logging-loki-ingester-1 wal-logging-loki-ingester-1 storage-logging-loki-ruler-1 wal-logging-loki-ruler-1",
"oc delete pvc <pvc_name> -n openshift-logging",
"oc delete pod <pod_name> -n openshift-logging",
"oc patch pvc <pvc_name> -p '{\"metadata\":{\"finalizers\":null}}' -n openshift-logging",
"\"values\":[[\"1630410392689800468\",\"{\\\"kind\\\":\\\"Event\\\",\\\"apiVersion\\\": .... ... ... ... \\\"received_at\\\":\\\"2021-08-31T11:46:32.800278+00:00\\\",\\\"version\\\":\\\"1.7.4 1.6.0\\\"}},\\\"@timestamp\\\":\\\"2021-08-31T11:46:32.799692+00:00\\\",\\\"viaq_index_name\\\":\\\"audit-write\\\",\\\"viaq_msg_id\\\":\\\"MzFjYjJkZjItNjY0MC00YWU4LWIwMTEtNGNmM2E5ZmViMGU4\\\",\\\"log_type\\\":\\\"audit\\\"}\"]]}]}",
"429 Too Many Requests Ingestion rate limit exceeded",
"2023-08-25T16:08:49.301780Z WARN sink{component_kind=\"sink\" component_id=default_loki_infra component_type=loki component_name=default_loki_infra}: vector::sinks::util::retries: Retrying after error. error=Server responded with an error: 429 Too Many Requests internal_log_rate_limit=true",
"2023-08-30 14:52:15 +0000 [warn]: [default_loki_infra] failed to flush the buffer. retry_times=2 next_retry_time=2023-08-30 14:52:19 +0000 chunk=\"604251225bf5378ed1567231a1c03b8b\" error_class=Fluent::Plugin::LokiOutput::LogPostError error=\"429 Too Many Requests Ingestion rate limit exceeded for user infrastructure (limit: 4194304 bytes/sec) while attempting to ingest '4082' lines totaling '7820025' bytes, reduce log volume or contact your Loki administrator to see if the limit can be increased\\n\"",
"level=warn ts=2023-08-30T14:57:34.155592243Z caller=grpc_logging.go:43 duration=1.434942ms method=/logproto.Pusher/Push err=\"rpc error: code = Code(429) desc = entry with timestamp 2023-08-30 14:57:32.012778399 +0000 UTC ignored, reason: 'Per stream rate limit exceeded (limit: 3MB/sec) while attempting to ingest for stream",
"apiVersion: loki.grafana.com/v1 kind: LokiStack metadata: name: logging-loki namespace: openshift-logging spec: limits: global: ingestion: ingestionBurstSize: 16 1 ingestionRate: 8 2"
]
| https://docs.redhat.com/en/documentation/openshift_container_platform/4.15/html/logging/logging-6-0 |
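As a follow-up to the rate limit procedure above, the ingestionBurstSize and ingestionRate change can also be applied as a merge patch instead of editing the full LokiStack CR. This is a convenience sketch rather than part of the official procedure; the values 16 and 8 are taken from the example above and should be adjusted to your own ingestion volume:

USD oc patch LokiStack logging-loki -n openshift-logging --type=merge -p '{"spec":{"limits":{"global":{"ingestion":{"ingestionBurstSize":16,"ingestionRate":8}}}}}'

After the patch is applied, the collector keeps retrying, and the 429 errors clear on their own once the average ingestion rate stays below the new limit.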
Chapter 11. Securing builds by strategy | Chapter 11. Securing builds by strategy Builds in OpenShift Container Platform are run in privileged containers. Depending on the build strategy used, if you have privileges, you can run builds to escalate their permissions on the cluster and host nodes. As a security measure, limit who can run builds and the strategy that is used for those builds. Custom builds are inherently less safe than source builds, because they can execute any code within a privileged container, and are disabled by default. Grant docker build permissions with caution, because a vulnerability in the Dockerfile processing logic could result in privileges being granted on the host node. By default, all users that can create builds are granted permission to use the docker and Source-to-image (S2I) build strategies. Users with cluster administrator privileges can enable the custom build strategy, as referenced in the restricting build strategies to users globally section. You can control who can build and which build strategies they can use by using an authorization policy. Each build strategy has a corresponding build subresource. A user must have permission to create a build and permission to create on the build strategy subresource to create builds using that strategy. Default roles are provided that grant the create permission on the build strategy subresource. Table 11.1. Build Strategy Subresources and Roles Strategy Subresource Role Docker builds/docker system:build-strategy-docker Source-to-Image builds/source system:build-strategy-source Custom builds/custom system:build-strategy-custom JenkinsPipeline builds/jenkinspipeline system:build-strategy-jenkinspipeline 11.1. Disabling access to a build strategy globally To prevent access to a particular build strategy globally, log in as a user with cluster administrator privileges, remove the corresponding role from the system:authenticated group, and apply the annotation rbac.authorization.kubernetes.io/autoupdate: "false" to protect them from changes between the API restarts. The following example shows disabling the docker build strategy. Procedure Apply the rbac.authorization.kubernetes.io/autoupdate annotation: USD oc annotate clusterrolebinding.rbac system:build-strategy-docker-binding 'rbac.authorization.kubernetes.io/autoupdate=false' --overwrite Remove the role: USD oc adm policy remove-cluster-role-from-group system:build-strategy-docker system:authenticated Ensure the build strategy subresources are also removed from the admin and edit user roles: USD oc get clusterrole admin -o yaml | grep "builds/docker" USD oc get clusterrole edit -o yaml | grep "builds/docker" 11.2. Restricting build strategies to users globally You can allow a set of specific users to create builds with a particular strategy. Prerequisites Disable global access to the build strategy. Procedure Assign the role that corresponds to the build strategy to a specific user. For example, to add the system:build-strategy-docker cluster role to the user devuser : USD oc adm policy add-cluster-role-to-user system:build-strategy-docker devuser Warning Granting a user access at the cluster level to the builds/docker subresource means that the user can create builds with the docker strategy in any project in which they can create builds. 11.3.
Restricting build strategies to a user within a project Similar to granting the build strategy role to a user globally, you can allow a set of specific users within a project to create builds with a particular strategy. Prerequisites Disable global access to the build strategy. Procedure Assign the role that corresponds to the build strategy to a specific user within a project. For example, to add the system:build-strategy-docker role within the project devproject to the user devuser : USD oc adm policy add-role-to-user system:build-strategy-docker devuser -n devproject | [
"oc annotate clusterrolebinding.rbac system:build-strategy-docker-binding 'rbac.authorization.kubernetes.io/autoupdate=false' --overwrite",
"oc adm policy remove-cluster-role-from-group system:build-strategy-docker system:authenticated",
"oc get clusterrole admin -o yaml | grep \"builds/docker\"",
"oc get clusterrole edit -o yaml | grep \"builds/docker\"",
"oc adm policy add-cluster-role-to-user system:build-strategy-docker devuser",
"oc adm policy add-role-to-user system:build-strategy-docker devuser -n devproject"
]
| https://docs.redhat.com/en/documentation/openshift_container_platform/4.12/html/builds/securing-builds-by-strategy |
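The lockdown pattern shown above for the docker strategy applies to the other build strategies as well. The following sketch extends it to the Source-to-Image strategy; the cluster role binding name system:build-strategy-source-binding is an assumption based on the naming of the docker binding in the example above, so confirm it first with the oc get command shown here, and substitute your own user and project for the devuser and devproject placeholders:

USD oc get clusterrolebindings | grep build-strategy
USD oc annotate clusterrolebinding.rbac system:build-strategy-source-binding 'rbac.authorization.kubernetes.io/autoupdate=false' --overwrite
USD oc adm policy remove-cluster-role-from-group system:build-strategy-source system:authenticated
USD oc adm policy add-role-to-user system:build-strategy-source devuser -n devproject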
Chapter 10. Incident Response | Chapter 10. Incident Response In the event that the security of a system has been compromised, an incident response is necessary. It is the responsibility of the security team to respond to the problem quickly and effectively. 10.1. Defining Incident Response An incident response is an expedited reaction to a security issue or occurrence. Pertaining to information security, an example would be a security team's actions against a hacker who has penetrated a firewall and is currently sniffing internal network traffic. The incident is the breach of security. The response depends upon how the security team reacts, what they do to minimize damages, and when they restore resources, all while attempting to guarantee data integrity. Think of your organization and how almost every aspect of it relies upon technology and computer systems. If there is a compromise, imagine the potentially devastating results. Besides the obvious system downtime and theft of data, there could be data corruption, identity theft (from online personnel records), embarrassing publicity, or even financially devastating results as customers and business partners learn of and react negatively to news of a compromise. Research into past internal and external security breaches shows that some companies go out of business as a result of a serious breach of security. A breach can result in resources rendered unavailable and data being either stolen or corrupted. But one cannot overlook issues that are difficult to calculate financially, such as bad publicity. To gain an accurate idea of how important an efficient incident response is, an organization must calculate the cost of the actual security breach as well as the financial effects of the negative publicity over the short and long term. | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/4/html/security_guide/ch-response |
13.6. Configuring the Subnet Manager | 13.6. Configuring the Subnet Manager 13.6.1. Determining Necessity Most InfiniBand switches come with an embedded subnet manager. However, if a more up-to-date subnet manager is required than the one in the switch firmware, or if more complete control than the switch manager allows is required, Red Hat Enterprise Linux 7 includes the opensm subnet manager. All InfiniBand networks must have a subnet manager running for the network to function. This is true even for a simple network of two machines connected back to back with no switch; a subnet manager is required for the link on the cards to come up. It is possible to have more than one, in which case one will act as the controller, and any other subnet managers will act as standbys that will take over should the controller subnet manager fail. 13.6.2. Configuring the opensm main configuration file The opensm program keeps its main configuration file in /etc/rdma/opensm.conf . Users may edit this file at any time and edits will be kept on upgrade. There is extensive documentation of the options in the file itself. However, for the two most common edits needed, setting the GUID to bind to and the PRIORITY to run with, it is highly recommended not to edit the opensm.conf file but to edit /etc/sysconfig/opensm instead. If there are no edits to the base /etc/rdma/opensm.conf file, it will get upgraded whenever the opensm package is upgraded. As new options are added to this file regularly, this makes it easier to keep the current configuration up to date. If the opensm.conf file has been changed, then on upgrade, it might be necessary to merge new options into the edited file. 13.6.3. Configuring the opensm startup options The options in the /etc/sysconfig/opensm file control how the subnet manager is actually started, as well as how many copies of the subnet manager are started. For example, a dual port InfiniBand card, with each port plugged into physically separate networks, will need a copy of the subnet manager running on each port. The opensm subnet manager will only manage one subnet per instance of the application and must be started once for each subnet that needs to be managed. In addition, if there is more than one opensm server, set the priorities on each server to control which is to be the controller and which are to be standbys. The file /etc/sysconfig/opensm is used to provide a simple means to set the priority of the subnet manager and to control which GUID the subnet manager binds to. There is an extensive explanation of the options in the /etc/sysconfig/opensm file itself. Users need only read and follow the directions in that file to enable failover and multifabric operation of opensm . 13.6.4. Creating a P_Key definition By default, opensm.conf looks for the file /etc/rdma/partitions.conf to get a list of partitions to create on the fabric. All fabrics must contain the 0x7fff subnet, and all switches and all hosts must belong to that fabric. Any other partition can be created in addition to that, and all hosts and all switches do not have to be members of these additional partitions. This allows an administrator to create subnets akin to Ethernet's VLANs on InfiniBand fabrics.
If a partition is defined with a given speed, such as 40 Gbps, and there is a host on the network unable to do 40 Gbps, then that host will be unable to join the partition even if it has permission to do so, because it cannot match the speed requirements. Therefore, it is recommended that the speed of a partition be set to the slowest speed of any host with permission to join the partition. If a faster partition for some subset of hosts is required, create a different partition with the higher speed. The following partition file would result in a default 0x7fff partition at a reduced speed of 10 Gbps, and a partition of 0x0002 with a speed of 40 Gbps: 13.6.5. Enabling opensm Users need to enable the opensm service as it is not enabled by default when installed. Issue the following command as root : | [
"~]USD more /etc/rdma/partitions.conf For reference: IPv4 IANA reserved multicast addresses: http://www.iana.org/assignments/multicast-addresses/multicast-addresses.txt IPv6 IANA reserved multicast addresses: http://www.iana.org/assignments/ipv6-multicast-addresses/ipv6-multicast-addresses.xml # mtu = 1 = 256 2 = 512 3 = 1024 4 = 2048 5 = 4096 # rate = 2 = 2.5 GBit/s 3 = 10 GBit/s 4 = 30 GBit/s 5 = 5 GBit/s 6 = 20 GBit/s 7 = 40 GBit/s 8 = 60 GBit/s 9 = 80 GBit/s 10 = 120 GBit/s Default=0x7fff, rate=3, mtu=4, scope=2, defmember=full: ALL, ALL_SWITCHES=full; Default=0x7fff, ipoib, rate=3, mtu=4, scope=2: mgid=ff12:401b::ffff:ffff # IPv4 Broadcast address mgid=ff12:401b::1 # IPv4 All Hosts group mgid=ff12:401b::2 # IPv4 All Routers group mgid=ff12:401b::16 # IPv4 IGMP group mgid=ff12:401b::fb # IPv4 mDNS group mgid=ff12:401b::fc # IPv4 Multicast Link Local Name Resolution group mgid=ff12:401b::101 # IPv4 NTP group mgid=ff12:401b::202 # IPv4 Sun RPC mgid=ff12:601b::1 # IPv6 All Hosts group mgid=ff12:601b::2 # IPv6 All Routers group mgid=ff12:601b::16 # IPv6 MLDv2-capable Routers group mgid=ff12:601b::fb # IPv6 mDNS group mgid=ff12:601b::101 # IPv6 NTP group mgid=ff12:601b::202 # IPv6 Sun RPC group mgid=ff12:601b::1:3 # IPv6 Multicast Link Local Name Resolution group ALL=full, ALL_SWITCHES=full; ib0_2=0x0002, rate=7, mtu=4, scope=2, defmember=full: ALL, ALL_SWITCHES=full; ib0_2=0x0002, ipoib, rate=7, mtu=4, scope=2: mgid=ff12:401b::ffff:ffff # IPv4 Broadcast address mgid=ff12:401b::1 # IPv4 All Hosts group mgid=ff12:401b::2 # IPv4 All Routers group mgid=ff12:401b::16 # IPv4 IGMP group mgid=ff12:401b::fb # IPv4 mDNS group mgid=ff12:401b::fc # IPv4 Multicast Link Local Name Resolution group mgid=ff12:401b::101 # IPv4 NTP group mgid=ff12:401b::202 # IPv4 Sun RPC mgid=ff12:601b::1 # IPv6 All Hosts group mgid=ff12:601b::2 # IPv6 All Routers group mgid=ff12:601b::16 # IPv6 MLDv2-capable Routers group mgid=ff12:601b::fb # IPv6 mDNS group mgid=ff12:601b::101 # IPv6 NTP group mgid=ff12:601b::202 # IPv6 Sun RPC group mgid=ff12:601b::1:3 # IPv6 Multicast Link Local Name Resolution group ALL=full, ALL_SWITCHES=full;",
"~]# systemctl enable opensm"
]
| https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/networking_guide/sec-configuring_the_subnet_manager |
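On a single-fabric host, the typical workflow after reviewing /etc/sysconfig/opensm is to set any needed options and then enable and start the service. The following is an illustrative sketch, not a verbatim excerpt from the file: the PRIORITY and GUIDS variable names and the GUID value are assumptions, so follow the comments inside your own copy of /etc/sysconfig/opensm rather than this example:

~]# grep -E '^(PRIORITY|GUIDS)' /etc/sysconfig/opensm
PRIORITY=15
GUIDS="0x0002c9030045f101"
~]# systemctl enable opensm
~]# systemctl start opensm

Port GUIDs for the local HCA can be listed with the ibstat command from the infiniband-diags package, and the active subnet manager can be confirmed with sminfo.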
4.2.3. Blacklisting By Device Type | 4.2.3. Blacklisting By Device Type You can blacklist specific device types in the blacklist section of the configuration file with a device section. The following example blacklists all IBM DS4200 and HP devices. | [
"blacklist { device { vendor \"IBM\" product \"3S42\" #DS4200 Product 10 } device { vendor \"HP\" product \"*\" } }"
]
| https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/dm_multipath/device_type_blacklist |
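After adding a device section like the one above to /etc/multipath.conf, the running daemon has to pick up the change. The following sketch is not part of the original example and assumes the Red Hat Enterprise Linux 6 service interface; a stale map for a newly blacklisted device can be flushed with multipath -f <map_name> if one remains:

# service multipathd reload
# multipath -ll

A device that matches the blacklist entry should no longer appear in the multipath -ll output.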
Chapter 5. Gathering data about your cluster | Chapter 5. Gathering data about your cluster When opening a support case, it is helpful to provide debugging information about your cluster to Red Hat Support. It is recommended to provide: Data gathered using the oc adm must-gather command The unique cluster ID 5.1. About the must-gather tool The oc adm must-gather CLI command collects the information from your cluster that is most likely needed for debugging issues, including: Resource definitions Service logs By default, the oc adm must-gather command uses the default plugin image and writes into ./must-gather.local . Alternatively, you can collect specific information by running the command with the appropriate arguments as described in the following sections: To collect data related to one or more specific features, use the --image argument with an image, as listed in a following section. For example: USD oc adm must-gather \ --image=registry.redhat.io/container-native-virtualization/cnv-must-gather-rhel9:v4.17.5 To collect the audit logs, use the -- /usr/bin/gather_audit_logs argument, as described in a following section. For example: USD oc adm must-gather -- /usr/bin/gather_audit_logs Note Audit logs are not collected as part of the default set of information to reduce the size of the files. When you run oc adm must-gather , a new pod with a random name is created in a new project on the cluster. The data is collected on that pod and saved in a new directory that starts with must-gather.local in the current working directory. For example: NAMESPACE NAME READY STATUS RESTARTS AGE ... openshift-must-gather-5drcj must-gather-bklx4 2/2 Running 0 72s openshift-must-gather-5drcj must-gather-s8sdh 2/2 Running 0 72s ... Optionally, you can run the oc adm must-gather command in a specific namespace by using the --run-namespace option. For example: USD oc adm must-gather --run-namespace <namespace> \ --image=registry.redhat.io/container-native-virtualization/cnv-must-gather-rhel9:v4.17.5 5.1.1. Gathering data about your cluster for Red Hat Support You can gather debugging information about your cluster by using the oc adm must-gather CLI command. If you are gathering information to debug a self-managed hosted cluster, see "Gathering information to troubleshoot hosted control planes". Prerequisites You have access to the cluster as a user with the cluster-admin role. The OpenShift Container Platform CLI ( oc ) is installed. Procedure Navigate to the directory where you want to store the must-gather data. Note If your cluster is in a disconnected environment, you must take additional steps. If your mirror registry has a trusted CA, you must first add the trusted CA to the cluster. For all clusters in disconnected environments, you must import the default must-gather image as an image stream. USD oc import-image is/must-gather -n openshift Run the oc adm must-gather command: USD oc adm must-gather Important If you are in a disconnected environment, use the --image flag as part of must-gather and point to the payload image. Note Because this command picks a random control plane node by default, the pod might be scheduled to a control plane node that is in the NotReady and SchedulingDisabled state. If this command fails, for example, if you cannot schedule a pod on your cluster, then use the oc adm inspect command to gather information for particular resources. Note Contact Red Hat Support for the recommended resources to gather. 
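A minimal sketch of the oc adm inspect fallback mentioned in the note above; the resources and destination directory are illustrative examples, not a list recommended by Red Hat Support:

USD oc adm inspect clusteroperator/kube-apiserver --dest-dir=./inspect.local
USD oc adm inspect ns/openshift-logging --dest-dir=./inspect.local

The resulting directory can be compressed and attached to a support case in the same way as the must-gather output described in the next step.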
Create a compressed file from the must-gather directory that was just created in your working directory. For example, on a computer that uses a Linux operating system, run the following command: USD tar cvaf must-gather.tar.gz must-gather.local.5421342344627712289/ 1 1 Make sure to replace must-gather-local.5421342344627712289/ with the actual directory name. Attach the compressed file to your support case on the the Customer Support page of the Red Hat Customer Portal. Additional resources Gathering information to troubleshoot hosted control planes 5.1.2. Must-gather flags The flags listed in the following table are available to use with the oc adm must-gather command. Table 5.1. OpenShift Container Platform flags for oc adm must-gather Flag Example command Description --all-images oc adm must-gather --all-images=false Collect must-gather data using the default image for all Operators on the cluster that are annotated with operators.openshift.io/must-gather-image . --dest-dir oc adm must-gather --dest-dir='<directory_name>' Set a specific directory on the local machine where the gathered data is written. --host-network oc adm must-gather --host-network=false Run must-gather pods as hostNetwork: true . Relevant if a specific command and image needs to capture host-level data. --image oc adm must-gather --image=[<plugin_image>] Specify a must-gather plugin image to run. If not specified, OpenShift Container Platform's default must-gather image is used. --image-stream oc adm must-gather --image-stream=[<image_stream>] Specify an`<image_stream>` using a namespace or name:tag value containing a must-gather plugin image to run. --node-name oc adm must-gather --node-name='<node>' Set a specific node to use. If not specified, by default a random master is used. --node-selector oc adm must-gather --node-selector='<node_selector_name>' Set a specific node selector to use. Only relevant when specifying a command and image which needs to capture data on a set of cluster nodes simultaneously. --run-namespace oc adm must-gather --run-namespace='<namespace>' An existing privileged namespace where must-gather pods should run. If not specified, a temporary namespace is generated. --since oc adm must-gather --since=<time> Only return logs newer than the specified duration. Defaults to all logs. Plugins are encouraged but not required to support this. Only one since-time or since may be used. --since-time oc adm must-gather --since-time='<date_and_time>' Only return logs after a specific date and time, expressed in ( RFC3339 ) format. Defaults to all logs. Plugins are encouraged but not required to support this. Only one since-time or since may be used. --source-dir oc adm must-gather --source-dir='/<directory_name>/' Set the specific directory on the pod where you copy the gathered data from. --timeout oc adm must-gather --timeout='<time>' The length of time to gather data before timing out, expressed as seconds, minutes, or hours, for example, 3s, 5m, or 2h. Time specified must be higher than zero. Defaults to 10 minutes if not specified. --volume-percentage oc adm must-gather --volume-percentage=<percent> Specify maximum percentage of pod's allocated volume that can be used for must-gather . If this limit is exceeded, must-gather stops gathering, but still copies gathered data. Defaults to 30% if not specified. 5.1.3. Gathering data about specific features You can gather debugging information about specific features by using the oc adm must-gather CLI command with the --image or --image-stream argument. 
The must-gather tool supports multiple images, so you can gather data about more than one feature by running a single command. Table 5.2. Supported must-gather images Image Purpose registry.redhat.io/container-native-virtualization/cnv-must-gather-rhel9:v4.17.5 Data collection for OpenShift Virtualization. registry.redhat.io/openshift-serverless-1/svls-must-gather-rhel8 Data collection for OpenShift Serverless. registry.redhat.io/openshift-service-mesh/istio-must-gather-rhel8:<installed_version_service_mesh> Data collection for Red Hat OpenShift Service Mesh. registry.redhat.io/rhmtc/openshift-migration-must-gather-rhel8:v<installed_version_migration_toolkit> Data collection for the Migration Toolkit for Containers. registry.redhat.io/odf4/odf-must-gather-rhel9:v<installed_version_ODF> Data collection for Red Hat OpenShift Data Foundation. registry.redhat.io/openshift-logging/cluster-logging-rhel9-operator:v<installed_version_logging> Data collection for logging. quay.io/netobserv/must-gather Data collection for the Network Observability Operator. registry.redhat.io/openshift4/ose-csi-driver-shared-resource-mustgather-rhel8 Data collection for OpenShift Shared Resource CSI Driver. registry.redhat.io/openshift4/ose-local-storage-mustgather-rhel9:v<installed_version_LSO> Data collection for Local Storage Operator. registry.redhat.io/openshift-sandboxed-containers/osc-must-gather-rhel8:v<installed_version_sandboxed_containers> Data collection for {sandboxed-containers-first}. registry.redhat.io/workload-availability/node-healthcheck-must-gather-rhel8:v<installed-version-NHC> Data collection for the Red Hat Workload Availability Operators, including the Self Node Remediation (SNR) Operator, the Fence Agents Remediation (FAR) Operator, the Machine Deletion Remediation (MDR) Operator, the Node Health Check Operator (NHC) Operator, and the Node Maintenance Operator (NMO) Operator. registry.redhat.io/numaresources/numaresources-must-gather-rhel9:v<installed-version-nro> Data collection for the NUMA Resources Operator (NRO). registry.redhat.io/openshift4/ptp-must-gather-rhel8:v<installed-version-ptp> Data collection for the PTP Operator. registry.redhat.io/openshift-gitops-1/must-gather-rhel8:v<installed_version_GitOps> Data collection for Red Hat OpenShift GitOps. registry.redhat.io/openshift4/ose-secrets-store-csi-mustgather-rhel9:v<installed_version_secret_store> Data collection for the Secrets Store CSI Driver Operator. registry.redhat.io/lvms4/lvms-must-gather-rhel9:v<installed_version_LVMS> Data collection for the LVM Operator. registry.redhat.io/compliance/openshift-compliance-must-gather-rhel8:<digest-version> Data collection for the Compliance Operator. Note To determine the latest version for an OpenShift Container Platform component's image, see the OpenShift Operator Life Cycles web page on the Red Hat Customer Portal. Prerequisites You have access to the cluster as a user with the cluster-admin role. The OpenShift Container Platform CLI ( oc ) is installed. Procedure Navigate to the directory where you want to store the must-gather data. Run the oc adm must-gather command with one or more --image or --image-stream arguments. Note To collect the default must-gather data in addition to specific feature data, add the --image-stream=openshift/must-gather argument. For information on gathering data about the Custom Metrics Autoscaler, see the Additional resources section that follows. 
For example, the following command gathers both the default cluster data and information specific to OpenShift Virtualization: USD oc adm must-gather \ --image-stream=openshift/must-gather \ 1 --image=registry.redhat.io/container-native-virtualization/cnv-must-gather-rhel9:v4.17.5 2 1 The default OpenShift Container Platform must-gather image 2 The must-gather image for OpenShift Virtualization You can use the must-gather tool with additional arguments to gather data that is specifically related to OpenShift Logging and the Red Hat OpenShift Logging Operator in your cluster. For OpenShift Logging, run the following command: USD oc adm must-gather --image=USD(oc -n openshift-logging get deployment.apps/cluster-logging-operator \ -o jsonpath='{.spec.template.spec.containers[?(@.name == "cluster-logging-operator")].image}') Example 5.1. Example must-gather output for OpenShift Logging ├── cluster-logging │ ├── clo │ │ ├── cluster-logging-operator-74dd5994f-6ttgt │ │ ├── clusterlogforwarder_cr │ │ ├── cr │ │ ├── csv │ │ ├── deployment │ │ └── logforwarding_cr │ ├── collector │ │ ├── fluentd-2tr64 │ ├── eo │ │ ├── csv │ │ ├── deployment │ │ └── elasticsearch-operator-7dc7d97b9d-jb4r4 │ ├── es │ │ ├── cluster-elasticsearch │ │ │ ├── aliases │ │ │ ├── health │ │ │ ├── indices │ │ │ ├── latest_documents.json │ │ │ ├── nodes │ │ │ ├── nodes_stats.json │ │ │ └── thread_pool │ │ ├── cr │ │ ├── elasticsearch-cdm-lp8l38m0-1-794d6dd989-4jxms │ │ └── logs │ │ ├── elasticsearch-cdm-lp8l38m0-1-794d6dd989-4jxms │ ├── install │ │ ├── co_logs │ │ ├── install_plan │ │ ├── olmo_logs │ │ └── subscription │ └── kibana │ ├── cr │ ├── kibana-9d69668d4-2rkvz ├── cluster-scoped-resources │ └── core │ ├── nodes │ │ ├── ip-10-0-146-180.eu-west-1.compute.internal.yaml │ └── persistentvolumes │ ├── pvc-0a8d65d9-54aa-4c44-9ecc-33d9381e41c1.yaml ├── event-filter.html ├── gather-debug.log └── namespaces ├── openshift-logging │ ├── apps │ │ ├── daemonsets.yaml │ │ ├── deployments.yaml │ │ ├── replicasets.yaml │ │ └── statefulsets.yaml │ ├── batch │ │ ├── cronjobs.yaml │ │ └── jobs.yaml │ ├── core │ │ ├── configmaps.yaml │ │ ├── endpoints.yaml │ │ ├── events │ │ │ ├── elasticsearch-im-app-1596020400-gm6nl.1626341a296c16a1.yaml │ │ │ ├── elasticsearch-im-audit-1596020400-9l9n4.1626341a2af81bbd.yaml │ │ │ ├── elasticsearch-im-infra-1596020400-v98tk.1626341a2d821069.yaml │ │ │ ├── elasticsearch-im-app-1596020400-cc5vc.1626341a3019b238.yaml │ │ │ ├── elasticsearch-im-audit-1596020400-s8d5s.1626341a31f7b315.yaml │ │ │ ├── elasticsearch-im-infra-1596020400-7mgv8.1626341a35ea59ed.yaml │ │ ├── events.yaml │ │ ├── persistentvolumeclaims.yaml │ │ ├── pods.yaml │ │ ├── replicationcontrollers.yaml │ │ ├── secrets.yaml │ │ └── services.yaml │ ├── openshift-logging.yaml │ ├── pods │ │ ├── cluster-logging-operator-74dd5994f-6ttgt │ │ │ ├── cluster-logging-operator │ │ │ │ └── cluster-logging-operator │ │ │ │ └── logs │ │ │ │ ├── current.log │ │ │ │ ├── .insecure.log │ │ │ │ └── .log │ │ │ └── cluster-logging-operator-74dd5994f-6ttgt.yaml │ │ ├── cluster-logging-operator-registry-6df49d7d4-mxxff │ │ │ ├── cluster-logging-operator-registry │ │ │ │ └── cluster-logging-operator-registry │ │ │ │ └── logs │ │ │ │ ├── current.log │ │ │ │ ├── .insecure.log │ │ │ │ └── .log │ │ │ ├── cluster-logging-operator-registry-6df49d7d4-mxxff.yaml │ │ │ └── mutate-csv-and-generate-sqlite-db │ │ │ └── mutate-csv-and-generate-sqlite-db │ │ │ └── logs │ │ │ ├── current.log │ │ │ ├── .insecure.log │ │ │ └── .log │ │ ├── 
elasticsearch-cdm-lp8l38m0-1-794d6dd989-4jxms │ │ ├── elasticsearch-im-app-1596030300-bpgcx │ │ │ ├── elasticsearch-im-app-1596030300-bpgcx.yaml │ │ │ └── indexmanagement │ │ │ └── indexmanagement │ │ │ └── logs │ │ │ ├── current.log │ │ │ ├── .insecure.log │ │ │ └── .log │ │ ├── fluentd-2tr64 │ │ │ ├── fluentd │ │ │ │ └── fluentd │ │ │ │ └── logs │ │ │ │ ├── current.log │ │ │ │ ├── .insecure.log │ │ │ │ └── .log │ │ │ ├── fluentd-2tr64.yaml │ │ │ └── fluentd-init │ │ │ └── fluentd-init │ │ │ └── logs │ │ │ ├── current.log │ │ │ ├── .insecure.log │ │ │ └── .log │ │ ├── kibana-9d69668d4-2rkvz │ │ │ ├── kibana │ │ │ │ └── kibana │ │ │ │ └── logs │ │ │ │ ├── current.log │ │ │ │ ├── .insecure.log │ │ │ │ └── .log │ │ │ ├── kibana-9d69668d4-2rkvz.yaml │ │ │ └── kibana-proxy │ │ │ └── kibana-proxy │ │ │ └── logs │ │ │ ├── current.log │ │ │ ├── .insecure.log │ │ │ └── .log │ └── route.openshift.io │ └── routes.yaml └── openshift-operators-redhat ├── ... Run the oc adm must-gather command with one or more --image or --image-stream arguments. For example, the following command gathers both the default cluster data and information specific to KubeVirt: USD oc adm must-gather \ --image-stream=openshift/must-gather \ 1 --image=quay.io/kubevirt/must-gather 2 1 The default OpenShift Container Platform must-gather image 2 The must-gather image for KubeVirt Create a compressed file from the must-gather directory that was just created in your working directory. For example, on a computer that uses a Linux operating system, run the following command: USD tar cvaf must-gather.tar.gz must-gather.local.5421342344627712289/ 1 1 Make sure to replace must-gather-local.5421342344627712289/ with the actual directory name. Attach the compressed file to your support case on the the Customer Support page of the Red Hat Customer Portal. Additional resources Gathering debugging data for the Custom Metrics Autoscaler. Red Hat OpenShift Container Platform Life Cycle Policy 5.1.4. Gathering network logs You can gather network logs on all nodes in a cluster. Procedure Run the oc adm must-gather command with -- gather_network_logs : USD oc adm must-gather -- gather_network_logs Note By default, the must-gather tool collects the OVN nbdb and sbdb databases from all of the nodes in the cluster. Adding the -- gather_network_logs option to include additional logs that contain OVN-Kubernetes transactions for OVN nbdb database. Create a compressed file from the must-gather directory that was just created in your working directory. For example, on a computer that uses a Linux operating system, run the following command: USD tar cvaf must-gather.tar.gz must-gather.local.472290403699006248 1 1 Replace must-gather-local.472290403699006248 with the actual directory name. Attach the compressed file to your support case on the the Customer Support page of the Red Hat Customer Portal. 5.1.5. Changing the must-gather storage limit When using the oc adm must-gather command to collect data the default maximum storage for the information is 30% of the storage capacity of the container. After the 30% limit is reached the container is killed and the gathering process stops. Information already gathered is downloaded to your local storage. To run the must-gather command again, you need either a container with more storage capacity or to adjust the maximum volume percentage. If the container reaches the storage limit, an error message similar to the following example is generated. 
Example output Disk usage exceeds the volume percentage of 30% for mounted directory. Exiting... Prerequisites You have access to the cluster as a user with the cluster-admin role. The OpenShift CLI ( oc ) is installed. Procedure Run the oc adm must-gather command with the volume-percentage flag. The new value cannot exceed 100. USD oc adm must-gather --volume-percentage <storage_percentage> 5.2. Obtaining your cluster ID When providing information to Red Hat Support, it is helpful to provide the unique identifier for your cluster. You can have your cluster ID autofilled by using the OpenShift Container Platform web console. You can also manually obtain your cluster ID by using the web console or the OpenShift CLI ( oc ). Prerequisites You have access to the cluster as a user with the cluster-admin role. You have access to the web console or the OpenShift CLI ( oc ) installed. Procedure To open a support case and have your cluster ID autofilled using the web console: From the toolbar, navigate to (?) Help and select Share Feedback from the list. Click Open a support case from the Tell us about your experience window. To manually obtain your cluster ID using the web console: Navigate to Home Overview . The value is available in the Cluster ID field of the Details section. To obtain your cluster ID using the OpenShift CLI ( oc ), run the following command: USD oc get clusterversion -o jsonpath='{.items[].spec.clusterID}{"\n"}' 5.3. About sosreport sosreport is a tool that collects configuration details, system information, and diagnostic data from Red Hat Enterprise Linux (RHEL) and Red Hat Enterprise Linux CoreOS (RHCOS) systems. sosreport provides a standardized way to collect diagnostic information relating to a node, which can then be provided to Red Hat Support for issue diagnosis. In some support interactions, Red Hat Support may ask you to collect a sosreport archive for a specific OpenShift Container Platform node. For example, it might sometimes be necessary to review system logs or other node-specific data that is not included within the output of oc adm must-gather . 5.4. Generating a sosreport archive for an OpenShift Container Platform cluster node The recommended way to generate a sosreport for an OpenShift Container Platform 4.17 cluster node is through a debug pod. Prerequisites You have access to the cluster as a user with the cluster-admin role. You have SSH access to your hosts. You have installed the OpenShift CLI ( oc ). You have a Red Hat standard or premium Subscription. You have a Red Hat Customer Portal account. You have an existing Red Hat Support case ID. Procedure Obtain a list of cluster nodes: USD oc get nodes Enter into a debug session on the target node. This step instantiates a debug pod called <node_name>-debug : USD oc debug node/my-cluster-node To enter into a debug session on the target node that is tainted with the NoExecute effect, add a toleration to a dummy namespace, and start the debug pod in the dummy namespace: USD oc new-project dummy USD oc patch namespace dummy --type=merge -p '{"metadata": {"annotations": { "scheduler.alpha.kubernetes.io/defaultTolerations": "[{\"operator\": \"Exists\"}]"}}}' USD oc debug node/my-cluster-node Set /host as the root directory within the debug shell. The debug pod mounts the host's root file system in /host within the pod. 
By changing the root directory to /host , you can run binaries contained in the host's executable paths: # chroot /host Note OpenShift Container Platform 4.17 cluster nodes running Red Hat Enterprise Linux CoreOS (RHCOS) are immutable and rely on Operators to apply cluster changes. Accessing cluster nodes by using SSH is not recommended. However, if the OpenShift Container Platform API is not available, or the kubelet is not properly functioning on the target node, oc operations will be impacted. In such situations, it is possible to access nodes using ssh core@<node>.<cluster_name>.<base_domain> instead. Start a toolbox container, which includes the required binaries and plugins to run sosreport : # toolbox Note If an existing toolbox pod is already running, the toolbox command outputs 'toolbox-' already exists. Trying to start... . Remove the running toolbox container with podman rm toolbox- and spawn a new toolbox container, to avoid issues with sosreport plugins. Collect a sosreport archive. Run the sos report command to collect necessary troubleshooting data on crio and podman : # sos report -k crio.all=on -k crio.logs=on -k podman.all=on -k podman.logs=on 1 1 -k enables you to define sosreport plugin parameters outside of the defaults. Optional: To include information on OVN-Kubernetes networking configurations from a node in your report, run the following command: # sos report --all-logs Press Enter when prompted, to continue. Provide the Red Hat Support case ID. sosreport adds the ID to the archive's file name. The sosreport output provides the archive's location and checksum. The following sample output references support case ID 01234567 : Your sosreport has been generated and saved in: /host/var/tmp/sosreport-my-cluster-node-01234567-2020-05-28-eyjknxt.tar.xz 1 The checksum is: 382ffc167510fd71b4f12a4f40b97a4e 1 The sosreport archive's file path is outside of the chroot environment because the toolbox container mounts the host's root directory at /host . Provide the sosreport archive to Red Hat Support for analysis, using one of the following methods. Upload the file to an existing Red Hat support case. Concatenate the sosreport archive by running the oc debug node/<node_name> command and redirect the output to a file. This command assumes you have exited the oc debug session: USD oc debug node/my-cluster-node -- bash -c 'cat /host/var/tmp/sosreport-my-cluster-node-01234567-2020-05-28-eyjknxt.tar.xz' > /tmp/sosreport-my-cluster-node-01234567-2020-05-28-eyjknxt.tar.xz 1 1 The debug container mounts the host's root directory at /host . Reference the absolute path from the debug container's root directory, including /host , when specifying target files for concatenation. Note OpenShift Container Platform 4.17 cluster nodes running Red Hat Enterprise Linux CoreOS (RHCOS) are immutable and rely on Operators to apply cluster changes. Transferring a sosreport archive from a cluster node by using scp is not recommended. However, if the OpenShift Container Platform API is not available, or the kubelet is not properly functioning on the target node, oc operations will be impacted. In such situations, it is possible to copy a sosreport archive from a node by running scp core@<node>.<cluster_name>.<base_domain>:<file_path> <local_path> . Navigate to an existing support case within the Customer Support page of the Red Hat Customer Portal. Select Attach files and follow the prompts to upload the file. 5.5. 
Querying bootstrap node journal logs If you experience bootstrap-related issues, you can gather bootkube.service journald unit logs and container logs from the bootstrap node. Prerequisites You have SSH access to your bootstrap node. You have the fully qualified domain name of the bootstrap node. Procedure Query bootkube.service journald unit logs from a bootstrap node during OpenShift Container Platform installation. Replace <bootstrap_fqdn> with the bootstrap node's fully qualified domain name: USD ssh core@<bootstrap_fqdn> journalctl -b -f -u bootkube.service Note The bootkube.service log on the bootstrap node outputs etcd connection refused errors, indicating that the bootstrap server is unable to connect to etcd on control plane nodes. After etcd has started on each control plane node and the nodes have joined the cluster, the errors should stop. Collect logs from the bootstrap node containers using podman on the bootstrap node. Replace <bootstrap_fqdn> with the bootstrap node's fully qualified domain name: USD ssh core@<bootstrap_fqdn> 'for pod in USD(sudo podman ps -a -q); do sudo podman logs USDpod; done' 5.6. Querying cluster node journal logs You can gather journald unit logs and other logs within /var/log on individual cluster nodes. Prerequisites You have access to the cluster as a user with the cluster-admin role. You have installed the OpenShift CLI ( oc ). Your API service is still functional. You have SSH access to your hosts. Procedure Query kubelet journald unit logs from OpenShift Container Platform cluster nodes. The following example queries control plane nodes only: USD oc adm node-logs --role=master -u kubelet 1 1 Replace kubelet as appropriate to query other unit logs. Collect logs from specific subdirectories under /var/log/ on cluster nodes. Retrieve a list of logs contained within a /var/log/ subdirectory. The following example lists files in /var/log/openshift-apiserver/ on all control plane nodes: USD oc adm node-logs --role=master --path=openshift-apiserver Inspect a specific log within a /var/log/ subdirectory. The following example outputs /var/log/openshift-apiserver/audit.log contents from all control plane nodes: USD oc adm node-logs --role=master --path=openshift-apiserver/audit.log If the API is not functional, review the logs on each node using SSH instead. The following example tails /var/log/openshift-apiserver/audit.log : USD ssh core@<master-node>.<cluster_name>.<base_domain> sudo tail -f /var/log/openshift-apiserver/audit.log Note OpenShift Container Platform 4.17 cluster nodes running Red Hat Enterprise Linux CoreOS (RHCOS) are immutable and rely on Operators to apply cluster changes. Accessing cluster nodes by using SSH is not recommended. Before attempting to collect diagnostic data over SSH, review whether the data collected by running oc adm must gather and other oc commands is sufficient instead. However, if the OpenShift Container Platform API is not available, or the kubelet is not properly functioning on the target node, oc operations will be impacted. In such situations, it is possible to access nodes using ssh core@<node>.<cluster_name>.<base_domain> . 5.7. Network trace methods Collecting network traces, in the form of packet capture records, can assist Red Hat Support with troubleshooting network issues. OpenShift Container Platform supports two ways of performing a network trace. Review the following table and choose the method that meets your needs. Table 5.3. 
Supported methods of collecting a network trace Method Benefits and capabilities Collecting a host network trace You perform a packet capture for a duration that you specify on one or more nodes at the same time. The packet capture files are transferred from nodes to the client machine when the specified duration is met. You can troubleshoot why a specific action triggers network communication issues. Run the packet capture, perform the action that triggers the issue, and use the logs to diagnose the issue. Collecting a network trace from an OpenShift Container Platform node or container You perform a packet capture on one node or one container. You run the tcpdump command interactively, so you can control the duration of the packet capture. You can start the packet capture manually, trigger the network communication issue, and then stop the packet capture manually. This method uses the cat command and shell redirection to copy the packet capture data from the node or container to the client machine. 5.8. Collecting a host network trace Sometimes, troubleshooting a network-related issue is simplified by tracing network communication and capturing packets on multiple nodes at the same time. You can use a combination of the oc adm must-gather command and the registry.redhat.io/openshift4/network-tools-rhel8 container image to gather packet captures from nodes. Analyzing packet captures can help you troubleshoot network communication issues. The oc adm must-gather command is used to run the tcpdump command in pods on specific nodes. The tcpdump command records the packet captures in the pods. When the tcpdump command exits, the oc adm must-gather command transfers the files with the packet captures from the pods to your client machine. Tip The sample command in the following procedure demonstrates performing a packet capture with the tcpdump command. However, you can run any command in the container image that is specified in the --image argument to gather troubleshooting information from multiple nodes at the same time. Prerequisites You are logged in to OpenShift Container Platform as a user with the cluster-admin role. You have installed the OpenShift CLI ( oc ). Procedure Run a packet capture from the host network on some nodes by running the following command: USD oc adm must-gather \ --dest-dir /tmp/captures \// <.> --source-dir '/tmp/tcpdump/' \// <.> --image registry.redhat.io/openshift4/network-tools-rhel8:latest \// <.> --node-selector 'node-role.kubernetes.io/worker' \// <.> --host-network=true \// <.> --timeout 30s \// <.> -- \ tcpdump -i any \// <.> -w /tmp/tcpdump/%Y-%m-%dT%H:%M:%S.pcap -W 1 -G 300 <.> The --dest-dir argument specifies that oc adm must-gather stores the packet captures in directories that are relative to /tmp/captures on the client machine. You can specify any writable directory. <.> When tcpdump is run in the debug pod that oc adm must-gather starts, the --source-dir argument specifies that the packet captures are temporarily stored in the /tmp/tcpdump directory on the pod. <.> The --image argument specifies a container image that includes the tcpdump command. <.> The --node-selector argument and example value specifies to perform the packet captures on the worker nodes. As an alternative, you can specify the --node-name argument instead to run the packet capture on a single node. If you omit both the --node-selector and the --node-name argument, the packet captures are performed on all nodes. 
<.> The --host-network=true argument is required so that the packet captures are performed on the network interfaces of the node. <.> The --timeout argument and value specify to run the debug pod for 30 seconds. If you do not specify the --timeout argument and a duration, the debug pod runs for 10 minutes. <.> The -i any argument for the tcpdump command specifies to capture packets on all network interfaces. As an alternative, you can specify a network interface name. Perform the action, such as accessing a web application, that triggers the network communication issue while the network trace captures packets. Review the packet capture files that oc adm must-gather transferred from the pods to your client machine: tmp/captures ├── event-filter.html ├── ip-10-0-192-217-ec2-internal 1 │ └── registry-redhat-io-openshift4-network-tools-rhel8-sha256-bca... │ └── 2022-01-13T19:31:31.pcap ├── ip-10-0-201-178-ec2-internal 2 │ └── registry-redhat-io-openshift4-network-tools-rhel8-sha256-bca... │ └── 2022-01-13T19:31:30.pcap ├── ip-... └── timestamp 1 2 The packet captures are stored in directories that identify the hostname, container, and file name. If you did not specify the --node-selector argument, then the directory level for the hostname is not present. 5.9. Collecting a network trace from an OpenShift Container Platform node or container When investigating potential network-related OpenShift Container Platform issues, Red Hat Support might request a network packet trace from a specific OpenShift Container Platform cluster node or from a specific container. The recommended method to capture a network trace in OpenShift Container Platform is through a debug pod. Prerequisites You have access to the cluster as a user with the cluster-admin role. You have installed the OpenShift CLI ( oc ). You have an existing Red Hat Support case ID. You have a Red Hat standard or premium Subscription. You have a Red Hat Customer Portal account. You have SSH access to your hosts. Procedure Obtain a list of cluster nodes: USD oc get nodes Enter into a debug session on the target node. This step instantiates a debug pod called <node_name>-debug : USD oc debug node/my-cluster-node Set /host as the root directory within the debug shell. The debug pod mounts the host's root file system in /host within the pod. By changing the root directory to /host , you can run binaries contained in the host's executable paths: # chroot /host Note OpenShift Container Platform 4.17 cluster nodes running Red Hat Enterprise Linux CoreOS (RHCOS) are immutable and rely on Operators to apply cluster changes. Accessing cluster nodes by using SSH is not recommended. However, if the OpenShift Container Platform API is not available, or the kubelet is not properly functioning on the target node, oc operations will be impacted. In such situations, it is possible to access nodes using ssh core@<node>.<cluster_name>.<base_domain> instead. From within the chroot environment console, obtain the node's interface names: # ip ad Start a toolbox container, which includes the required binaries and plugins to run sosreport : # toolbox Note If an existing toolbox pod is already running, the toolbox command outputs 'toolbox-' already exists. Trying to start... . To avoid tcpdump issues, remove the running toolbox container with podman rm toolbox- and spawn a new toolbox container. Initiate a tcpdump session on the cluster node and redirect output to a capture file. 
This example uses ens5 as the interface name: USD tcpdump -nn -s 0 -i ens5 -w /host/var/tmp/my-cluster-node_USD(date +%d_%m_%Y-%H_%M_%S-%Z).pcap 1 1 The tcpdump capture file's path is outside of the chroot environment because the toolbox container mounts the host's root directory at /host . If a tcpdump capture is required for a specific container on the node, follow these steps. Determine the target container ID. The chroot host command precedes the crictl command in this step because the toolbox container mounts the host's root directory at /host : # chroot /host crictl ps Determine the container's process ID. In this example, the container ID is a7fe32346b120 : # chroot /host crictl inspect --output yaml a7fe32346b120 | grep 'pid' | awk '{print USD2}' Initiate a tcpdump session on the container and redirect output to a capture file. This example uses 49628 as the container's process ID and ens5 as the interface name. The nsenter command enters the namespace of a target process and runs a command in its namespace. Because the target process in this example is a container's process ID, the tcpdump command is run in the container's namespace from the host: # nsenter -n -t 49628 -- tcpdump -nn -i ens5 -w /host/var/tmp/my-cluster-node-my-container_USD(date +%d_%m_%Y-%H_%M_%S-%Z).pcap 1 1 The tcpdump capture file's path is outside of the chroot environment because the toolbox container mounts the host's root directory at /host . Provide the tcpdump capture file to Red Hat Support for analysis, using one of the following methods. Upload the file to an existing Red Hat support case. Concatenate the tcpdump capture file by running the oc debug node/<node_name> command and redirect the output to a file. This command assumes you have exited the oc debug session: USD oc debug node/my-cluster-node -- bash -c 'cat /host/var/tmp/my-tcpdump-capture-file.pcap' > /tmp/my-tcpdump-capture-file.pcap 1 1 The debug container mounts the host's root directory at /host . Reference the absolute path from the debug container's root directory, including /host , when specifying target files for concatenation. Note OpenShift Container Platform 4.17 cluster nodes running Red Hat Enterprise Linux CoreOS (RHCOS) are immutable and rely on Operators to apply cluster changes. Transferring a tcpdump capture file from a cluster node by using scp is not recommended. However, if the OpenShift Container Platform API is not available, or the kubelet is not properly functioning on the target node, oc operations will be impacted. In such situations, it is possible to copy a tcpdump capture file from a node by running scp core@<node>.<cluster_name>.<base_domain>:<file_path> <local_path> . Navigate to an existing support case within the Customer Support page of the Red Hat Customer Portal. Select Attach files and follow the prompts to upload the file. 5.10. Providing diagnostic data to Red Hat Support When investigating OpenShift Container Platform issues, Red Hat Support might ask you to upload diagnostic data to a support case. Files can be uploaded to a support case through the Red Hat Customer Portal. Prerequisites You have access to the cluster as a user with the cluster-admin role. You have installed the OpenShift CLI ( oc ). You have SSH access to your hosts. You have a Red Hat standard or premium Subscription. You have a Red Hat Customer Portal account. You have an existing Red Hat Support case ID. Procedure Upload diagnostic data to an existing Red Hat support case through the Red Hat Customer Portal.
Concatenate a diagnostic file contained on an OpenShift Container Platform node by using the oc debug node/<node_name> command and redirect the output to a file. The following example copies /host/var/tmp/my-diagnostic-data.tar.gz from a debug container to /var/tmp/my-diagnostic-data.tar.gz : USD oc debug node/my-cluster-node -- bash -c 'cat /host/var/tmp/my-diagnostic-data.tar.gz' > /var/tmp/my-diagnostic-data.tar.gz 1 1 The debug container mounts the host's root directory at /host . Reference the absolute path from the debug container's root directory, including /host , when specifying target files for concatenation. Note OpenShift Container Platform 4.17 cluster nodes running Red Hat Enterprise Linux CoreOS (RHCOS) are immutable and rely on Operators to apply cluster changes. Transferring files from a cluster node by using scp is not recommended. However, if the OpenShift Container Platform API is not available, or the kubelet is not properly functioning on the target node, oc operations will be impacted. In such situations, it is possible to copy diagnostic files from a node by running scp core@<node>.<cluster_name>.<base_domain>:<file_path> <local_path> . Navigate to an existing support case within the Customer Support page of the Red Hat Customer Portal. Select Attach files and follow the prompts to upload the file. 5.11. About toolbox toolbox is a tool that starts a container on a Red Hat Enterprise Linux CoreOS (RHCOS) system. The tool is primarily used to start a container that includes the required binaries and plugins that are needed to run commands such as sosreport . The primary purpose for a toolbox container is to gather diagnostic information and to provide it to Red Hat Support. However, if additional diagnostic tools are required, you can add RPM packages or run an image that is an alternative to the standard support tools image. Installing packages to a toolbox container By default, running the toolbox command starts a container with the registry.redhat.io/rhel9/support-tools:latest image. This image contains the most frequently used support tools. If you need to collect node-specific data that requires a support tool that is not part of the image, you can install additional packages. Prerequisites You have accessed a node with the oc debug node/<node_name> command. You can access your system as a user with root privileges. Procedure Set /host as the root directory within the debug shell. The debug pod mounts the host's root file system in /host within the pod. By changing the root directory to /host , you can run binaries contained in the host's executable paths: # chroot /host Start the toolbox container: # toolbox Install the additional package, such as wget : # dnf install -y <package_name> Starting an alternative image with toolbox By default, running the toolbox command starts a container with the registry.redhat.io/rhel9/support-tools:latest image. Note You can start an alternative image by creating a .toolboxrc file and specifying the image to run. However, running an older version of the support-tools image, such as registry.redhat.io/rhel8/support-tools:latest , is not supported on OpenShift Container Platform 4.17. Prerequisites You have accessed a node with the oc debug node/<node_name> command. You can access your system as a user with root privileges. Procedure Set /host as the root directory within the debug shell. The debug pod mounts the host's root file system in /host within the pod. 
By changing the root directory to /host , you can run binaries contained in the host's executable paths: # chroot /host Optional: If you need to use an alternative image instead of the default image, create a .toolboxrc file in the home directory for the root user ID, and specify the image metadata: REGISTRY=quay.io 1 IMAGE=fedora/fedora:latest 2 TOOLBOX_NAME=toolbox-fedora-latest 3 1 Optional: Specify an alternative container registry. 2 Specify an alternative image to start. 3 Optional: Specify an alternative name for the toolbox container. Start a toolbox container by entering the following command: # toolbox Note If an existing toolbox pod is already running, the toolbox command outputs 'toolbox-' already exists. Trying to start... . To avoid issues with sosreport plugins, remove the running toolbox container with podman rm toolbox- and then spawn a new toolbox container. | [
"oc adm must-gather --image=registry.redhat.io/container-native-virtualization/cnv-must-gather-rhel9:v4.17.5",
"oc adm must-gather -- /usr/bin/gather_audit_logs",
"NAMESPACE NAME READY STATUS RESTARTS AGE openshift-must-gather-5drcj must-gather-bklx4 2/2 Running 0 72s openshift-must-gather-5drcj must-gather-s8sdh 2/2 Running 0 72s",
"oc adm must-gather --run-namespace <namespace> --image=registry.redhat.io/container-native-virtualization/cnv-must-gather-rhel9:v4.17.5",
"oc import-image is/must-gather -n openshift",
"oc adm must-gather",
"tar cvaf must-gather.tar.gz must-gather.local.5421342344627712289/ 1",
"oc adm must-gather --image-stream=openshift/must-gather \\ 1 --image=registry.redhat.io/container-native-virtualization/cnv-must-gather-rhel9:v4.17.5 2",
"oc adm must-gather --image=USD(oc -n openshift-logging get deployment.apps/cluster-logging-operator -o jsonpath='{.spec.template.spec.containers[?(@.name == \"cluster-logging-operator\")].image}')",
"├── cluster-logging │ ├── clo │ │ ├── cluster-logging-operator-74dd5994f-6ttgt │ │ ├── clusterlogforwarder_cr │ │ ├── cr │ │ ├── csv │ │ ├── deployment │ │ └── logforwarding_cr │ ├── collector │ │ ├── fluentd-2tr64 │ ├── eo │ │ ├── csv │ │ ├── deployment │ │ └── elasticsearch-operator-7dc7d97b9d-jb4r4 │ ├── es │ │ ├── cluster-elasticsearch │ │ │ ├── aliases │ │ │ ├── health │ │ │ ├── indices │ │ │ ├── latest_documents.json │ │ │ ├── nodes │ │ │ ├── nodes_stats.json │ │ │ └── thread_pool │ │ ├── cr │ │ ├── elasticsearch-cdm-lp8l38m0-1-794d6dd989-4jxms │ │ └── logs │ │ ├── elasticsearch-cdm-lp8l38m0-1-794d6dd989-4jxms │ ├── install │ │ ├── co_logs │ │ ├── install_plan │ │ ├── olmo_logs │ │ └── subscription │ └── kibana │ ├── cr │ ├── kibana-9d69668d4-2rkvz ├── cluster-scoped-resources │ └── core │ ├── nodes │ │ ├── ip-10-0-146-180.eu-west-1.compute.internal.yaml │ └── persistentvolumes │ ├── pvc-0a8d65d9-54aa-4c44-9ecc-33d9381e41c1.yaml ├── event-filter.html ├── gather-debug.log └── namespaces ├── openshift-logging │ ├── apps │ │ ├── daemonsets.yaml │ │ ├── deployments.yaml │ │ ├── replicasets.yaml │ │ └── statefulsets.yaml │ ├── batch │ │ ├── cronjobs.yaml │ │ └── jobs.yaml │ ├── core │ │ ├── configmaps.yaml │ │ ├── endpoints.yaml │ │ ├── events │ │ │ ├── elasticsearch-im-app-1596020400-gm6nl.1626341a296c16a1.yaml │ │ │ ├── elasticsearch-im-audit-1596020400-9l9n4.1626341a2af81bbd.yaml │ │ │ ├── elasticsearch-im-infra-1596020400-v98tk.1626341a2d821069.yaml │ │ │ ├── elasticsearch-im-app-1596020400-cc5vc.1626341a3019b238.yaml │ │ │ ├── elasticsearch-im-audit-1596020400-s8d5s.1626341a31f7b315.yaml │ │ │ ├── elasticsearch-im-infra-1596020400-7mgv8.1626341a35ea59ed.yaml │ │ ├── events.yaml │ │ ├── persistentvolumeclaims.yaml │ │ ├── pods.yaml │ │ ├── replicationcontrollers.yaml │ │ ├── secrets.yaml │ │ └── services.yaml │ ├── openshift-logging.yaml │ ├── pods │ │ ├── cluster-logging-operator-74dd5994f-6ttgt │ │ │ ├── cluster-logging-operator │ │ │ │ └── cluster-logging-operator │ │ │ │ └── logs │ │ │ │ ├── current.log │ │ │ │ ├── previous.insecure.log │ │ │ │ └── previous.log │ │ │ └── cluster-logging-operator-74dd5994f-6ttgt.yaml │ │ ├── cluster-logging-operator-registry-6df49d7d4-mxxff │ │ │ ├── cluster-logging-operator-registry │ │ │ │ └── cluster-logging-operator-registry │ │ │ │ └── logs │ │ │ │ ├── current.log │ │ │ │ ├── previous.insecure.log │ │ │ │ └── previous.log │ │ │ ├── cluster-logging-operator-registry-6df49d7d4-mxxff.yaml │ │ │ └── mutate-csv-and-generate-sqlite-db │ │ │ └── mutate-csv-and-generate-sqlite-db │ │ │ └── logs │ │ │ ├── current.log │ │ │ ├── previous.insecure.log │ │ │ └── previous.log │ │ ├── elasticsearch-cdm-lp8l38m0-1-794d6dd989-4jxms │ │ ├── elasticsearch-im-app-1596030300-bpgcx │ │ │ ├── elasticsearch-im-app-1596030300-bpgcx.yaml │ │ │ └── indexmanagement │ │ │ └── indexmanagement │ │ │ └── logs │ │ │ ├── current.log │ │ │ ├── previous.insecure.log │ │ │ └── previous.log │ │ ├── fluentd-2tr64 │ │ │ ├── fluentd │ │ │ │ └── fluentd │ │ │ │ └── logs │ │ │ │ ├── current.log │ │ │ │ ├── previous.insecure.log │ │ │ │ └── previous.log │ │ │ ├── fluentd-2tr64.yaml │ │ │ └── fluentd-init │ │ │ └── fluentd-init │ │ │ └── logs │ │ │ ├── current.log │ │ │ ├── previous.insecure.log │ │ │ └── previous.log │ │ ├── kibana-9d69668d4-2rkvz │ │ │ ├── kibana │ │ │ │ └── kibana │ │ │ │ └── logs │ │ │ │ ├── current.log │ │ │ │ ├── previous.insecure.log │ │ │ │ └── previous.log │ │ │ ├── kibana-9d69668d4-2rkvz.yaml │ │ │ └── kibana-proxy │ │ │ └── kibana-proxy │ │ │ └── logs │ │ │ 
├── current.log │ │ │ ├── previous.insecure.log │ │ │ └── previous.log │ └── route.openshift.io │ └── routes.yaml └── openshift-operators-redhat ├──",
"oc adm must-gather --image-stream=openshift/must-gather \\ 1 --image=quay.io/kubevirt/must-gather 2",
"tar cvaf must-gather.tar.gz must-gather.local.5421342344627712289/ 1",
"oc adm must-gather -- gather_network_logs",
"tar cvaf must-gather.tar.gz must-gather.local.472290403699006248 1",
"Disk usage exceeds the volume percentage of 30% for mounted directory. Exiting",
"oc adm must-gather --volume-percentage <storage_percentage>",
"oc get clusterversion -o jsonpath='{.items[].spec.clusterID}{\"\\n\"}'",
"oc get nodes",
"oc debug node/my-cluster-node",
"oc new-project dummy",
"oc patch namespace dummy --type=merge -p '{\"metadata\": {\"annotations\": { \"scheduler.alpha.kubernetes.io/defaultTolerations\": \"[{\\\"operator\\\": \\\"Exists\\\"}]\"}}}'",
"oc debug node/my-cluster-node",
"chroot /host",
"toolbox",
"sos report -k crio.all=on -k crio.logs=on -k podman.all=on -k podman.logs=on 1",
"sos report --all-logs",
"Your sosreport has been generated and saved in: /host/var/tmp/sosreport-my-cluster-node-01234567-2020-05-28-eyjknxt.tar.xz 1 The checksum is: 382ffc167510fd71b4f12a4f40b97a4e",
"oc debug node/my-cluster-node -- bash -c 'cat /host/var/tmp/sosreport-my-cluster-node-01234567-2020-05-28-eyjknxt.tar.xz' > /tmp/sosreport-my-cluster-node-01234567-2020-05-28-eyjknxt.tar.xz 1",
"ssh core@<bootstrap_fqdn> journalctl -b -f -u bootkube.service",
"ssh core@<bootstrap_fqdn> 'for pod in USD(sudo podman ps -a -q); do sudo podman logs USDpod; done'",
"oc adm node-logs --role=master -u kubelet 1",
"oc adm node-logs --role=master --path=openshift-apiserver",
"oc adm node-logs --role=master --path=openshift-apiserver/audit.log",
"ssh core@<master-node>.<cluster_name>.<base_domain> sudo tail -f /var/log/openshift-apiserver/audit.log",
"oc adm must-gather --dest-dir /tmp/captures \\// <.> --source-dir '/tmp/tcpdump/' \\// <.> --image registry.redhat.io/openshift4/network-tools-rhel8:latest \\// <.> --node-selector 'node-role.kubernetes.io/worker' \\// <.> --host-network=true \\// <.> --timeout 30s \\// <.> -- tcpdump -i any \\// <.> -w /tmp/tcpdump/%Y-%m-%dT%H:%M:%S.pcap -W 1 -G 300",
"tmp/captures ├── event-filter.html ├── ip-10-0-192-217-ec2-internal 1 │ └── registry-redhat-io-openshift4-network-tools-rhel8-sha256-bca │ └── 2022-01-13T19:31:31.pcap ├── ip-10-0-201-178-ec2-internal 2 │ └── registry-redhat-io-openshift4-network-tools-rhel8-sha256-bca │ └── 2022-01-13T19:31:30.pcap ├── ip- └── timestamp",
"oc get nodes",
"oc debug node/my-cluster-node",
"chroot /host",
"ip ad",
"toolbox",
"tcpdump -nn -s 0 -i ens5 -w /host/var/tmp/my-cluster-node_USD(date +%d_%m_%Y-%H_%M_%S-%Z).pcap 1",
"chroot /host crictl ps",
"chroot /host crictl inspect --output yaml a7fe32346b120 | grep 'pid' | awk '{print USD2}'",
"nsenter -n -t 49628 -- tcpdump -nn -i ens5 -w /host/var/tmp/my-cluster-node-my-container_USD(date +%d_%m_%Y-%H_%M_%S-%Z).pcap 1",
"oc debug node/my-cluster-node -- bash -c 'cat /host/var/tmp/my-tcpdump-capture-file.pcap' > /tmp/my-tcpdump-capture-file.pcap 1",
"oc debug node/my-cluster-node -- bash -c 'cat /host/var/tmp/my-diagnostic-data.tar.gz' > /var/tmp/my-diagnostic-data.tar.gz 1",
"chroot /host",
"toolbox",
"dnf install -y <package_name>",
"chroot /host",
"REGISTRY=quay.io 1 IMAGE=fedora/fedora:latest 2 TOOLBOX_NAME=toolbox-fedora-latest 3",
"toolbox"
]
| https://docs.redhat.com/en/documentation/openshift_container_platform/4.17/html/support/gathering-cluster-data |
Part IV. Device Drivers | Part IV. Device Drivers This part provides a comprehensive listing of all device drivers that are new or have been updated in Red Hat Enterprise Linux 7.5. | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/7.5_release_notes/part-red_hat_enterprise_linux-7.5_release_notes-device_drivers |
Chapter 9. Revoking access to a ROSA cluster | Chapter 9. Revoking access to a ROSA cluster An identity provider (IDP) controls access to a Red Hat OpenShift Service on AWS (ROSA) cluster. To revoke access of a user to a cluster, you must configure that within the IDP that was set up for authentication. 9.1. Revoking administrator access using the ROSA CLI You can revoke the administrator access of users so that they can access the cluster without administrator privileges. To remove the administrator access for a user, you must revoke the dedicated-admin or cluster-admin privileges. You can revoke the administrator privileges using the Red Hat OpenShift Service on AWS (ROSA) CLI, rosa , or using OpenShift Cluster Manager console. 9.1.1. Revoking dedicated-admin access using the ROSA CLI You can revoke access for a dedicated-admin user if you are the user who created the cluster, the organization administrator user, or the super administrator user. Prerequisites You have added an Identity Provider (IDP) to your cluster. You have the IDP user name for the user whose privileges you are revoking. You are logged in to the cluster. Procedure Enter the following command to revoke the dedicated-admin access of a user: USD rosa revoke user dedicated-admin --user=<idp_user_name> --cluster=<cluster_name> Enter the following command to verify that your user no longer has dedicated-admin access. The output does not list the revoked user. USD oc get groups dedicated-admins 9.1.2. Revoking cluster-admin access using the ROSA CLI Only the user who created the cluster can revoke access for cluster-admin users. Prerequisites You have added an Identity Provider (IDP) to your cluster. You have the IDP user name for the user whose privileges you are revoking. You are logged in to the cluster. Procedure Enter the following command to revoke the cluster-admin access of a user: USD rosa revoke user cluster-admins --user=myusername --cluster=mycluster Enter the following command to verify that the user no longer has cluster-admin access. The output does not list the revoked user. USD oc get groups cluster-admins 9.2. Revoking administrator access using OpenShift Cluster Manager console You can revoke the dedicated-admin or cluster-admin access of users through OpenShift Cluster Manager console. Users will be able to access the cluster without administrator privileges. Prerequisites You have added an Identity Provider (IDP) to your cluster. You have the IDP user name for the user whose privileges you are revoking. You are logged in to OpenShift Cluster Manager console using an OpenShift Cluster Manager account that you used to create the cluster, the organization administrator user, or the super administrator user. Procedure On the Cluster List tab of OpenShift Cluster Manager, select the name of your cluster to view the cluster details. Select Access control > Cluster Roles and Access . For the user that you want to remove, click the Options menu to the right of the user and group combination and click Delete . | [
"rosa revoke user dedicated-admin --user=<idp_user_name> --cluster=<cluster_name>",
"oc get groups dedicated-admins",
"rosa revoke user cluster-admins --user=myusername --cluster=mycluster",
"oc get groups cluster-admins"
]
| https://docs.redhat.com/en/documentation/red_hat_openshift_service_on_aws/4/html/install_rosa_classic_clusters/rosa-sts-deleting-access-cluster |
Chapter 3. Preparing for your AMQ Streams deployment | Chapter 3. Preparing for your AMQ Streams deployment This section shows how you prepare for a AMQ Streams deployment, describing: The prerequisites you need before you can deploy AMQ Streams How to download the AMQ Streams release artifacts to use in your deployment How to authenticate with the Red Hat registry for Kafka Connect Source-to-Image (S2I) builds (if required) How to push the AMQ Streams container images into your own registry (if required) How to set up admin roles for configuration of custom resources used in deployment Note To run the commands in this guide, your cluster user must have the rights to manage role-based access control (RBAC) and CRDs. 3.1. Deployment prerequisites To deploy AMQ Streams, make sure: An OpenShift 4.6 and later cluster is available AMQ Streams is based on AMQ Streams Strimzi 0.22.x. The oc command-line tool is installed and configured to connect to the running cluster. Note AMQ Streams supports some features that are specific to OpenShift, where such integration benefits OpenShift users and there is no equivalent implementation using standard OpenShift. 3.2. Downloading AMQ Streams release artifacts To install AMQ Streams, download and extract the release artifacts from the amq-streams- <version> -ocp-install-examples.zip file from the AMQ Streams download site . AMQ Streams release artifacts include sample YAML files to help you deploy the components of AMQ Streams to OpenShift, perform common operations, and configure your Kafka cluster. Use oc to deploy the Cluster Operator from the install/cluster-operator folder of the downloaded ZIP file. For more information about deploying and configuring the Cluster Operator, see Section 5.1.1, "Deploying the Cluster Operator" . In addition, if you want to use standalone installations of the Topic and User Operators with a Kafka cluster that is not managed by the AMQ Streams Cluster Operator, you can deploy them from the install/topic-operator and install/user-operator folders. Note Additionally, AMQ Streams container images are available through the Red Hat Ecosystem Catalog . However, we recommend that you use the YAML files provided to deploy AMQ Streams. 3.3. Authenticating with the container registry for Kafka Connect S2I You need to configure authentication with the Red Hat container registry ( registry.redhat.io ) before creating a container image using OpenShift builds and Source-to-Image (S2I) . The container registry is used to store AMQ Streams container images on the Red Hat Ecosystem Catalog . The Catalog contains a Kafka Connect builder image with S2I support. The OpenShift build pulls this builder image, together with your source code and binaries, and uses it to build the new container image. Note Authentication with the Red Hat container registry is only required if using Kafka Connect S2I. It is not required for the other AMQ Streams components. Prerequisites Cluster administrator access to an OpenShift Container Platform cluster. Login details for your Red Hat Customer Portal account. See Appendix A, Using your subscription . Procedure If needed, log in to your OpenShift cluster as an administrator: oc login --user system:admin --token=my-token --server=https://my-cluster.example.com:6443 Open the project that will contain the Kafka Connect S2I cluster: oc project CLUSTER-NAME Note You might have already deployed the Kafka Connect S2I cluster . 
Create a docker-registry secret using your Red Hat Customer Portal account, replacing PULL-SECRET-NAME with the secret name to create: oc create secret docker-registry PULL-SECRET-NAME \ --docker-server=registry.redhat.io \ --docker-username= CUSTOMER-PORTAL-USERNAME \ --docker-password= CUSTOMER-PORTAL-PASSWORD \ --docker-email= EMAIL-ADDRESS You should see the following output: secret/ PULL-SECRET-NAME created Important You must create this docker-registry secret in every OpenShift project that will authenticate to registry.redhat.io . Link the secret to your service account to use the secret for pulling images. The service account name must match the name that the OpenShift pod uses. oc secrets link SERVICE-ACCOUNT-NAME PULL-SECRET-NAME --for=pull For example, using the default service account and a secret named my-secret : oc secrets link default my-secret --for=pull Link the secret to the builder service account to use the secret for pushing and pulling build images: oc secrets link builder PULL-SECRET-NAME Note If you do not want to use your Red Hat username and password to create the pull secret, you can create an authentication token using a registry service account. Additional resources Section 5.2.3.3, "Creating a container image using OpenShift builds and Source-to-Image" Red Hat Container Registry authentication (Red Hat Knowledgebase) Registry Service Accounts on the Red Hat Customer Portal 3.4. Pushing container images to your own registry Container images for AMQ Streams are available in the Red Hat Ecosystem Catalog . The installation YAML files provided by AMQ Streams will pull the images directly from the Red Hat Ecosystem Catalog . If you do not have access to the Red Hat Ecosystem Catalog or want to use your own container repository: Pull all container images listed here Push them into your own registry Update the image names in the installation YAML files Note Each Kafka version supported for the release has a separate image. Container image Namespace/Repository Description Kafka registry.redhat.io/amq7/amq-streams-kafka-27-rhel7:1.7.0 registry.redhat.io/amq7/amq-streams-kafka-26-rhel7:1.7.0 AMQ Streams image for running Kafka, including: Kafka Broker Kafka Connect / S2I Kafka Mirror Maker ZooKeeper TLS Sidecars Operator registry.redhat.io/amq7/amq-streams-rhel7-operator:1.7.0 AMQ Streams image for running the operators: Cluster Operator Topic Operator User Operator Kafka Initializer Kafka Bridge registry.redhat.io/amq7/amq-streams-bridge-rhel7:1.7.0 AMQ Streams image for running the AMQ Streams Kafka Bridge 3.5. Designating AMQ Streams administrators AMQ Streams provides custom resources for configuration of your deployment. By default, permission to view, create, edit, and delete these resources is limited to OpenShift cluster administrators. AMQ Streams provides two cluster roles that you can use to assign these rights to other users: strimzi-view allows users to view and list AMQ Streams resources. strimzi-admin allows users to also create, edit or delete AMQ Streams resources. When you install these roles, they will automatically aggregate (add) these rights to the default OpenShift cluster roles. strimzi-view aggregates to the view role, and strimzi-admin aggregates to the edit and admin roles. Because of the aggregation, you might not need to assign these roles to users who already have similar rights. The following procedure shows how to assign a strimzi-admin role that allows non-cluster administrators to manage AMQ Streams resources. 
A system administrator can designate AMQ Streams administrators after the Cluster Operator is deployed. Prerequisites The AMQ Streams Custom Resource Definitions (CRDs) and role-based access control (RBAC) resources to manage the CRDs have been deployed with the Cluster Operator . Procedure Create the strimzi-view and strimzi-admin cluster roles in OpenShift. oc create -f install/strimzi-admin If needed, assign the roles that provide access rights to users that require them. oc create clusterrolebinding strimzi-admin --clusterrole=strimzi-admin --user= user1 --user= user2 | [
"login --user system:admin --token=my-token --server=https://my-cluster.example.com:6443",
"project CLUSTER-NAME",
"create secret docker-registry PULL-SECRET-NAME --docker-server=registry.redhat.io --docker-username= CUSTOMER-PORTAL-USERNAME --docker-password= CUSTOMER-PORTAL-PASSWORD --docker-email= EMAIL-ADDRESS",
"secret/ PULL-SECRET-NAME created",
"secrets link SERVICE-ACCOUNT-NAME PULL-SECRET-NAME --for=pull",
"secrets link default my-secret --for=pull",
"secrets link builder PULL-SECRET-NAME",
"create -f install/strimzi-admin",
"create clusterrolebinding strimzi-admin --clusterrole=strimzi-admin --user= user1 --user= user2"
]
| https://docs.redhat.com/en/documentation/red_hat_amq/2021.q2/html/deploying_and_upgrading_amq_streams_on_openshift/deploy-tasks-prereqs_str |
Chapter 25. Kernel | Chapter 25. Kernel kernel component, BZ#1019091 The following RAID controller cards are no longer supported. However, the aacraid driver still detects them. Thus, they are marked as not supported in the dmesg output. PERC 2/Si (Iguana/PERC2Si) PERC 3/Di (Opal/PERC3Di) PERC 3/Si (SlimFast/PERC3Si) PERC 3/Di (Iguana FlipChip/PERC3DiF) PERC 3/Di (Viper/PERC3DiV) PERC 3/Di (Lexus/PERC3DiL) PERC 3/Di (Jaguar/PERC3DiJ) PERC 3/Di (Dagger/PERC3DiD) PERC 3/Di (Boxster/PERC3DiB) Adaptec 2120S (Crusader) Adaptec 2200S (Vulcan) Adaptec 2200S (Vulcan-2m) Legend S220 (Legend Crusader) Legend S230 (Legend Vulcan) Adaptec 3230S (Harrier) Adaptec 3240S (Tornado) ASR-2020ZCR SCSI PCI-X ZCR (Skyhawk) ASR-2025ZCR SCSI SO-DIMM PCI-X ZCR (Terminator) ASR-2230S + ASR-2230SLP PCI-X (Lancer) ASR-2130S (Lancer) AAR-2820SA (Intruder) AAR-2620SA (Intruder) AAR-2420SA (Intruder) ICP9024RO (Lancer) ICP9014RO (Lancer) ICP9047MA (Lancer) ICP9087MA (Lancer) ICP5445AU (Hurricane44) ICP9085LI (Marauder-X) ICP5085BR (Marauder-E) ICP9067MA (Intruder-6) Themisto Jupiter Platform Callisto Jupiter Platform ASR-2020SA SATA PCI-X ZCR (Skyhawk) ASR-2025SA SATA SO-DIMM PCI-X ZCR (Terminator) AAR-2410SA PCI SATA 4ch (Jaguar II) CERC SATA RAID 2 PCI SATA 6ch (DellCorsair) AAR-2810SA PCI SATA 8ch (Corsair-8) AAR-21610SA PCI SATA 16ch (Corsair-16) ESD SO-DIMM PCI-X SATA ZCR (Prowler) AAR-2610SA PCI SATA 6ch ASR-2240S (SabreExpress) ASR-4005 ASR-4800SAS (Marauder-X) ASR-4805SAS (Marauder-E) ASR-3800 (Hurricane44) Adaptec 5400S (Mustang) Dell PERC2/QC HP NetRAID-4M The following cards detected by aacraid are also no longer supported but they are not identified as not supported in the dmesg output: IBM 8i (AvonPark) IBM 8i (AvonPark Lite) IBM 8k/8k-l8 (Aurora) IBM 8k/8k-l4 (Aurora Lite) Warning Note that the Kdump mechanism might not work properly on the aforementioned RAID controllers. kernel component, BZ#1061210 When the hpsa_allow_any option is used, the hpsa driver allows the use of PCI IDs that are not listed in the driver's pci-id table. Thus, cards detected when this option is used, are not supported in Red Hat Enterprise Linux 7. kernel component, BZ#975791 The following cciss controllers are no longer supported: Smart Array 5300 Smart Array 5i Smart Array 532 Smart Array 5312 Smart Array 641 Smart Array 642 Smart Array 6400 Smart Array 6400 EM Smart Array 6i Smart Array P600 Smart Array P800 Smart Array P400 Smart Array P400i Smart Array E200i Smart Array E200 Smart Array E500 Smart Array P700M kernel component, BZ# 1055089 The systemd service does not spawn the getty tool on the /dev/hvc0/ virtio console if the virtio console driver is not found before loading kernel modules at system startup. As a consequence, a TTY terminal does not start automatically after the system boot when the system is running as a KVM guest. To work around this problem, start getty on /dev/hvc0/ after the system boot. The ISA serial device, which is used more commonly, works as expected. kernel component, BZ#1060565 A previously applied patch is causing a memory leak when creating symbolic links over NFS. Consequently, if creating a very large number of symbolic links, on a scale of hundreds of thousands, the system may report the out of memory status. kernel component, BZ#1097468 The Linux kernel Non-Uniform Memory Access (NUMA) balancing does not always work correctly in Red Hat Enterprise Linux 7. 
As a consequence, when the numa_balancing parameter is set, some of the memory can move to an arbitrary non-destination node before moving to the constrained nodes, and the memory on the destination node also decreases under certain circumstances. There is currently no known workaround available. kernel component, BZ#915855 The QLogic 1G iSCSI Adapter present in the system can cause a call trace error when the qla4xx driver is sharing the interrupt line with the USB sub-system. This error has no impact on the system functionality. The error can be found in the kernel log messages located in the /var/log/messages file. To prevent the call trace from logging into the kernel log messages, add the nousb kernel parameter when the system is booting. system-config-kdump component, BZ#1077470 In the Kernel Dump Configuration window, selecting the Raw device option in the Target settings tab does not work. To work around this problem, edit the kdump.conf file manually. kernel component, BZ#1087796 An attempt to remove the bnx2x module while the bnx2fc driver is processing a corrupted frame causes a kernel panic. To work around this problem, shut down any active FCoE interfaces before executing the modprobe -r bnx2x command. kexec-tools component, BZ#1089788 Due to a wrong buffer size calculation in the makedumpfile utility, an OOM error could occur with a high probability. As a consequence, the vmcore file cannot be captured under certain circumstances. No workaround is currently available. | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/7.0_release_notes/known-issues-kernel |
Part VI. Monitoring and Automation | Part VI. Monitoring and Automation This part describes various tools that allow system administrators to monitor system performance, automate system tasks, and report bugs. | null | https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/system_administrators_guide/part-monitoring_and_automation |
function::MKDEV | function::MKDEV Name function::MKDEV - Creates a value that can be compared to a kernel device number (kdev_t) Synopsis Arguments major Intended major device number. minor Intended minor device number. | [
"MKDEV:long(major:long,minor:long)"
]
| https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/systemtap_tapset_reference/api-mkdev |
2.3. Fencing | 2.3. Fencing You must configure each GFS node in your Red Hat cluster for at least one form of fencing. Fencing is configured and managed in Red Hat Cluster Suite. For more information about fencing options, refer to Configuring and Managing a Red Hat Cluster . | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/4/html/global_file_system/s1-sysreq-iofence |
Chapter 5. Multi-site configuration and administration | Chapter 5. Multi-site configuration and administration As a storage administrator, you can configure and administer multiple Ceph Object Gateways for a variety of use cases. You can learn what to do during a disaster recovery and failover events. Also, you can learn more about realms, zones, and syncing policies in multi-site Ceph Object Gateway environments. A single zone configuration typically consists of one zone group containing one zone and one or more ceph-radosgw instances where you may load-balance gateway client requests between the instances. In a single zone configuration, typically multiple gateway instances point to a single Ceph storage cluster. However, Red Hat supports several multi-site configuration options for the Ceph Object Gateway: Multi-zone: A more advanced configuration consists of one zone group and multiple zones, each zone with one or more ceph-radosgw instances. Each zone is backed by its own Ceph Storage Cluster. Multiple zones in a zone group provides disaster recovery for the zone group should one of the zones experience a significant failure. Each zone is active and may receive write operations. In addition to disaster recovery, multiple active zones may also serve as a foundation for content delivery networks. Multi-zone-group: Formerly called 'regions', the Ceph Object Gateway can also support multiple zone groups, each zone group with one or more zones. Objects stored to zone groups within the same realm share a global namespace, ensuring unique object IDs across zone groups and zones. Multiple Realms: The Ceph Object Gateway supports the notion of realms, which can be a single zone group or multiple zone groups and a globally unique namespace for the realm. Multiple realms provides the ability to support numerous configurations and namespaces. Warning If you have a Red Hat Ceph Storage 6 cluster with multi-site configured, do not upgrade to the latest version of 6.1.z1 as there are issues with data corruption on encrypted objects when objects replicate to the disaster recovery (DR) site. Prerequisites A healthy running Red Hat Ceph Storage cluster. Deployment of the Ceph Object Gateway software. 5.1. Requirements and Assumptions A multi-site configuration requires at least two Ceph storage clusters, and At least two Ceph object gateway instances, one for each Ceph storage cluster. This guide assumes at least two Ceph storage clusters in geographically separate locations; however, the configuration can work on the same physical site. This guide also assumes four Ceph object gateway servers named rgw1 , rgw2 , rgw3 and rgw4 respectively. A multi-site configuration requires a master zone group and a master zone. Additionally, each zone group requires a master zone. Zone groups might have one or more secondary or non-master zones. Important When planning network considerations for multi-site, it is important to understand the relation bandwidth and latency observed on the multi-site synchronization network and the clients ingest rate in direct correlation with the current sync state of the objects owed to the secondary site. The network link between Red Hat Ceph Storage multi-site clusters must be able to handle the ingest into the primary cluster to maintain an effective recovery time on the secondary site. Multi-site synchronization is asynchronous and one of the limitations is the rate at which the sync gateways can process data across the link. 
An example to look at in terms of network inter-connectivity speed could be 1 GbE or inter-datacenter connectivity, for every 8 TB of cumulative receive data, per client gateway. Thus, if you replicate to two other sites, and ingest 16 TB a day, you need 6 GbE of dedicated bandwidth for multi-site replication. Red Hat also recommends private Ethernet or Dense wavelength-division multiplexing (DWDM), as a VPN over the internet is not ideal due to the additional overhead incurred. Important The master zone within the master zone group of a realm is responsible for storing the master copy of the realm's metadata, including users, quotas and buckets (created by the radosgw-admin CLI). This metadata gets synchronized to secondary zones and secondary zone groups automatically. Metadata operations executed with the radosgw-admin CLI MUST be executed on a host within the master zone of the master zone group in order to ensure that they get synchronized to the secondary zone groups and zones. Currently, it is possible to execute metadata operations on secondary zones and zone groups, but it is NOT recommended because they WILL NOT be synchronized, leading to fragmented metadata. In the following examples, the rgw1 host will serve as the master zone of the master zone group; the rgw2 host will serve as the secondary zone of the master zone group; the rgw3 host will serve as the master zone of the secondary zone group; and the rgw4 host will serve as the secondary zone of the secondary zone group. Important When you have a large cluster with more Ceph Object Gateways configured in a multi-site storage cluster, Red Hat recommends dedicating no more than three sync-enabled Ceph Object Gateways per site for multi-site synchronization. If there are more than three syncing Ceph Object Gateways, the sync rate shows diminishing returns in terms of performance, and the increased contention creates an incremental risk of hitting timing-related error conditions. This is due to a sync-fairness known issue BZ#1740782 . For the rest of the Ceph Object Gateways in such a configuration, which are dedicated to client I/O operations through load balancers, run the ceph config set client.rgw. CLIENT_NODE rgw_run_sync_thread false command to prevent them from performing sync operations, and then restart the Ceph Object Gateway. Following is a typical configuration file for HAProxy for syncing gateways: Example 5.2. Pools Red Hat recommends using the Ceph Placement Group's per Pool Calculator to calculate a suitable number of placement groups for the pools the radosgw daemon will create. Set the calculated values as defaults in the Ceph configuration database. Example Note Making this change to the Ceph configuration will use those defaults when the Ceph Object Gateway instance creates the pools. Alternatively, you can create the pools manually. Pool names particular to a zone follow the naming convention ZONE_NAME . POOL_NAME . For example, a zone named us-east will have the following pools: .rgw.root us-east.rgw.control us-east.rgw.meta us-east.rgw.log us-east.rgw.buckets.index us-east.rgw.buckets.data us-east.rgw.buckets.non-ec us-east.rgw.meta:users.keys us-east.rgw.meta:users.email us-east.rgw.meta:users.swift us-east.rgw.meta:users.uid Additional Resources See the Pools chapter in the Red Hat Ceph Storage Storage Strategies Guide for details on creating pools. 5.3.
Migrating a single site system to multi-site To migrate from a single site system with a default zone group and zone to a multi-site system, use the following steps: Create a realm. Replace REALM_NAME with the realm name. Syntax Rename the default zone and zonegroup. Replace NEW_ZONE_GROUP_NAME and NEW_ZONE_NAME with the zonegroup and zone name respectively. Syntax Rename the default zonegroup's api_name . Replace NEW_ZONE_GROUP_NAME with the zonegroup name. The api_name field in the zonegroup map refers to the name of the RADOS API used for data replication across different zones. This field helps clients interact with the correct APIs for accessing and managing data within the Ceph storage cluster. Syntax Configure the primary zonegroup. Replace NEW_ZONE_GROUP_NAME with the zonegroup name and REALM_NAME with the realm name. Replace ENDPOINT with the fully qualified domain names in the zonegroup. Syntax Configure the primary zone. Replace REALM_NAME with the realm name, NEW_ZONE_GROUP_NAME with the zonegroup name, NEW_ZONE_NAME with the zone name, and ENDPOINT with the fully qualified domain names in the zonegroup. Syntax Create a system user. Replace USER_ID with the username. Replace DISPLAY_NAME with a display name. It can contain spaces. Syntax Commit the updated configuration: Example Restart the Ceph Object Gateway: Example 5.4. Establishing a secondary zone Zones within a zone group replicate all data to ensure that each zone has the same data. When creating the secondary zone, issue ALL of the radosgw-admin zone operations on a host identified to serve the secondary zone. Note To add additional zones, follow the same procedure as for adding the secondary zone. Use a different zone name. Important Run the metadata operations, such as user creation and quotas, on a host within the master zone of the master zonegroup. The master zone and the secondary zone can receive bucket operations from the RESTful APIs, but the secondary zone redirects bucket operations to the master zone. If the master zone is down, bucket operations will fail. If you create a bucket using the radosgw-admin CLI, you must run it on a host within the master zone of the master zone group so that the buckets will synchronize with other zone groups and zones. Bucket creation for a particular user is not supported, even if you create a user in the secondary zone with --yes-i-really-mean-it . Prerequisites At least two running Red Hat Ceph Storage clusters. At least two Ceph Object Gateway instances, one for each Red Hat Ceph Storage cluster. Root-level access to all the nodes. Nodes or containers are added to the storage cluster. All Ceph Manager, Monitor, and OSD daemons are deployed. Procedure Log into the cephadm shell: Example Pull the primary realm configuration from the host: Syntax Example Pull the primary period configuration from the host: Syntax Example Configure a secondary zone: Note All zones run in an active-active configuration by default; that is, a gateway client might write data to any zone and the zone will replicate the data to all other zones within the zone group. If the secondary zone should not accept write operations, specify the --read-only flag to create an active-passive configuration between the master zone and the secondary zone. Additionally, provide the access_key and secret_key of the generated system user stored in the master zone of the master zone group.
Syntax Example Optional: Delete the default zone: Important Do not delete the default zone and its pools if you are using the default zone and zone group to store data. Example Update the Ceph configuration database: Syntax Example Commit the changes: Syntax Example Outside the cephadm shell, fetch the FSID of the storage cluster and the processes: Example Start the Ceph Object Gateway daemon: Syntax Example 5.5. Configuring the archive zone (Technology Preview) Important Archive zone is a Technology Preview feature only for Red Hat Ceph Storage 7.0. Technology Preview features are not supported with Red Hat production service level agreements (SLAs), might not be functionally complete, and Red Hat does not recommend using them for production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. See the support scope for Red Hat Technology Preview features for more details. Note Ensure you have a realm before configuring a zone as an archive. Without a realm, you cannot archive data through an archive zone for default zone/zonegroups. Archive object data residing on Red Hat Ceph Storage using the object storage archive zone feature. The archive zone uses the multi-site replication and S3 object versioning features in the Ceph Object Gateway. The archive zone retains all versions of all the objects available, even when they are deleted in the production site. The archive zone has a history of versions of S3 objects that can only be eliminated through the gateways that are associated with the archive zone. It captures all the data updates and metadata to consolidate them as versions of S3 objects. Bucket granular replication to the archive zone can be used after creating an archive zone. You can control the storage space usage of an archive zone through the bucket lifecycle policies, where you can define the number of versions you would like to keep for an object. An archive zone helps protect your data against logical or physical errors. It can save users from logical failures, such as accidentally deleting a bucket in the production zone. It can also save your data from massive hardware failures, like a complete production site failure. Additionally, it provides an immutable copy, which can help build a ransomware protection strategy. To implement the bucket granular replication, use the sync policies commands for enabling and disabling policies. See Creating a sync policy group and Modifying a sync policy group for more information. Note Using the sync policy group procedures is optional, and only necessary for enabling and disabling with bucket granular replication. For using the archive zone without bucket granular replication, it is not necessary to use the sync policy procedures. If you want to migrate the storage cluster from a single site, see Migrating a single site system to multi-site . Prerequisites A running Red Hat Ceph Storage cluster. Root-level access to a Ceph Monitor node. Installation of the Ceph Object Gateway software. Procedure During new zone creation, use the archive tier to configure the archive zone. Syntax Example Additional resources See the Deploying a multi-site Ceph Object Gateway using the Ceph Orchestrator section in the Red Hat Ceph Storage Object Gateway Guide for more details. 5.5.1. Deleting objects in archive zone You can use an S3 lifecycle policy extension to delete objects within an <ArchiveZone> element.
Important Archive zone objects can only be deleted using the expiration lifecycle policy rule. If any <Rule> section contains an <ArchiveZone> element, that rule executes in the archive zone, and such rules are the ONLY rules which run in an archive zone. Rules marked <ArchiveZone> do NOT execute in non-archive zones. The rules within the lifecycle policy determine when and what objects to delete. For more information about lifecycle creation and management, see Bucket lifecycle . Prerequisites A running Red Hat Ceph Storage cluster. Root-level access to a Ceph Monitor node. Installation of the Ceph Object Gateway software. Procedure Set the <ArchiveZone> lifecycle policy rule. For more information about creating a lifecycle policy, see the Creating a lifecycle management policy section in the Red Hat Ceph Storage Object Gateway Guide for more details. Example Optional: See if a specific lifecycle policy contains an archive zone rule. Syntax Example 1 1 The archive zone rule. This is an example of a lifecycle policy with an archive zone rule. If the Ceph Object Gateway user is deleted, the buckets at the archive site owned by that user are inaccessible. Link those buckets to another Ceph Object Gateway user to access the data. Syntax Example Additional resources See the Bucket lifecycle section in the Red Hat Ceph Storage Object Gateway Guide for more details. See the S3 bucket lifecycle section in the Red Hat Ceph Storage Developer Guide for more details. 5.6. Failover and disaster recovery If the primary zone fails, failover to the secondary zone for disaster recovery. Prerequisites A running Red Hat Ceph Storage cluster. Root-level access to a Ceph Monitor node. Installation of the Ceph Object Gateway software. Procedure Make the secondary zone the primary and default zone. For example: Syntax By default, Ceph Object Gateway runs in an active-active configuration. If the cluster was configured to run in an active-passive configuration, the secondary zone is a read-only zone. Remove the --read-only status to allow the zone to receive write operations. For example: Syntax Update the period to make the changes take effect: Example Restart the Ceph Object Gateway. Note Use the output from the ceph orch ps command, under the NAME column, to get the SERVICE_TYPE . ID information. To restart the Ceph Object Gateway on an individual node in the storage cluster: Syntax Example To restart the Ceph Object Gateways on all nodes in the storage cluster: Syntax Example If the former primary zone recovers, revert the operation. From the recovered zone, pull the realm from the current primary zone: Syntax Make the recovered zone the primary and default zone: Syntax Update the period to make the changes take effect: Example Restart the Ceph Object Gateway in the recovered zone: Syntax Example If the secondary zone needs to be a read-only configuration, update the secondary zone: Syntax Update the period to make the changes take effect: Example Restart the Ceph Object Gateway in the secondary zone: Syntax Example 5.7. Configuring multiple realms in the same storage cluster You can configure multiple realms in the same storage cluster. This is a more advanced use case for multi-site. Configuring multiple realms in the same storage cluster enables you to use a local realm to handle local Ceph Object Gateway client traffic, as well as a replicated realm for data that will be replicated to a secondary site. Note Red Hat recommends that each realm has its own Ceph Object Gateway.
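Because several realms coexist in the same storage cluster in this configuration, most radosgw-admin operations need to name the realm they act on. The following is a minimal sketch, not part of the official procedure, showing how a metadata operation such as user creation is directed at one realm or the other with the --rgw-realm option; it assumes the example realm names used in the procedure below ( ldc1 for a local realm and rdc1 for the replicated realm), and the user IDs are illustrative only.

# Illustrative only: create a user in the local realm ldc1
radosgw-admin user create --uid="local-user" --display-name="Local User" --rgw-realm=ldc1
# Illustrative only: create a user in the replicated realm rdc1
radosgw-admin user create --uid="replicated-user" --display-name="Replicated User" --rgw-realm=rdc1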
Prerequisites Two running Red Hat Ceph Storage data centers in a storage cluster. The access key and secret key for each data center in the storage cluster. Root-level access to all the Ceph Object Gateway nodes. Each data center has its own local realm. They share a realm that replicates on both sites. Procedure Create one local realm on the first data center in the storage cluster: Syntax Example Create one local master zonegroup on the first data center: Syntax Example Create one local zone on the first data center: Syntax Example Commit the period: Example You can either deploy the Ceph Object Gateway daemons with the appropriate realm and zone or update the configuration database: Deploy the Ceph Object Gateway using placement specification: Syntax Example Update the Ceph configuration database: Syntax Example Restart the Ceph Object Gateway. Note Use the output from the ceph orch ps command, under the NAME column, to get the SERVICE_TYPE . ID information. To restart the Ceph Object Gateway on an individual node in the storage cluster: Syntax Example To restart the Ceph Object Gateways on all nodes in the storage cluster: Syntax Example Create one local realm on the second data center in the storage cluster: Syntax Example Create one local master zonegroup on the second data center: Syntax Example Create one local zone on the second data center: Syntax Example Commit the period: Example You can either deploy the Ceph Object Gateway daemons with the appropriate realm and zone or update the configuration database: Deploy the Ceph Object Gateway using placement specification: Syntax Example Update the Ceph configuration database: Syntax Example Restart the Ceph Object Gateway. Note Use the output from the ceph orch ps command, under the NAME column, to get the SERVICE_TYPE . ID information. To restart the Ceph Object Gateway on an individual node in the storage cluster: Syntax Example To restart the Ceph Object Gateways on all nodes in the storage cluster: Syntax Example Create a replicated realm on the first data center in the storage cluster: Syntax Example Use the --default flag to make the replicated realm the default on the primary site. Create a master zonegroup for the first data center: Syntax Example Create a master zone on the first data center: Syntax Example Create a synchronization user and add the system user to the master zone for multi-site: Syntax Example Commit the period: Example You can either deploy the Ceph Object Gateway daemons with the appropriate realm and zone or update the configuration database: Deploy the Ceph Object Gateway using placement specification: Syntax Example Update the Ceph configuration database: Syntax Example Restart the Ceph Object Gateway. Note Use the output from the ceph orch ps command, under the NAME column, to get the SERVICE_TYPE . ID information. To restart the Ceph Object Gateway on an individual node in the storage cluster: Syntax Example To restart the Ceph Object Gateways on all nodes in the storage cluster: Syntax Example Pull the replicated realm on the second data center: Syntax Example Pull the period from the first data center: Syntax Example Create the secondary zone on the second data center: Syntax Example Commit the period: Example You can either deploy the Ceph Object Gateway daemons with the appropriate realm and zone or update the configuration database: Deploy the Ceph Object Gateway using placement specification: Syntax Example Update the Ceph configuration database: Syntax Example Restart the Ceph Object Gateway.
Note Use the output from the ceph orch ps command, under the NAME column, to get the SERVICE_TYPE . ID information. To restart the Ceph Object Gateway on an individual node in the storage cluster: Syntax Example To restart the Ceph Object Gateways on all nodes in the storage cluster: Syntax Example Log in as root on the endpoint for the second data center. Verify the synchronization status on the master realm: Syntax Example Log in as root on the endpoint for the first data center. Verify the synchronization status for the replication-synchronization realm: Syntax Example To store and access data in the local site, create the user for the local realm: Syntax Example Important By default, users are created under the default realm. For the users to access data in the local realm, the radosgw-admin command requires the --rgw-realm argument. 5.8. Using multi-site sync policies As a storage administrator, you can use multi-site sync policies at the bucket level to control data movement between buckets in different zones. These policies are called bucket-granularity sync policies. Previously, all buckets within zones were treated symmetrically. This means that each zone contained a mirror copy of a given bucket, and the copies of buckets were identical in all of the zones. The sync process assumed that the bucket sync source and the bucket sync destination referred to the same bucket. Important Bucket sync policies apply to data only, and metadata is synced across all the zones in the multi-site irrespective of the presence of the bucket sync policies. Objects that were created, modified, or deleted while the bucket sync policy was in the allowed or forbidden state do not automatically sync when the policy takes effect. Run the bucket sync run command to sync these objects. Important If there are multiple sync policies defined at the zonegroup level, only one policy can be in the enabled state at any time. You can toggle between policies if needed. The sync policy supersedes the old zone group coarse configuration ( sync_from* ). The sync policy can be configured at the zone group level. If it is configured, it replaces the old-style configuration at the zone group level, but it can also be configured at the bucket level. Prerequisites A running Red Hat Ceph Storage cluster. Root-level access to a Ceph Monitor node. Installation of the Ceph Object Gateway software. 5.8.1. Multi-site sync policy group state In the sync policy, you can define multiple groups, each of which can contain lists of data-flow configurations as well as lists of pipe configurations. The data-flow defines the flow of data between the different zones. It can define symmetrical data flow, in which multiple zones sync data from each other, and it can define directional data flow, in which the data moves in one way from one zone to another. A pipe defines the actual buckets that can use these data flows, and the properties that are associated with it, such as the source object prefix. A sync policy group can be in one of three states: enabled - sync is allowed and enabled. allowed - sync is allowed. forbidden - sync, as defined by this group, is not allowed. When the zones replicate, you can disable replication for specific buckets using the sync policy.
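To make the relationship between groups, flows, and pipes concrete, the following is a minimal sketch of a zonegroup-level policy that mirrors all buckets between the zones us-east and us-west . The group, flow, and pipe IDs ( group1 , flow-mirror , and pipe1 ) are illustrative only, and the full option syntax is described in the sections that follow.

# Illustrative sketch: create a group, a symmetrical flow, and a wildcard pipe, then enable the group
radosgw-admin sync group create --group-id=group1 --status=allowed
radosgw-admin sync group flow create --group-id=group1 --flow-id=flow-mirror --flow-type=symmetrical --zones=us-east,us-west
radosgw-admin sync group pipe create --group-id=group1 --pipe-id=pipe1 --source-zones='*' --source-bucket='*' --dest-zones='*' --dest-bucket='*'
radosgw-admin sync group modify --group-id=group1 --status=enabled
# Zonegroup-level policy changes require a period update and commit
radosgw-admin period update --commit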
The following are the semantics that need to be followed to resolve the policy conflicts:

Zonegroup   Bucket      Result
enabled     enabled     enabled
enabled     allowed     enabled
enabled     forbidden   disabled
allowed     enabled     enabled
allowed     allowed     disabled
allowed     forbidden   disabled
forbidden   enabled     disabled
forbidden   allowed     disabled
forbidden   forbidden   disabled

For multiple group policies that are set for any sync pair ( SOURCE_ZONE , SOURCE_BUCKET ), ( DESTINATION_ZONE , DESTINATION_BUCKET ), the following rules are applied in the following order: Even if one sync policy is forbidden , the sync is disabled . At least one policy should be enabled for the sync to be allowed . Sync states in this group can override other groups. A policy can be defined at the bucket level. A bucket level sync policy inherits the data flow of the zonegroup policy, and can only define a subset of what the zonegroup allows. A wildcard zone, and a wildcard bucket parameter in the policy defines all relevant zones, or all relevant buckets. In the context of a bucket policy, it means the current bucket instance. A disaster recovery configuration where entire zones are mirrored does not require configuring anything on the buckets. However, for a fine-grained bucket sync it is better to configure the pipes to be synced by allowing ( status=allowed ) them at the zonegroup level (for example, by using a wildcard), and to enable the specific sync at the bucket level ( status=enabled ) only. If needed, the policy at the bucket level can limit the data movement to specific relevant zones. Important Any changes to the zonegroup policy need to be applied on the zonegroup master zone, and require a period update and commit. Changes to the bucket policy need to be applied on the zonegroup master zone. Ceph Object Gateway handles these changes dynamically. 5.8.2. Retrieving the current policy You can use the get command to retrieve the current zonegroup sync policy, or a specific bucket policy. Prerequisites A running Red Hat Ceph Storage cluster. Root or sudo access. The Ceph Object Gateway is installed. Procedure Retrieve the current zonegroup sync policy or bucket policy. To retrieve a specific bucket policy, use the --bucket option: Syntax Example 5.8.3. Creating a sync policy group You can create a sync policy group for the current zone group, or for a specific bucket. Prerequisites A running Red Hat Ceph Storage cluster. Root or sudo access. The Ceph Object Gateway is installed. Procedure Create a sync policy group or a bucket policy. To create a bucket policy, use the --bucket option: Syntax Example 5.8.4. Modifying a sync policy group You can modify an existing sync policy group for the current zone group, or for a specific bucket. Prerequisites A running Red Hat Ceph Storage cluster. Root or sudo access. The Ceph Object Gateway is installed. Procedure Modify the sync policy group or a bucket policy. To modify a bucket policy, use the --bucket option: Syntax Example 5.8.5. Showing a sync policy group You can use the group get command to show the current sync policy group by group ID, or to show a specific bucket policy. Prerequisites A running Red Hat Ceph Storage cluster. Root or sudo access. The Ceph Object Gateway is installed. Procedure Show the current sync policy group or bucket policy. To show a specific bucket policy, use the --bucket option: Syntax Example 5.8.6.
Removing a sync policy group You can use the group remove command to remove the current sync policy group by group ID, or to remove a specific bucket policy. Prerequisites A running Red Hat Ceph Storage cluster. Root or sudo access. The Ceph Object Gateway is installed. Procedure Remove the current sync policy group or bucket policy. To remove a specific bucket policy, use the --bucket option: Syntax Example 5.8.7. Creating a sync flow You can create two different types of flows for a sync policy group or for a specific bucket: Directional sync flow Symmetrical sync flow The group flow create command creates a sync flow. If you issue the group flow create command for a sync policy group or bucket that already has a sync flow, the command overwrites the existing settings for the sync flow and applies the settings you specify.

Option Description Required/Optional
--bucket Name of the bucket for which the sync policy needs to be configured. Used only in a bucket-level sync policy. Optional
--group-id ID of the sync group. Required
--flow-id ID of the flow. Required
--flow-type Type of flow for a sync policy group or for a specific bucket - directional or symmetrical. Required
--source-zone Specifies the source zone from which sync should happen; the zone that sends data to the sync group. Required if the flow type of the sync group is directional. Optional
--dest-zone Specifies the destination zone to which sync should happen; the zone that receives data from the sync group. Required if the flow type of the sync group is directional. Optional
--zones Zones that are part of the sync group. The zones mentioned will be both sender and receiver zones. Specify zones separated by ",". Required if the flow type of the sync group is symmetrical. Optional

Prerequisites A running Red Hat Ceph Storage cluster. Root or sudo access. The Ceph Object Gateway is installed. Procedure Create or update a directional sync flow. To create or update a directional sync flow for a specific bucket, use the --bucket option. Syntax Create or update a symmetrical sync flow. To specify multiple zones for a symmetrical flow type, use a comma-separated list for the --zones option. Syntax 5.8.8. Removing sync flows and zones The group flow remove command removes sync flows or zones from a sync policy group or bucket. For sync policy groups or buckets using directional flows, the group flow remove command removes the flow. For sync policy groups or buckets using symmetrical flows, you can use the group flow remove command to remove specified zones from the flow, or to remove the flow. Prerequisites A running Red Hat Ceph Storage cluster. Root or sudo access. The Ceph Object Gateway is installed. Procedure Remove a directional sync flow. To remove the directional sync flow for a specific bucket, use the --bucket option. Syntax Remove specific zones from a symmetrical sync flow. To remove multiple zones from a symmetrical flow, use a comma-separated list for the --zones option. Syntax Remove a symmetrical sync flow. To remove the sync flow from a bucket, use the --bucket option. Syntax 5.8.9. Creating or modifying a sync group pipe As a storage administrator, you can define pipes to specify which buckets can use your configured data flows and the properties associated with those data flows. The sync group pipe create command enables you to create pipes, which are custom sync group data flows between specific buckets or groups of buckets, or between specific zones or groups of zones.
This command uses the following options:

Option Description Required/Optional
--bucket Name of the bucket for which the sync policy needs to be configured. Used only in a bucket-level sync policy. Optional
--group-id ID of the sync group. Required
--pipe-id ID of the pipe. Required
--source-zones Zones that send data to the sync group. Use single quotes (') for the value. Use commas to separate multiple zones. Use the wildcard * for all zones that match the data flow rules. Required
--source-bucket Bucket or buckets that send data to the sync group. If the bucket name is not mentioned, then * (wildcard) is taken as the default value. At the bucket level, the source bucket will be the bucket for which the sync group was created, and at the zonegroup level, the source bucket will be all buckets. Optional
--source-bucket-id ID of the source bucket. Optional
--dest-zones Zone or zones that receive the sync data. Use single quotes (') for the value. Use commas to separate multiple zones. Use the wildcard * for all zones that match the data flow rules. Required
--dest-bucket Bucket or buckets that receive the sync data. If the bucket name is not mentioned, then * (wildcard) is taken as the default value. At the bucket level, the destination bucket will be the bucket for which the sync group is created, and at the zonegroup level, the destination bucket will be all buckets. Optional
--dest-bucket-id ID of the destination bucket. Optional
--prefix Bucket prefix. Use the wildcard * to filter for source objects. Optional
--prefix-rm Do not use the bucket prefix for filtering. Optional
--tags-add Comma-separated list of key=value pairs. Optional
--tags-rm Removes one or more key=value pairs of tags. Optional
--dest-owner Destination owner of the objects from the source. Optional
--storage-class Destination storage class for the objects from the source. Optional
--mode Use system for system mode or user for user mode. Optional
--uid Used for permissions validation in user mode. Specifies the user ID under which the sync operation will be issued. Optional

Note To enable or disable sync at the zonegroup level for certain buckets, set the zonegroup-level sync policy to the enabled or disabled state respectively, and create a pipe for each bucket with --source-bucket and --dest-bucket set to its bucket name, or with the bucket ID, that is, --source-bucket-id and --dest-bucket-id . Prerequisites A running Red Hat Ceph Storage cluster. Root or sudo access. The Ceph Object Gateway is installed. Procedure Create the sync group pipe: Syntax 5.8.10. Modifying or deleting a sync group pipe As a storage administrator, you can use the sync group pipe remove command to modify the sync group pipe by removing certain options. You can also use this command to remove the sync group pipe completely. Prerequisites A running Red Hat Ceph Storage cluster. Root or sudo access. The Ceph Object Gateway is installed. Procedure Modify the sync group pipe options. Syntax Delete a sync group pipe. Syntax 5.8.11. Obtaining information about sync operations The sync info command enables you to get information about the expected sync sources and targets, as defined by the sync policy. When you create a sync policy for a bucket, that policy defines how data moves from that bucket toward a different bucket in a different zone. Creating the policy also creates a list of bucket dependencies that are used as hints whenever that bucket syncs with another bucket. Note that a bucket can refer to another bucket without actually syncing to it, since syncing depends on whether the data flow allows the sync to take place.
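As a quick illustration, a query of the following form reports the sync sources and targets that apply to a single bucket, as defined by the policy; the bucket name buck is illustrative only.

# Illustrative only: show the sync sources and targets for the bucket named buck
radosgw-admin sync info --bucket=buck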
Both the --bucket and effective-zone-name parameters are optional. If you invoke the sync info command without specifying any options, the Object Gateway returns all of the sync operations defined by the sync policy in all zones. Prerequisites A running Red Hat Ceph Storage cluster. Root or sudo access. The Ceph Object Gateway is installed. A group sync policy is defined. Procedure Get information about sync operations: Syntax 5.8.12. Using bucket granular sync policies When the sync policy of a bucket or zonegroup moves from the disabled to the enabled state, the following behavioral changes are observed: Important With the limited scope agreed for the GA release of the bucket granular sync policy in 6.1, the following features are now supported: Greenfield deployment : This release only supports new multi-site deployments. To set up bucket granular sync replication, a new zonegroup/zone must be configured at a minimum. Please note that this release does not allow migrating deployed/running RGW multi-site replication configurations to the newly featured RGW bucket sync policy replication. Data flow - symmetrical : Although both unidirectional and bi-directional/symmetrical replication can be configured, only symmetrical replication flows are supported in this release. 1-to-1 bucket replication : Currently, only replication between buckets with identical names is supported. This means that, if the bucket on site A is named bucket1 , it can only be replicated to bucket1 on site B. Replicating from bucket1 to bucket2 in a different zone is not currently supported. Important The following features are not supported: Source filters Storage class Destination owner translation User mode Normal scenario : Zonegroup level: Data written when the sync policy is disabled catches up as soon as it's enabled , with no additional steps. Bucket level: Data written when the sync policy is disabled does not catch up when the policy is enabled . In this case, either of the following two workarounds can be applied: Writing new data to the bucket re-synchronizes the old data. Executing the bucket sync run command syncs all the old data. Note When you want to toggle from the sync policy to the legacy policy, you need to first run the sync init command followed by the radosgw-admin bucket sync run command to sync all the objects. Reshard scenario : Zonegroup level: Any reshard that happens when the policy is disabled does not affect the sync when it's enabled later. Bucket level: If any bucket is resharded when the policy is disabled , sync gets stuck after the policy is enabled again. New objects also do not sync at this point. In this case, apply the following workaround: Run the bucket sync run command. Note When the policy is set to enabled for the zonegroup and the policy is set to enabled or allowed for the bucket, the pipe configuration takes effect at the zonegroup level and not at the bucket level. This is a known issue BZ#2240719 . Prerequisites A running Red Hat Ceph Storage cluster. Root or sudo access. The Ceph Object Gateway is installed. Zonegroup Sync Bi-directional policy Zonegroup sync policies are created with the new sync policy engine. Any change to the zonegroup sync policy requires a period update and a commit. In the below example, a group policy is created and a data flow is defined for the movement of data from one zonegroup to another. In addition to that, a pipe for the zonegroups is configured to define the buckets that can use this data flow.
The system in the below examples includes three zones: us-east (the master zone), us-west , and us-west-2 . Procedure Create a new sync group with the status set to allowed . Example Note Until a fully configured zonegroup replication policy is created, it is recommended to set the --status to allowed , to prevent the replication from starting. Create a flow policy for the newly created group with the --flow-type set as symmetrical to enable bi-directional replication. Example Create a new pipe called pipe . Example Note Use the * wildcard for zones to include all zones set in the flow policy, and * for buckets to replicate all existing buckets in the zones. After configuring the bucket sync policy, set the --status to enabled . Example Update and commit the new period. Example Note Updating and committing the period is mandatory for a zonegroup policy. Optional: Execute the sync info --bucket= bucket_name command to check the sync source and destination for a specific bucket. All buckets in zones us-east and us-west replicate bi-directionally. Example The id field in the above output reflects the pipe rule that generated that entry. A single rule can generate multiple sync entries as seen in the below example. Optional: Run the sync info command to retrieve information about the expected bucket sync sources and targets, as defined in the policy. Bucket Sync Bi-directional policy The data flow for the bucket-level policy is inherited from the zonegroup policy. The data flow and pipes need not be changed for the bucket-level policy, as the bucket-level policy flow and pipes can only be a subset of the flow defined in the zonegroup policy. Note A bucket-level policy can enable pipes that are not enabled, but not forbidden , at the zonegroup policy. Bucket-level policies do not require period updates. Procedure Set the zonegroup policy --status to allowed to permit per-bucket replication. Example Update the period after modifying the zonegroup policy. Example Create a sync group for the bucket that you want to synchronize and set --status to enabled . Example Create a pipe for the group that was created in the previous step. Example Note Use wildcards * to specify the source and destination zones for the bucket replication. Optional: To retrieve information about the expected bucket sync sources and targets, as defined by the sync policy, run the radosgw-admin bucket sync info command with the --bucket flag. Example Optional: To retrieve information about the expected sync sources and targets, as defined by the sync policy, run the radosgw-admin sync info command with the --bucket flag. Example Disable a policy along with sync info In certain cases, to interrupt the replication between two buckets, set the group policy for the bucket to forbidden . Procedure Run the sync group modify command to change the status from allowed to forbidden to interrupt replication of the bucket between zones us-east and us-west . Example Note No update and commit for the period is required as this is a bucket sync policy. Optional: Run the sync info command to check the status of the sync for the bucket buck . Example Note There are no source and destination targets as the replication has been interrupted. 5.9. Multi-site Ceph Object Gateway command line usage As a storage administrator, you can have a good understanding of how to use the Ceph Object Gateway in a multi-site environment. You can learn how to better manage the realms, zone groups, and zones in a multi-site environment.
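Before changing a multi-site configuration, it is often useful to confirm which realms, zone groups, and zones the cluster currently knows about, and which period is current. A minimal sketch of such a read-only check follows; it assumes the radosgw-admin command is run from a node with access to the cluster.

# Read-only inspection of the multi-site configuration
radosgw-admin realm list
radosgw-admin zonegroup list
radosgw-admin zone list
radosgw-admin period get-current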
Prerequisites A running Red Hat Ceph Storage. Deployment of the Ceph Object Gateway software. Access to a Ceph Object Gateway node or container. 5.9.1. Realms A realm represents a globally unique namespace consisting of one or more zonegroups containing one or more zones, and zones containing buckets, which in turn contain objects. A realm enables the Ceph Object Gateway to support multiple namespaces and their configuration on the same hardware. A realm contains the notion of periods. Each period represents the state of the zone group and zone configuration in time. Each time you make a change to a zonegroup or zone, update the period and commit it. Red Hat recommends creating realms for new clusters. 5.9.1.1. Creating a realm To create a realm, issue the realm create command and specify the realm name. If the realm is the default, specify --default . Syntax Example By specifying --default , the realm will be called implicitly with each radosgw-admin call unless --rgw-realm and the realm name are explicitly provided. 5.9.1.2. Making a Realm the Default One realm in the list of realms should be the default realm. There may be only one default realm. If there is only one realm and it wasn't specified as the default realm when it was created, make it the default realm. Alternatively, to change which realm is the default, run the following command: Note When the realm is default, the command line assumes --rgw-realm= REALM_NAME as an argument. 5.9.1.3. Deleting a Realm To delete a realm, run the realm delete command and specify the realm name. Syntax Example 5.9.1.4. Getting a realm To get a realm, run the realm get command and specify the realm name. Syntax Example The CLI will echo a JSON object with the realm properties. Use > and an output file name to output the JSON object to a file. 5.9.1.5. Setting a realm To set a realm, run the realm set command, specify the realm name, and --infile= with an input file name. Syntax Example 5.9.1.6. Listing realms To list realms, run the realm list command: Example 5.9.1.7. Listing Realm Periods To list realm periods, run the realm list-periods command. Example 5.9.1.8. Pulling a Realm To pull a realm from the node containing the master zone group and master zone to a node containing a secondary zone group or zone, run the realm pull command on the node that will receive the realm configuration. Syntax 5.9.1.9. Renaming a Realm A realm is not part of the period. Consequently, renaming the realm is only applied locally, and will not get pulled with realm pull . When renaming a realm with multiple zones, run the command on each zone. To rename a realm, run the following command: Syntax Note Do NOT use realm set to change the name parameter. That changes the internal name only. Specifying --rgw-realm would still use the old realm name. 5.9.2. Zone Groups The Ceph Object Gateway supports multi-site deployments and a global namespace by using the notion of zone groups. Formerly called a region, a zone group defines the geographic location of one or more Ceph Object Gateway instances within one or more zones. Configuring zone groups differs from typical configuration procedures, because not all of the settings end up in a Ceph configuration file. You can list zone groups, get a zone group configuration, and set a zone group configuration. Note The radosgw-admin zonegroup operations can be performed on any node within the realm, because the step of updating the period propagates the changes throughout the cluster. 
However, radosgw-admin zone operations MUST be performed on a host within the zone. 5.9.2.1. Creating a Zone Group Creating a zone group consists of specifying the zone group name. Creating a zone assumes it will live in the default realm unless --rgw-realm= REALM_NAME is specified. If the zonegroup is the default zonegroup, specify the --default flag. If the zonegroup is the master zonegroup, specify the --master flag. Syntax Note Use zonegroup modify --rgw-zonegroup= ZONE_GROUP_NAME to modify an existing zone group's settings. 5.9.2.2. Making a Zone Group the Default One zonegroup in the list of zonegroups should be the default zonegroup. There may be only one default zonegroup. If there is only one zonegroup and it wasn't specified as the default zonegroup when it was created, make it the default zonegroup. Alternatively, to change which zonegroup is the default, run the following command: Example Note When the zonegroup is the default, the command line assumes --rgw-zonegroup= ZONE_GROUP_NAME as an argument. Then, update the period: 5.9.2.3. Adding a Zone to a Zone Group To add a zone to a zonegroup, you MUST run this command on a host that will be in the zone. To add a zone to a zonegroup, run the following command: Syntax Then, update the period: Example 5.9.2.4. Removing a Zone from a Zone Group To remove a zone from a zonegroup, run the following command: Syntax Then, update the period: Example 5.9.2.5. Renaming a Zone Group To rename a zonegroup, run the following command: Syntax Then, update the period: Example 5.9.2.6. Deleting a Zone group To delete a zonegroup, run the following command: Syntax Then, update the period: Example 5.9.2.7. Listing Zone Groups A Ceph cluster contains a list of zone groups. To list the zone groups, run the following command: The radosgw-admin returns a JSON formatted list of zone groups. 5.9.2.8. Getting a Zone Group To view the configuration of a zone group, run the following command: Syntax The zone group configuration looks like this: 5.9.2.9. Setting a Zone Group Defining a zone group consists of creating a JSON object, specifying at least the required settings: name : The name of the zone group. Required. api_name : The API name for the zone group. Optional. is_master : Determines if the zone group is the master zone group. Required. Note: You can only have one master zone group. endpoints : A list of all the endpoints in the zone group. For example, you may use multiple domain names to refer to the same zone group. Remember to escape the forward slashes ( \/ ). You may also specify a port ( fqdn:port ) for each endpoint. Optional. hostnames : A list of all the hostnames in the zone group. For example, you may use multiple domain names to refer to the same zone group. Optional. The rgw dns name setting will automatically be included in this list. You should restart the gateway daemon(s) after changing this setting. master_zone : The master zone for the zone group. Optional. Uses the default zone if not specified. Note You can only have one master zone per zone group. zones : A list of all zones within the zone group. Each zone has a name (required), a list of endpoints (optional), and whether or not the gateway will log metadata and data operations (false by default). placement_targets : A list of placement targets (optional). 
Each placement target contains a name (required) for the placement target and a list of tags (optional) so that only users with the tag can use the placement target (i.e., the user's placement_tags field in the user info). default_placement : The default placement target for the object index and object data. Set to default-placement by default. You may also set a per-user default placement in the user info for each user. To set a zone group, create a JSON object consisting of the required fields, save the object to a file, for example, zonegroup.json ; then, run the following command: Example Where zonegroup.json is the JSON file you created. Important The default zone group is_master setting is true by default. If you create a new zone group and want to make it the master zone group, you must either set the default zone group is_master setting to false , or delete the default zone group. Finally, update the period: Example 5.9.2.10. Setting a Zone Group Map Setting a zone group map consists of creating a JSON object consisting of one or more zone groups, and setting the master_zonegroup for the cluster. Each zone group in the zone group map consists of a key/value pair, where the key setting is equivalent to the name setting for an individual zone group configuration, and the val is a JSON object consisting of an individual zone group configuration. You may only have one zone group with is_master equal to true , and it must be specified as the master_zonegroup at the end of the zone group map. The following JSON object is an example of a default zone group map. To set a zone group map, run the following command: Example Where zonegroupmap.json is the JSON file you created. Ensure that you have zones created for the ones specified in the zone group map. Finally, update the period. Example 5.9.3. Zones Ceph Object Gateway supports the notion of zones. A zone defines a logical group consisting of one or more Ceph Object Gateway instances. Configuring zones differs from typical configuration procedures, because not all of the settings end up in a Ceph configuration file. You can list zones, get a zone configuration, and set a zone configuration. Important All radosgw-admin zone operations MUST be issued on a host that operates or will operate within the zone. 5.9.3.1. Creating a Zone To create a zone, specify a zone name. If it is a master zone, specify the --master option. Only one zone in a zone group may be a master zone. To add the zone to a zonegroup, specify the --rgw-zonegroup option with the zonegroup name. Important Zones must be created on a Ceph Object Gateway node that will be within the zone. Syntax Then, update the period: Example 5.9.3.2. Deleting a zone To delete a zone, first remove it from the zonegroup. Procedure Remove the zone from the zonegroup: Syntax Update the period: Example Delete the zone: Important This procedure MUST be used on a host within the zone. Syntax Update the period: Example Important Do not delete a zone without removing it from a zone group first. Otherwise, updating the period will fail. If the pools for the deleted zone will not be used anywhere else, consider deleting the pools. Replace DELETED_ZONE_NAME in the example below with the deleted zone's name. Important Once Ceph deletes the zone pools, it deletes all of the data within them in an unrecoverable manner. Only delete the zone pools if Ceph clients no longer need the pool contents. 
Important In a multi-realm cluster, deleting the .rgw.root pool along with the zone pools will remove ALL the realm information for the cluster. Ensure that .rgw.root does not contain other active realms before deleting the .rgw.root pool. Syntax Important After deleting the pools, restart the RGW process. 5.9.3.3. Modifying a Zone To modify a zone, specify the zone name and the parameters you wish to modify. Important Zones should be modified on a Ceph Object Gateway node that will be within the zone. Syntax Then, update the period: Example 5.9.3.4. Listing Zones As root , to list the zones in a cluster, run the following command: Example 5.9.3.5. Getting a Zone As root , to get the configuration of a zone, run the following command: Syntax The default zone looks like this: 5.9.3.6. Setting a Zone Configuring a zone involves specifying a series of Ceph Object Gateway pools. For consistency, we recommend using a pool prefix that is the same as the zone name. See the Pools chapter in the Red Hat Ceph Storage Storage Strategies Guide for details on configuring pools. Important Zones should be set on a Ceph Object Gateway node that will be within the zone. To set a zone, create a JSON object consisting of the pools, save the object to a file, for example, zone.json ; then, run the following command, replacing ZONE_NAME with the name of the zone: Example Where zone.json is the JSON file you created. Then, as root , update the period: Example 5.9.3.7. Renaming a Zone To rename a zone, specify the zone name and the new zone name. Issue the following command on a host within the zone: Syntax Then, update the period: Example | [
"cat ./haproxy.cfg global log 127.0.0.1 local2 chroot /var/lib/haproxy pidfile /var/run/haproxy.pid maxconn 7000 user haproxy group haproxy daemon stats socket /var/lib/haproxy/stats defaults mode http log global option httplog option dontlognull option http-server-close option forwardfor except 127.0.0.0/8 option redispatch retries 3 timeout http-request 10s timeout queue 1m timeout connect 10s timeout client 30s timeout server 30s timeout http-keep-alive 10s timeout check 10s timeout client-fin 1s timeout server-fin 1s maxconn 6000 listen stats bind 0.0.0.0:1936 mode http log global maxconn 256 clitimeout 10m srvtimeout 10m contimeout 10m timeout queue 10m JTH start stats enable stats hide-version stats refresh 30s stats show-node ## stats auth admin:password stats uri /haproxy?stats stats admin if TRUE frontend main bind *:5000 acl url_static path_beg -i /static /images /javascript /stylesheets acl url_static path_end -i .jpg .gif .png .css .js use_backend static if url_static default_backend app maxconn 6000 backend static balance roundrobin fullconn 6000 server app8 host01:8080 check maxconn 2000 server app9 host02:8080 check maxconn 2000 server app10 host03:8080 check maxconn 2000 backend app balance roundrobin fullconn 6000 server app8 host01:8080 check maxconn 2000 server app9 host02:8080 check maxconn 2000 server app10 host03:8080 check maxconn 2000",
"ceph config set osd osd_pool_default_pg_num 50 ceph config set osd osd_pool_default_pgp_num 50",
"radosgw-admin realm create --rgw-realm REALM_NAME --default",
"radosgw-admin zonegroup rename --rgw-zonegroup default --zonegroup-new-name NEW_ZONE_GROUP_NAME radosgw-admin zone rename --rgw-zone default --zone-new-name NEW_ZONE_NAME --rgw-zonegroup NEW_ZONE_GROUP_NAME",
"radosgw-admin zonegroup modify --api-name NEW_ZONE_GROUP_NAME --rgw-zonegroup NEW_ZONE_GROUP_NAME",
"radosgw-admin zonegroup modify --rgw-realm REALM_NAME --rgw-zonegroup NEW_ZONE_GROUP_NAME --endpoints http://ENDPOINT --master --default",
"radosgw-admin zone modify --rgw-realm REALM_NAME --rgw-zonegroup NEW_ZONE_GROUP_NAME --rgw-zone NEW_ZONE_NAME --endpoints http://ENDPOINT --master --default",
"radosgw-admin user create --uid USER_ID --display-name DISPLAY_NAME --access-key ACCESS_KEY --secret SECRET_KEY --system",
"radosgw-admin period update --commit",
"systemctl restart ceph-radosgw@rgw.`hostname -s`",
"cephadm shell",
"radosgw-admin realm pull --url= URL_TO_PRIMARY_ZONE_GATEWAY --access-key= ACCESS_KEY --secret-key= SECRET_KEY",
"radosgw-admin realm pull --url=http://10.74.249.26:80 --access-key=LIPEYZJLTWXRKXS9LPJC --secret-key=IsAje0AVDNXNw48LjMAimpCpI7VaxJYSnfD0FFKQ",
"radosgw-admin period pull --url= URL_TO_PRIMARY_ZONE_GATEWAY --access-key= ACCESS_KEY --secret-key= SECRET_KEY",
"radosgw-admin period pull --url=http://10.74.249.26:80 --access-key=LIPEYZJLTWXRKXS9LPJC --secret-key=IsAje0AVDNXNw48LjMAimpCpI7VaxJYSnfD0FFKQ",
"radosgw-admin zone create --rgw-zonegroup=_ZONE_GROUP_NAME_ --rgw-zone=_SECONDARY_ZONE_NAME_ --endpoints=http://_RGW_SECONDARY_HOSTNAME_:_RGW_PRIMARY_PORT_NUMBER_1_ --access-key=_SYSTEM_ACCESS_KEY_ --secret=_SYSTEM_SECRET_KEY_ [--read-only]",
"radosgw-admin zone create --rgw-zonegroup=us --rgw-zone=us-east-2 --endpoints=http://rgw2:80 --access-key=LIPEYZJLTWXRKXS9LPJC --secret-key=IsAje0AVDNXNw48LjMAimpCpI7VaxJYSnfD0FFKQ",
"radosgw-admin zone rm --rgw-zone=default ceph osd pool rm default.rgw.log default.rgw.log --yes-i-really-really-mean-it ceph osd pool rm default.rgw.meta default.rgw.meta --yes-i-really-really-mean-it ceph osd pool rm default.rgw.control default.rgw.control --yes-i-really-really-mean-it ceph osd pool rm default.rgw.data.root default.rgw.data.root --yes-i-really-really-mean-it ceph osd pool rm default.rgw.gc default.rgw.gc --yes-i-really-really-mean-it",
"ceph config set client.rgw. SERVICE_NAME rgw_realm REALM_NAME ceph config set client.rgw. SERVICE_NAME rgw_zonegroup ZONE_GROUP_NAME ceph config set client.rgw. SERVICE_NAME rgw_zone SECONDARY_ZONE_NAME",
"ceph config set client.rgw.rgwsvcid.mons-1.jwgwwp rgw_realm test_realm ceph config set client.rgw.rgwsvcid.mons-1.jwgwwp rgw_zonegroup us ceph config set client.rgw.rgwsvcid.mons-1.jwgwwp rgw_zone us-east-2",
"radosgw-admin period update --commit",
"radosgw-admin period update --commit",
"systemctl list-units | grep ceph",
"systemctl start ceph- FSID @ DAEMON_NAME systemctl enable ceph- FSID @ DAEMON_NAME",
"systemctl start [email protected]_realm.us-east-2.host04.ahdtsw.service systemctl enable [email protected]_realm.us-east-2.host04.ahdtsw.service",
"radosgw-admin zone create --rgw-zonegroup={ ZONE_GROUP_NAME } --rgw-zone={ ZONE_NAME } --endpoints={http:// FQDN : PORT },{http:// FQDN : PORT } --tier-type=archive",
"radosgw-admin zone create --rgw-zonegroup=us --rgw-zone=us-east --endpoints={http://example.com:8080} --tier-type=archive",
"<?xml version=\"1.0\" ?> <LifecycleConfiguration xmlns=\"http://s3.amazonaws.com/doc/2006-03-01/\"> <Rule> <ID>delete-1-days-az</ID> <Filter> <Prefix></Prefix> <ArchiveZone /> 1 </Filter> <Status>Enabled</Status> <Expiration> <Days>1</Days> </Expiration> </Rule> </LifecycleConfiguration>",
"radosgw-admin lc get --bucket BUCKET_NAME",
"radosgw-admin lc get --bucket test-bkt { \"prefix_map\": { \"\": { \"status\": true, \"dm_expiration\": true, \"expiration\": 0, \"noncur_expiration\": 2, \"mp_expiration\": 0, \"transitions\": {}, \"noncur_transitions\": {} } }, \"rule_map\": [ { \"id\": \"Rule 1\", \"rule\": { \"id\": \"Rule 1\", \"prefix\": \"\", \"status\": \"Enabled\", \"expiration\": { \"days\": \"\", \"date\": \"\" }, \"noncur_expiration\": { \"days\": \"2\", \"date\": \"\" }, \"mp_expiration\": { \"days\": \"\", \"date\": \"\" }, \"filter\": { \"prefix\": \"\", \"obj_tags\": { \"tagset\": {} }, \"archivezone\": \"\" 1 }, \"transitions\": {}, \"noncur_transitions\": {}, \"dm_expiration\": true } } ] }",
"radosgw-admin bucket link --uid NEW_USER_ID --bucket BUCKET_NAME --yes-i-really-mean-it",
"radosgw-admin bucket link --uid arcuser1 --bucket arc1-deleted-da473fbbaded232dc5d1e434675c1068 --yes-i-really-mean-it",
"radosgw-admin zone modify --rgw-zone= ZONE_NAME --master --default",
"radosgw-admin zone modify --rgw-zone= ZONE_NAME --master --default --read-only=false",
"radosgw-admin period update --commit",
"systemctl restart ceph- CLUSTER_ID @ SERVICE_TYPE . ID .service",
"systemctl restart ceph-c4b34c6f-8365-11ba-dc31-529020a7702d@rgw.realm.zone.host01.gwasto.service",
"ceph orch restart SERVICE_TYPE",
"ceph orch restart rgw",
"radosgw-admin realm pull --url= URL_TO_PRIMARY_ZONE_GATEWAY --access-key= ACCESS_KEY --secret= SECRET_KEY",
"radosgw-admin zone modify --rgw-zone= ZONE_NAME --master --default",
"radosgw-admin period update --commit",
"ceph orch restart SERVICE_TYPE",
"ceph orch restart rgw",
"radosgw-admin zone modify --rgw-zone= ZONE_NAME --read-only radosgw-admin zone modify --rgw-zone= ZONE_NAME --read-only",
"radosgw-admin period update --commit",
"ceph orch restart SERVICE_TYPE",
"ceph orch restart rgw",
"radosgw-admin realm create --rgw-realm= REALM_NAME --default",
"radosgw-admin realm create --rgw-realm=ldc1 --default",
"radosgw-admin zonegroup create --rgw-zonegroup= ZONE_GROUP_NAME --endpoints=http:// RGW_NODE_NAME :80 --rgw-realm= REALM_NAME --master --default",
"radosgw-admin zonegroup create --rgw-zonegroup=ldc1zg --endpoints=http://rgw1:80 --rgw-realm=ldc1 --master --default",
"radosgw-admin zone create --rgw-zonegroup= ZONE_GROUP_NAME --rgw-zone= ZONE_NAME --master --default --endpoints= HTTP_FQDN [, HTTP_FQDN ]",
"radosgw-admin zone create --rgw-zonegroup=ldc1zg --rgw-zone=ldc1z --master --default --endpoints=http://rgw.example.com",
"radosgw-admin period update --commit",
"ceph orch apply rgw SERVICE_NAME --realm= REALM_NAME --zone= ZONE_NAME --placement=\" NUMBER_OF_DAEMONS HOST_NAME_1 HOST_NAME_2 \"",
"ceph orch apply rgw rgw --realm=ldc1 --zone=ldc1z --placement=\"1 host01\"",
"ceph config set client.rgw. SERVICE_NAME rgw_realm REALM_NAME ceph config set client.rgw. SERVICE_NAME rgw_zonegroup ZONE_GROUP_NAME ceph config set client.rgw. SERVICE_NAME rgw_zone ZONE_NAME",
"ceph config set client.rgw.rgwsvcid.mons-1.jwgwwp rgw_realm ldc1 ceph config set client.rgw.rgwsvcid.mons-1.jwgwwp rgw_zonegroup ldc1zg ceph config set client.rgw.rgwsvcid.mons-1.jwgwwp rgw_zone ldc1z",
"systemctl restart ceph- CLUSTER_ID @ SERVICE_TYPE . ID .service",
"systemctl restart ceph-c4b34c6f-8365-11ba-dc31-529020a7702d@rgw.realm.zone.host01.gwasto.service",
"ceph orch restart SERVICE_TYPE",
"ceph orch restart rgw",
"radosgw-admin realm create --rgw-realm= REALM_NAME --default",
"radosgw-admin realm create --rgw-realm=ldc2 --default",
"radosgw-admin zonegroup create --rgw-zonegroup= ZONE_GROUP_NAME --endpoints=http:// RGW_NODE_NAME :80 --rgw-realm= REALM_NAME --master --default",
"radosgw-admin zonegroup create --rgw-zonegroup=ldc2zg --endpoints=http://rgw2:80 --rgw-realm=ldc2 --master --default",
"radosgw-admin zone create --rgw-zonegroup= ZONE_GROUP_NAME --rgw-zone= ZONE_NAME --master --default --endpoints= HTTP_FQDN [, HTTP_FQDN ]",
"radosgw-admin zone create --rgw-zonegroup=ldc2zg --rgw-zone=ldc2z --master --default --endpoints=http://rgw.example.com",
"radosgw-admin period update --commit",
"ceph orch apply rgw SERVICE_NAME --realm= REALM_NAME --zone= ZONE_NAME --placement=\" NUMBER_OF_DAEMONS HOST_NAME_1 HOST_NAME_2 \"",
"ceph orch apply rgw rgw --realm=ldc2 --zone=ldc2z --placement=\"1 host01\"",
"ceph config set client.rgw. SERVICE_NAME rgw_realm REALM_NAME ceph config set client.rgw. SERVICE_NAME rgw_zonegroup ZONE_GROUP_NAME ceph config set client.rgw. SERVICE_NAME rgw_zone ZONE_NAME",
"ceph config set client.rgw.rgwsvcid.mons-1.jwgwwp rgw_realm ldc2 ceph config set client.rgw.rgwsvcid.mons-1.jwgwwp rgw_zonegroup ldc2zg ceph config set client.rgw.rgwsvcid.mons-1.jwgwwp rgw_zone ldc2z",
"systemctl restart ceph- CLUSTER_ID @ SERVICE_TYPE . ID .service",
"systemctl restart ceph-c4b34c6f-8365-11ba-dc31-529020a7702d@rgw.realm.zone.host01.gwasto.service",
"ceph orch restart SERVICE_TYPE",
"ceph orch restart rgw",
"radosgw-admin realm create --rgw-realm= REPLICATED_REALM_1 --default",
"radosgw-admin realm create --rgw-realm=rdc1 --default",
"radosgw-admin zonegroup create --rgw-zonegroup= RGW_ZONE_GROUP --endpoints=http://_RGW_NODE_NAME :80 --rgw-realm=_RGW_REALM_NAME --master --default",
"radosgw-admin zonegroup create --rgw-zonegroup=rdc1zg --endpoints=http://rgw1:80 --rgw-realm=rdc1 --master --default",
"radosgw-admin zone create --rgw-zonegroup= RGW_ZONE_GROUP --rgw-zone=_MASTER_RGW_NODE_NAME --master --default --endpoints= HTTP_FQDN [, HTTP_FQDN ]",
"radosgw-admin zone create --rgw-zonegroup=rdc1zg --rgw-zone=rdc1z --master --default --endpoints=http://rgw.example.com",
"radosgw-admin user create --uid=\" SYNCHRONIZATION_USER \" --display-name=\"Synchronization User\" --system radosgw-admin zone modify --rgw-zone= RGW_ZONE --access-key= ACCESS_KEY --secret= SECRET_KEY",
"radosgw-admin user create --uid=\"synchronization-user\" --display-name=\"Synchronization User\" --system radosgw-admin zone modify --rgw-zone=rdc1zg --access-key=3QV0D6ZMMCJZMSCXJ2QJ --secret=VpvQWcsfI9OPzUCpR4kynDLAbqa1OIKqRB6WEnH8",
"radosgw-admin period update --commit",
"ceph orch apply rgw SERVICE_NAME --realm= REALM_NAME --zone= ZONE_NAME --placement=\" NUMBER_OF_DAEMONS HOST_NAME_1 HOST_NAME_2 \"",
"ceph orch apply rgw rgw --realm=rdc1 --zone=rdc1z --placement=\"1 host01\"",
"ceph config set client.rgw. SERVICE_NAME rgw_realm REALM_NAME ceph config set client.rgw. SERVICE_NAME rgw_zonegroup ZONE_GROUP_NAME ceph config set client.rgw. SERVICE_NAME rgw_zone ZONE_NAME",
"ceph config set client.rgw.rgwsvcid.mons-1.jwgwwp rgw_realm rdc1 ceph config set client.rgw.rgwsvcid.mons-1.jwgwwp rgw_zonegroup rdc1zg ceph config set client.rgw.rgwsvcid.mons-1.jwgwwp rgw_zone rdc1z",
"systemctl restart ceph- CLUSTER_ID @ SERVICE_TYPE . ID .service",
"systemctl restart ceph-c4b34c6f-8365-11ba-dc31-529020a7702d@rgw.realm.zone.host01.gwasto.service",
"ceph orch restart SERVICE_TYPE",
"ceph orch restart rgw",
"radosgw-admin realm pull --url=https://tower-osd1.cephtips.com --access-key= ACCESS_KEY --secret-key= SECRET_KEY",
"radosgw-admin realm pull --url=https://tower-osd1.cephtips.com --access-key=3QV0D6ZMMCJZMSCXJ2QJ --secret-key=VpvQWcsfI9OPzUCpR4kynDLAbqa1OIKqRB6WEnH8",
"radosgw-admin period pull --url=https://tower-osd1.cephtips.com --access-key= ACCESS_KEY --secret-key= SECRET_KEY",
"radosgw-admin period pull --url=https://tower-osd1.cephtips.com --access-key=3QV0D6ZMMCJZMSCXJ2QJ --secret-key=VpvQWcsfI9OPzUCpR4kynDLAbqa1OIKqRB6WEnH8",
"radosgw-admin zone create --rgw-zone= RGW_ZONE --rgw-zonegroup= RGW_ZONE_GROUP --endpoints=https://tower-osd4.cephtips.com --access-key=_ACCESS_KEY --secret-key= SECRET_KEY",
"radosgw-admin zone create --rgw-zone=rdc2z --rgw-zonegroup=rdc1zg --endpoints=https://tower-osd4.cephtips.com --access-key=3QV0D6ZMMCJZMSCXJ2QJ --secret-key=VpvQWcsfI9OPzUCpR4kynDLAbqa1OIKqRB6WEnH8",
"radosgw-admin period update --commit",
"ceph orch apply rgw SERVICE_NAME --realm= REALM_NAME --zone= ZONE_NAME --placement=\" NUMBER_OF_DAEMONS HOST_NAME_1 HOST_NAME_2 \"",
"ceph orch apply rgw rgw --realm=rdc1 --zone=rdc2z --placement=\"1 host04\"",
"ceph config set client.rgw. SERVICE_NAME rgw_realm REALM_NAME ceph config set client.rgw. SERVICE_NAME rgw_zonegroup ZONE_GROUP_NAME ceph config set client.rgw. SERVICE_NAME rgw_zone ZONE_NAME",
"ceph config set client.rgw.rgwsvcid.mons-1.jwgwwp rgw_realm rdc1 ceph config set client.rgw.rgwsvcid.mons-1.jwgwwp rgw_zonegroup rdc1zg ceph config set client.rgw.rgwsvcid.mons-1.jwgwwp rgw_zone rdc2z",
"systemctl restart ceph- CLUSTER_ID @ SERVICE_TYPE . ID .service",
"systemctl restart ceph-c4b34c6f-8365-11ba-dc31-529020a7702d@rgw.realm.zone.host01.gwasto.service",
"ceph orch restart SERVICE_TYPE",
"ceph orch restart rgw",
"radosgw-admin sync status",
"radosgw-admin sync status realm 59762f08-470c-46de-b2b1-d92c50986e67 (ldc2) zonegroup 7cf8daf8-d279-4d5c-b73e-c7fd2af65197 (ldc2zg) zone 034ae8d3-ae0c-4e35-8760-134782cb4196 (ldc2z) metadata sync no sync (zone is master)",
"radosgw-admin sync status --rgw-realm RGW_REALM_NAME",
"radosgw-admin sync status --rgw-realm rdc1 realm 73c7b801-3736-4a89-aaf8-e23c96e6e29d (rdc1) zonegroup d67cc9c9-690a-4076-89b8-e8127d868398 (rdc1zg) zone 67584789-375b-4d61-8f12-d1cf71998b38 (rdc2z) metadata sync syncing full sync: 0/64 shards incremental sync: 64/64 shards metadata is caught up with master data sync source: 705ff9b0-68d5-4475-9017-452107cec9a0 (rdc1z) syncing full sync: 0/128 shards incremental sync: 128/128 shards data is caught up with source realm 73c7b801-3736-4a89-aaf8-e23c96e6e29d (rdc1) zonegroup d67cc9c9-690a-4076-89b8-e8127d868398 (rdc1zg) zone 67584789-375b-4d61-8f12-d1cf71998b38 (rdc2z) metadata sync syncing full sync: 0/64 shards incremental sync: 64/64 shards metadata is caught up with master data sync source: 705ff9b0-68d5-4475-9017-452107cec9a0 (rdc1z) syncing full sync: 0/128 shards incremental sync: 128/128 shards data is caught up with source",
"radosgw-admin user create --uid=\" LOCAL_USER\" --display-name=\"Local user\" --rgw-realm=_REALM_NAME --rgw-zonegroup= ZONE_GROUP_NAME --rgw-zone= ZONE_NAME",
"radosgw-admin user create --uid=\"local-user\" --display-name=\"Local user\" --rgw-realm=ldc1 --rgw-zonegroup=ldc1zg --rgw-zone=ldc1z",
"radosgw-admin sync policy get --bucket= BUCKET_NAME",
"radosgw-admin sync policy get --bucket=mybucket",
"radosgw-admin sync group create --bucket= BUCKET_NAME --group-id= GROUP_ID --status=enabled | allowed | forbidden",
"radosgw-admin sync group create --group-id=mygroup1 --status=enabled",
"radosgw-admin sync group modify --bucket= BUCKET_NAME --group-id= GROUP_ID --status=enabled | allowed | forbidden",
"radosgw-admin sync group modify --group-id=mygroup1 --status=forbidden",
"radosgw-admin sync group get --bucket= BUCKET_NAME --group-id= GROUP_ID",
"radosgw-admin sync group get --group-id=mygroup",
"radosgw-admin sync group remove --bucket= BUCKET_NAME --group-id= GROUP_ID",
"radosgw-admin sync group remove --group-id=mygroup",
"radosgw-admin sync group flow create --bucket= BUCKET_NAME --group-id= GROUP_ID --flow-id= FLOW_ID --flow-type=directional --source-zone= SOURCE_ZONE --dest-zone= DESTINATION_ZONE",
"radosgw-admin sync group flow create --bucket= BUCKET_NAME --group-id= GROUP_ID --flow-id= FLOW_ID --flow-type=symmetrical --zones= ZONE_NAME",
"radosgw-admin sync group flow remove --bucket= BUCKET_NAME --group-id= GROUP_ID --flow-id= FLOW_ID --flow-type=directional --source-zone= SOURCE_ZONE --dest-zone= DESTINATION_ZONE",
"radosgw-admin sync group flow remove --bucket= BUCKET_NAME --group-id= GROUP_ID --flow-id= FLOW_ID --flow-type=symmetrical --zones= ZONE_NAME",
"radosgw-admin sync group flow remove --bucket= BUCKET_NAME --group-id= GROUP_ID --flow-id= FLOW_ID --flow-type=symmetrical --zones= ZONE_NAME",
"radosgw-admin sync group pipe create --bucket= BUCKET_NAME --group-id= GROUP_ID --pipe-id= PIPE_ID --source-zones=' ZONE_NAME ',' ZONE_NAME2 '... --source-bucket= SOURCE_BUCKET1 --source-bucket-id= SOURCE_BUCKET_ID --dest-zones=' ZONE_NAME ',' ZONE_NAME2 '... --dest-bucket= DESTINATION_BUCKET1 --dest-bucket-id= DESTINATION_BUCKET_ID --prefix= SOURCE_PREFIX --prefix-rm --tags-add= KEY1=VALUE1 , KEY2=VALUE2 , ... --tags-rm= KEY1=VALUE1 , KEY2=VALUE2 , ... --dest-owner= OWNER_ID --storage-class= STORAGE_CLASS --mode= USER --uid= USER_ID",
"radosgw-admin sync group pipe remove --bucket= BUCKET_NAME --group-id= GROUP_ID --pipe-id= PIPE_ID --source-zones=' ZONE_NAME ',' ZONE_NAME2 '... --source-bucket= SOURCE_BUCKET1 --source-bucket-id= SOURCE_BUCKET_ID --dest-zones=' ZONE_NAME ',' ZONE_NAME2 '... --dest-bucket= DESTINATION_BUCKET1 --dest-bucket-id=_DESTINATION_BUCKET-ID",
"radosgw-admin sync group pipe remove --bucket= BUCKET_NAME --group-id= GROUP_ID --pipe-id= PIPE_ID",
"radosgw-admin sync info --bucket= BUCKET_NAME --effective-zone-name= ZONE_NAME",
"radosgw-admin sync group create --group-id=group1 --status=allowed",
"radosgw-admin sync group flow create --group-id=group1 --flow-id=flow-mirror --flow-type=symmetrical --zones=us-east,us-west",
"radosgw-admin sync group pipe create --group-id=group1 --pipe-id=pipe1 --source-zones='*' --source-bucket='*' --dest-zones='*' --dest-bucket='*'",
"radosgw-admin sync group modify --group-id=group1 --status=enabled",
"radosgw-admin period update --commit",
"radosgw-admin sync info --bucket=buck { \"sources\": [ { \"id\": \"pipe1\", \"source\": { \"zone\": \"us-west\", \"bucket\": \"buck:115b12b3-....4409.1\" }, \"dest\": { \"zone\": \"us-east\", \"bucket\": \"buck:115b12b3-....4409.1\" }, \"params\": { } } ], \"dests\": [ { \"id\": \"pipe1\", \"source\": { \"zone\": \"us-east\", \"bucket\": \"buck:115b12b3-....4409.1\" }, \"dest\": { \"zone\": \"us-west\", \"bucket\": \"buck:115b12b3-....4409.1\" }, } ], }",
"radosgw-admin sync info --bucket=buck { \"sources\": [ { \"id\": \"pipe1\", \"source\": { \"zone\": \"us-east\", \"bucket\": \"buck:115b12b3-....4409.1\" }, \"dest\": { \"zone\": \"us-west\", \"bucket\": \"buck:115b12b3-....4409.1\" }, } ], \"dests\": [ { \"id\": \"pipe1\", \"source\": { \"zone\": \"us-west\", \"bucket\": \"buck:115b12b3-....4409.1\" }, \"dest\": { \"zone\": \"us-east\", \"bucket\": \"buck:115b12b3-....4409.1\" }, } ], }",
"radosgw-admin sync group modify --group-id=group1 --status=allowed",
"radosgw-admin period update --commit",
"radosgw-admin sync group create --bucket=buck --group-id=buck-default --status=enabled",
"radosgw-admin sync group pipe create --bucket=buck --group-id=buck-default --pipe-id=pipe1 --source-zones='*' --dest-zones='*'",
"radosgw-admin bucket sync info --bucket buck realm 33157555-f387-44fc-b4b4-3f9c0b32cd66 (india) zonegroup 594f1f63-de6f-4e1e-90b6-105114d7ad55 (shared) zone ffaa5ba4-c1bd-4c17-b176-2fe34004b4c5 (primary) bucket :buck[ffaa5ba4-c1bd-4c17-b176-2fe34004b4c5.16191.1] source zone e0e75beb-4e28-45ff-8d48-9710de06dcd0 bucket :buck[ffaa5ba4-c1bd-4c17-b176-2fe34004b4c5.16191.1]",
"radosgw-admin sync info --bucket buck { \"id\": \"pipe1\", \"source\": { \"zone\": \"secondary\", \"bucket\": \"buck:ffaa5ba4-c1bd-4c17-b176-2fe34004b4c5.16191.1\" }, \"dest\": { \"zone\": \"primary\", \"bucket\": \"buck:ffaa5ba4-c1bd-4c17-b176-2fe34004b4c5.16191.1\" }, \"params\": { \"source\": { \"filter\": { \"tags\": [] } }, \"dest\": {}, \"priority\": 0, \"mode\": \"system\", \"user\": \"\" } }, { \"id\": \"pipe1\", \"source\": { \"zone\": \"primary\", \"bucket\": \"buck:ffaa5ba4-c1bd-4c17-b176-2fe34004b4c5.16191.1\" }, \"dest\": { \"zone\": \"secondary\", \"bucket\": \"buck:ffaa5ba4-c1bd-4c17-b176-2fe34004b4c5.16191.1\" }, \"params\": { \"source\": { \"filter\": { \"tags\": [] } }, \"dest\": {}, \"priority\": 0, \"mode\": \"system\", \"user\": \"\" } }",
"radosgw-admin sync group modify --group-id buck-default --status forbidden --bucket buck { \"groups\": [ { \"id\": \"buck-default\", \"data_flow\": {}, \"pipes\": [ { \"id\": \"pipe1\", \"source\": { \"bucket\": \"*\", \"zones\": [ \"*\" ] }, \"dest\": { \"bucket\": \"*\", \"zones\": [ \"*\" ] }, \"params\": { \"source\": { \"filter\": { \"tags\": [] } }, \"dest\": {}, \"priority\": 0, \"mode\": \"system\", } } ], \"status\": \"forbidden\" } ] }",
"radosgw-admin sync info --bucket buck { \"sources\": [], \"dests\": [], \"hints\": { \"sources\": [], \"dests\": [] }, \"resolved-hints-1\": { \"sources\": [], \"dests\": [] }, \"resolved-hints\": { \"sources\": [], \"dests\": [] } } Sync is disabled for bucket buck",
"radosgw-admin realm create --rgw-realm= REALM_NAME [--default]",
"radosgw-admin realm create --rgw-realm=test_realm --default",
"radosgw-admin realm default --rgw-realm=test_realm",
"radosgw-admin realm delete --rgw-realm= REALM_NAME",
"radosgw-admin realm delete --rgw-realm=test_realm",
"radosgw-admin realm get --rgw-realm= REALM_NAME",
"radosgw-admin realm get --rgw-realm=test_realm >filename.json",
"{ \"id\": \"0a68d52e-a19c-4e8e-b012-a8f831cb3ebc\", \"name\": \"test_realm\", \"current_period\": \"b0c5bbef-4337-4edd-8184-5aeab2ec413b\", \"epoch\": 1 }",
"radosgw-admin realm set --rgw-realm= REALM_NAME --infile= IN_FILENAME",
"radosgw-admin realm set --rgw-realm=test_realm --infile=filename.json",
"radosgw-admin realm list",
"radosgw-admin realm list-periods",
"radosgw-admin realm pull --url= URL_TO_MASTER_ZONE_GATEWAY --access-key= ACCESS_KEY --secret= SECRET_KEY",
"radosgw-admin realm rename --rgw-realm= REALM_NAME --realm-new-name= NEW_REALM_NAME",
"radosgw-admin zonegroup create --rgw-zonegroup= ZONE_GROUP_NAME [--rgw-realm= REALM_NAME ] [--master] [--default]",
"radosgw-admin zonegroup default --rgw-zonegroup=us",
"radosgw-admin period update --commit",
"radosgw-admin zonegroup add --rgw-zonegroup= ZONE_GROUP_NAME --rgw-zone= ZONE_NAME",
"radosgw-admin period update --commit",
"radosgw-admin zonegroup remove --rgw-zonegroup= ZONE_GROUP_NAME --rgw-zone= ZONE_NAME",
"radosgw-admin period update --commit",
"radosgw-admin zonegroup rename --rgw-zonegroup= ZONE_GROUP_NAME --zonegroup-new-name= NEW_ZONE_GROUP_NAME",
"radosgw-admin period update --commit",
"radosgw-admin zonegroup delete --rgw-zonegroup= ZONE_GROUP_NAME",
"radosgw-admin period update --commit",
"radosgw-admin zonegroup list",
"{ \"default_info\": \"90b28698-e7c3-462c-a42d-4aa780d24eda\", \"zonegroups\": [ \"us\" ] }",
"radosgw-admin zonegroup get [--rgw-zonegroup= ZONE_GROUP_NAME ]",
"{ \"id\": \"90b28698-e7c3-462c-a42d-4aa780d24eda\", \"name\": \"us\", \"api_name\": \"us\", \"is_master\": \"true\", \"endpoints\": [ \"http:\\/\\/rgw1:80\" ], \"hostnames\": [], \"hostnames_s3website\": [], \"master_zone\": \"9248cab2-afe7-43d8-a661-a40bf316665e\", \"zones\": [ { \"id\": \"9248cab2-afe7-43d8-a661-a40bf316665e\", \"name\": \"us-east\", \"endpoints\": [ \"http:\\/\\/rgw1\" ], \"log_meta\": \"true\", \"log_data\": \"true\", \"bucket_index_max_shards\": 11, \"read_only\": \"false\" }, { \"id\": \"d1024e59-7d28-49d1-8222-af101965a939\", \"name\": \"us-west\", \"endpoints\": [ \"http:\\/\\/rgw2:80\" ], \"log_meta\": \"false\", \"log_data\": \"true\", \"bucket_index_max_shards\": 11, \"read_only\": \"false\" } ], \"placement_targets\": [ { \"name\": \"default-placement\", \"tags\": [] } ], \"default_placement\": \"default-placement\", \"realm_id\": \"ae031368-8715-4e27-9a99-0c9468852cfe\" }",
"radosgw-admin zonegroup set --infile zonegroup.json",
"radosgw-admin period update --commit",
"{ \"zonegroups\": [ { \"key\": \"90b28698-e7c3-462c-a42d-4aa780d24eda\", \"val\": { \"id\": \"90b28698-e7c3-462c-a42d-4aa780d24eda\", \"name\": \"us\", \"api_name\": \"us\", \"is_master\": \"true\", \"endpoints\": [ \"http:\\/\\/rgw1:80\" ], \"hostnames\": [], \"hostnames_s3website\": [], \"master_zone\": \"9248cab2-afe7-43d8-a661-a40bf316665e\", \"zones\": [ { \"id\": \"9248cab2-afe7-43d8-a661-a40bf316665e\", \"name\": \"us-east\", \"endpoints\": [ \"http:\\/\\/rgw1\" ], \"log_meta\": \"true\", \"log_data\": \"true\", \"bucket_index_max_shards\": 11, \"read_only\": \"false\" }, { \"id\": \"d1024e59-7d28-49d1-8222-af101965a939\", \"name\": \"us-west\", \"endpoints\": [ \"http:\\/\\/rgw2:80\" ], \"log_meta\": \"false\", \"log_data\": \"true\", \"bucket_index_max_shards\": 11, \"read_only\": \"false\" } ], \"placement_targets\": [ { \"name\": \"default-placement\", \"tags\": [] } ], \"default_placement\": \"default-placement\", \"realm_id\": \"ae031368-8715-4e27-9a99-0c9468852cfe\" } } ], \"master_zonegroup\": \"90b28698-e7c3-462c-a42d-4aa780d24eda\", \"bucket_quota\": { \"enabled\": false, \"max_size_kb\": -1, \"max_objects\": -1 }, \"user_quota\": { \"enabled\": false, \"max_size_kb\": -1, \"max_objects\": -1 } }",
"radosgw-admin zonegroup-map set --infile zonegroupmap.json",
"radosgw-admin period update --commit",
"radosgw-admin zone create --rgw-zone= ZONE_NAME [--zonegroup= ZONE_GROUP_NAME ] [--endpoints= ENDPOINT_PORT [,<endpoint:port>] [--master] [--default] --access-key ACCESS_KEY --secret SECRET_KEY",
"radosgw-admin period update --commit",
"radosgw-admin zonegroup remove --rgw-zonegroup= ZONE_GROUP_NAME --rgw-zone= ZONE_NAME",
"radosgw-admin period update --commit",
"radosgw-admin zone delete --rgw-zone= ZONE_NAME",
"radosgw-admin period update --commit",
"ceph osd pool delete DELETED_ZONE_NAME .rgw.control DELETED_ZONE_NAME .rgw.control --yes-i-really-really-mean-it ceph osd pool delete DELETED_ZONE_NAME .rgw.data.root DELETED_ZONE_NAME .rgw.data.root --yes-i-really-really-mean-it ceph osd pool delete DELETED_ZONE_NAME .rgw.log DELETED_ZONE_NAME .rgw.log --yes-i-really-really-mean-it ceph osd pool delete DELETED_ZONE_NAME .rgw.users.uid DELETED_ZONE_NAME .rgw.users.uid --yes-i-really-really-mean-it",
"radosgw-admin zone modify [options] --access-key=<key> --secret/--secret-key=<key> --master --default --endpoints=<list>",
"radosgw-admin period update --commit",
"radosgw-admin zone list",
"radosgw-admin zone get [--rgw-zone= ZONE_NAME ]",
"{ \"domain_root\": \".rgw\", \"control_pool\": \".rgw.control\", \"gc_pool\": \".rgw.gc\", \"log_pool\": \".log\", \"intent_log_pool\": \".intent-log\", \"usage_log_pool\": \".usage\", \"user_keys_pool\": \".users\", \"user_email_pool\": \".users.email\", \"user_swift_pool\": \".users.swift\", \"user_uid_pool\": \".users.uid\", \"system_key\": { \"access_key\": \"\", \"secret_key\": \"\"}, \"placement_pools\": [ { \"key\": \"default-placement\", \"val\": { \"index_pool\": \".rgw.buckets.index\", \"data_pool\": \".rgw.buckets\"} } ] }",
"radosgw-admin zone set --rgw-zone=test-zone --infile zone.json",
"radosgw-admin period update --commit",
"radosgw-admin zone rename --rgw-zone= ZONE_NAME --zone-new-name= NEW_ZONE_NAME",
"radosgw-admin period update --commit"
]
| https://docs.redhat.com/en/documentation/red_hat_ceph_storage/6/html/object_gateway_guide/multisite-configuration-and-administration |
Chapter 1. Introduction to OpenShift Data Foundation | Chapter 1. Introduction to OpenShift Data Foundation Red Hat OpenShift Data Foundation is a highly integrated collection of cloud storage and data services for Red Hat OpenShift Container Platform. It is available as part of the Red Hat OpenShift Container Platform Service Catalog, packaged as an operator to facilitate simple deployment and management. Red Hat OpenShift Data Foundation services are primarily made available to applications by way of storage classes that represent the following components: Block storage devices, catering primarily to database workloads. Prime examples include Red Hat OpenShift Container Platform logging and monitoring, and PostgreSQL. Important Block storage should be used for any workload only when it does not require sharing the data across multiple containers. Shared and distributed file system, catering primarily to software development, messaging, and data aggregation workloads. Examples include Jenkins build sources and artifacts, Wordpress uploaded content, Red Hat OpenShift Container Platform registry, and messaging using JBoss AMQ. Multicloud object storage, featuring a lightweight S3 API endpoint that can abstract the storage and retrieval of data from multiple cloud object stores. On premises object storage, featuring a robust S3 API endpoint that scales to tens of petabytes and billions of objects, primarily targeting data intensive applications. Examples include the storage and access of row, columnar, and semi-structured data with applications like Spark, Presto, Red Hat AMQ Streams (Kafka), and even machine learning frameworks like TensorFlow and Pytorch. Note Running PostgreSQL workload on CephFS persistent volume is not supported and it is recommended to use RADOS Block Device (RBD) volume. For more information, see the knowledgebase solution ODF Database Workloads Must Not Use CephFS PVs/PVCs . Red Hat OpenShift Data Foundation version 4.x integrates a collection of software projects, including: Ceph, providing block storage, a shared and distributed file system, and on-premises object storage Ceph CSI, to manage provisioning and lifecycle of persistent volumes and claims NooBaa, providing a Multicloud Object Gateway OpenShift Data Foundation, Rook-Ceph, and NooBaa operators to initialize and manage OpenShift Data Foundation services. | null | https://docs.redhat.com/en/documentation/red_hat_openshift_data_foundation/4.15/html/red_hat_openshift_data_foundation_architecture/introduction-to-openshift-data-foundation-4_rhodf
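As an illustration of how applications consume the storage classes described above, the following is a minimal sketch of a PersistentVolumeClaim that requests block storage through the default RBD storage class created by OpenShift Data Foundation (ocs-storagecluster-ceph-rbd). The claim name, namespace, and size are placeholders, not values from the product documentation.
cat <<EOF | oc create -f -
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: db-data            # placeholder claim name
  namespace: my-app        # placeholder application namespace
spec:
  accessModes:
    - ReadWriteOnce        # block storage is not shared across containers, per the note above
  resources:
    requests:
      storage: 10Gi
  storageClassName: ocs-storagecluster-ceph-rbd
EOF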
Chapter 12. Building container images | Chapter 12. Building container images Building container images involves creating a blueprint for a containerized application. Blueprints rely on base images from other public repositories that define how the application should be installed and configured. Red Hat Quay supports the ability to build Docker and Podman container images. This functionality is valuable for developers and organizations who rely on containers and container orchestration. 12.1. Build contexts When building an image with Docker or Podman, a directory is specified to become the build context . This is true for both manual Builds and Build triggers, because the Build that is created by Red Hat Quay is no different from running docker build or podman build on your local machine. Red Hat Quay Build contexts are always specified in the subdirectory from the Build setup, and fall back to the root of the Build source if a directory is not specified. When a build is triggered, Red Hat Quay Build workers clone the Git repository to the worker machine, and then enter the Build context before conducting a Build. For Builds based on .tar archives, Build workers extract the archive and enter the Build context. For example: Extracted Build archive example ├── .git ├── Dockerfile ├── file └── subdir └── Dockerfile Imagine that the Extracted Build archive is the directory structure for a Github repository called example. If no subdirectory is specified in the Build trigger setup, or when manually starting the Build, the Build operates in the example directory. If a subdirectory is specified in the Build trigger setup, for example, subdir , only the Dockerfile within it is visible to the Build. This means that you cannot use the ADD command in the Dockerfile to add file , because it is outside of the Build context. Unlike Docker Hub, the Dockerfile is part of the Build context on Red Hat Quay. As a result, it must not appear in the .dockerignore file. 12.2. Tag naming for Build triggers Custom tags are available for use in Red Hat Quay. One option is to include any string of characters assigned as a tag for each built image. Alternatively, you can use the following tag templates on the Configure Tagging section of the build trigger to tag images with information from each commit: ${commit} : Full SHA of the issued commit ${parsed_ref.branch} : Branch information (if available) ${parsed_ref.tag} : Tag information (if available) ${parsed_ref.remote} : The remote name ${commit_info.date} : Date when the commit was issued ${commit_info.author.username} : Username of the author of the commit ${commit_info.short_sha} : First 7 characters of the commit SHA ${committer.properties.username} : Username of the committer This list is not complete, but does contain the most useful options for tagging purposes. You can find the complete tag template schema on this page . For more information, see Set up custom tag templates in build triggers for Red Hat Quay and Quay.io 12.3. Skipping a source control-triggered build To specify that a commit should be ignored by the Red Hat Quay build system, add the text [skip build] or [build skip] anywhere in your commit message. 12.4. Viewing and managing builds Repository Builds can be viewed and managed on the Red Hat Quay UI. Procedure Navigate to a Red Hat Quay repository using the UI. In the navigation pane, select Builds . 12.5.
Creating a new Build Red Hat Quay can create new Builds so long as FEATURE_BUILD_SUPPORT is set to true in the config.yaml file. Prerequisites You have navigated to the Builds page of your repository. FEATURE_BUILD_SUPPORT is set to true in your config.yaml file. Procedure On the Builds page, click Start New Build . When prompted, click Upload Dockerfile to upload a Dockerfile or an archive that contains a Dockerfile at the root directory. Click Start Build . Note Currently, users cannot specify the Docker build context when manually starting a build. Currently, BitBucket is unsupported on the Red Hat Quay v2 UI. You are redirected to the Build, which can be viewed in real-time. Wait for the Dockerfile Build to be completed and pushed. Optional. You can click Download Logs to download the logs, or Copy Logs to copy the logs. Click the back button to return to the Repository Builds page, where you can view the Build History. 12.6. Build triggers Build triggers invoke builds whenever the triggered condition is met, for example, a source control push, creating a webhook call , and so on. 12.6.1. Creating a Build trigger Use the following procedure to create a Build trigger using a custom Git repository. Note The following procedure assumes that you have not included Github credentials in your config.yaml file. Prerequisites You have navigated to the Builds page of your repository. Procedure On the Builds page, click Create Build Trigger . Select the desired platform, for example, Github, BitBucket, Gitlab, or use a custom Git repository. For this example, we are using a custom Git repository from Github. Enter a custom Git repository name, for example, [email protected]:<username>/<repo>.git . Then, click Next . When prompted, configure the tagging options by selecting one of, or both of, the following options: Tag manifest with the branch or tag name . When selecting this option, the built manifest is tagged with the name of the branch or tag for the git commit. Add latest tag if on default branch . When selecting this option, the built manifest is also tagged with latest if the build occurred on the default branch of the repository. Optionally, you can add a custom tagging template. There are multiple tag templates that you can enter here, including using short SHA IDs, timestamps, author names, committer, and branch names from the commit as tags. For more information, see "Tag naming for Build triggers". After you have configured tagging, click Next . When prompted, select the location of the Dockerfile to be built when the trigger is invoked. If the Dockerfile is located at the root of the git repository and named Dockerfile, enter /Dockerfile as the Dockerfile path. Then, click Next . When prompted, select the context for the Docker build. If the Dockerfile is located at the root of the Git repository, enter / as the build context directory. Then, click Next . Optional. Choose an optional robot account. This allows you to pull a private base image during the build process. If you know that a private base image is not used, you can skip this step. Click Next . Check for any verification warnings. If necessary, fix the issues before clicking Finish . You are alerted that the trigger has been successfully activated. Note that using this trigger requires the following actions: You must give the following public key read access to the git repository. You must set your repository to POST to the following URL to trigger a build. Save the SSH Public Key, then click Return to <organization_name>/<repository_name> .
You are redirected to the Builds page of your repository. On the Builds page, you now have a Build trigger. 12.6.2. Manually triggering a Build Builds can be triggered manually by using the following procedure. Procedure On the Builds page, click Start new build . When prompted, select Invoke Build Trigger . Click Run Trigger Now to manually start the process. After the build starts, you can see the Build ID on the Repository Builds page. 12.7. Setting up a custom Git trigger A custom Git trigger is a generic way for any Git server to act as a Build trigger. It relies solely on SSH keys and webhook endpoints. Everything else is left for the user to implement. 12.7.1. Creating a trigger Creating a custom Git trigger is similar to the creation of any other trigger, with the exception of the following: Red Hat Quay cannot automatically detect the proper Robot Account to use with the trigger. This must be done manually during the creation process. There are extra steps after the creation of the trigger that must be done. These steps are detailed in the following sections. 12.7.2. Custom trigger creation setup When creating a custom Git trigger, two additional steps are required: You must provide read access to the SSH public key that is generated when creating the trigger. You must set up a webhook that POSTs to the Red Hat Quay endpoint to trigger the build. The key and the URL are available by selecting View Credentials from the Settings , or gear icon. 12.7.2.1. SSH public key access Depending on the Git server configuration, there are multiple ways to install the SSH public key that Red Hat Quay generates for a custom Git trigger. For example, Git documentation describes a small server setup in which adding the key to $HOME/.ssh/authorized_keys would provide access for Builders to clone the repository. For any git repository management software that is not officially supported, there is usually a location to input the key, often labeled as Deploy Keys . 12.7.2.2. Webhook To automatically trigger a build, one must POST a .json payload to the webhook URL using the following format. This can be accomplished in various ways depending on the server setup, but for most cases can be done with a post-receive Git Hook . Note This request requires a Content-Type header containing application/json in order to be valid. Example webhook { "commit": "1c002dd", // required "ref": "refs/heads/master", // required "default_branch": "master", // required "commit_info": { // optional "url": "gitsoftware.com/repository/commits/1234567", // required "message": "initial commit", // required "date": "timestamp", // required "author": { // optional "username": "user", // required "avatar_url": "gravatar.com/user.png", // required "url": "gitsoftware.com/users/user" // required }, "committer": { // optional "username": "user", // required "avatar_url": "gravatar.com/user.png", // required "url": "gitsoftware.com/users/user" // required } } } | [
"example ├── .git ├── Dockerfile ├── file └── subdir └── Dockerfile",
"{ \"commit\": \"1c002dd\", // required \"ref\": \"refs/heads/master\", // required \"default_branch\": \"master\", // required \"commit_info\": { // optional \"url\": \"gitsoftware.com/repository/commits/1234567\", // required \"message\": \"initial commit\", // required \"date\": \"timestamp\", // required \"author\": { // optional \"username\": \"user\", // required \"avatar_url\": \"gravatar.com/user.png\", // required \"url\": \"gitsoftware.com/users/user\" // required }, \"committer\": { // optional \"username\": \"user\", // required \"avatar_url\": \"gravatar.com/user.png\", // required \"url\": \"gitsoftware.com/users/user\" // required } } }"
]
| https://docs.redhat.com/en/documentation/red_hat_quay/3.12/html/use_red_hat_quay/building-dockerfiles |
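The following is a minimal sketch of a post-receive Git hook that drives the custom Git trigger described above by POSTing the required payload fields with curl. The webhook URL is a placeholder; use the value shown under View Credentials for your trigger, and note that the Content-Type header must be application/json.
#!/bin/bash
# Placeholder: replace with the webhook URL shown in the trigger's View Credentials dialog.
QUAY_WEBHOOK_URL="https://quay.example.com/webhooks/push/trigger/<trigger-uuid>"
# A post-receive hook receives "<old-rev> <new-rev> <ref>" lines on stdin for each updated ref.
while read oldrev newrev ref; do
  payload="{\"commit\": \"${newrev}\", \"ref\": \"${ref}\", \"default_branch\": \"master\"}"
  curl -s -X POST -H "Content-Type: application/json" -d "${payload}" "${QUAY_WEBHOOK_URL}"
done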
6.2. Mounting a btrfs file system | 6.2. Mounting a btrfs file system To mount any device in the btrfs file system, use the following command: Other useful mount options include: device=/ dev / name Appending this option to the mount command tells btrfs to scan the named device for a btrfs volume. This is used to ensure the mount will succeed, as attempting to mount devices that are not btrfs will cause the mount to fail. Note This does not mean all devices will be added to the file system; it only scans them. max_inline= number Use this option to set the maximum amount of space (in bytes) that can be used to inline data within a metadata B-tree leaf. The default is 8192 bytes. For 4k pages it is limited to 3900 bytes due to additional headers that need to fit into the leaf. alloc_start= number Use this option to set where in the disk allocations start. thread_pool= number Use this option to assign the number of worker threads allocated. discard Use this option to enable discard/TRIM on freed blocks. noacl Use this option to disable the use of ACLs. space_cache Use this option to store the free space data on disk to make caching a block group faster. This is a persistent change and is safe to boot into old kernels. nospace_cache Use this option to disable the above space_cache . clear_cache Use this option to clear all the free space caches during mount. This is a safe option but will trigger the space cache to be rebuilt. As such, leave the file system mounted in order to let the rebuild process finish. This mount option is intended to be used once and only after problems are apparent with the free space. enospc_debug This option is used to debug problems with "no space left". recovery Use this option to enable autorecovery upon mount. | [
"mount / dev / device / mount-point"
]
| https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/storage_administration_guide/btrfs-mount |
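The options above can be combined on a single mount. The commands below are an illustrative sketch only; /dev/sdb and /dev/sdc stand in for members of a multi-device btrfs volume and /btrfs for the mount point.
# Scan an additional member device, persist the free space cache, and enable discard:
mount -t btrfs -o device=/dev/sdc,space_cache,discard /dev/sdb /btrfs
# Rebuild a suspect free space cache on the next mount, then leave the file system mounted so the rebuild can finish:
mount -t btrfs -o clear_cache /dev/sdb /btrfs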
Chapter 2. Deploying OpenShift Data Foundation on Red Hat OpenStack Platform in internal mode | Chapter 2. Deploying OpenShift Data Foundation on Red Hat OpenStack Platform in internal mode Deploying OpenShift Data Foundation on OpenShift Container Platform in internal mode using dynamic storage devices provided by Red Hat OpenStack Platform installer-provisioned infrastructure (IPI) enables you to create internal cluster resources. This results in internal provisioning of the base services, which helps to make additional storage classes available to applications. Ensure that you have addressed the requirements in Preparing to deploy OpenShift Data Foundation chapter before proceeding with the below steps for deploying using dynamic storage devices: Install the Red Hat OpenShift Data Foundation Operator . Create the OpenShift Data Foundation Cluster 2.1. Installing Red Hat OpenShift Data Foundation Operator You can install Red Hat OpenShift Data Foundation Operator using the Red Hat OpenShift Container Platform Operator Hub. Prerequisites Access to an OpenShift Container Platform cluster using an account with cluster-admin and operator installation permissions. You must have at least three worker or infrastructure nodes in the Red Hat OpenShift Container Platform cluster. Each node should include one disk and requires 3 disks (PVs). However, one PV remains eventually unused by default. This is an expected behavior. For additional resource requirements, see the Planning your deployment guide. Important When you need to override the cluster-wide default node selector for OpenShift Data Foundation, you can use the following command to specify a blank node selector for the openshift-storage namespace (create openshift-storage namespace in this case): Taint a node as infra to ensure only Red Hat OpenShift Data Foundation resources are scheduled on that node. This helps you save on subscription costs. For more information, see the How to use dedicated worker nodes for Red Hat OpenShift Data Foundation section in the Managing and Allocating Storage Resources guide. Procedure Log in to the OpenShift Web Console. Click Operators OperatorHub . Scroll or type OpenShift Data Foundation into the Filter by keyword box to find the OpenShift Data Foundation Operator. Click Install . Set the following options on the Install Operator page: Update Channel as stable-4.13 . Installation Mode as A specific namespace on the cluster . Installed Namespace as Operator recommended namespace openshift-storage . If Namespace openshift-storage does not exist, it is created during the operator installation. Select Approval Strategy as Automatic or Manual . If you select Automatic updates, then the Operator Lifecycle Manager (OLM) automatically upgrades the running instance of your Operator without any intervention. If you select Manual updates, then the OLM creates an update request. As a cluster administrator, you must then manually approve that update request to update the Operator to a newer version. Ensure that the Enable option is selected for the Console plugin . Click Install . Verification steps After the operator is successfully installed, a pop-up with a message, Web console update is available appears on the user interface. Click Refresh web console from this pop-up for the console changes to reflect. In the Web Console: Navigate to Installed Operators and verify that the OpenShift Data Foundation Operator shows a green tick indicating successful installation. 
Navigate to Storage and verify if the Data Foundation dashboard is available. 2.2. Enabling cluster-wide encryption with KMS using the Token authentication method You can enable the key value backend path and policy in the vault for token authentication. Prerequisites Administrator access to the vault. A valid Red Hat OpenShift Data Foundation Advanced subscription. For more information, see the knowledgebase article on OpenShift Data Foundation subscriptions . Carefully, select a unique path name as the backend path that follows the naming convention since you cannot change it later. Procedure Enable the Key/Value (KV) backend path in the vault. For vault KV secret engine API, version 1: For vault KV secret engine API, version 2: Create a policy to restrict the users to perform a write or delete operation on the secret: Create a token that matches the above policy: 2.3. Enabling cluster-wide encryption with KMS using the Kubernetes authentication method You can enable the Kubernetes authentication method for cluster-wide encryption using the Key Management System (KMS). Prerequisites Administrator access to Vault. A valid Red Hat OpenShift Data Foundation Advanced subscription. For more information, see the knowledgebase article on OpenShift Data Foundation subscriptions . The OpenShift Data Foundation operator must be installed from the Operator Hub. Select a unique path name as the backend path that follows the naming convention carefully. You cannot change this path name later. Note Use of Vault namespaces are not supported with the Kubernetes authentication method in OpenShift Data Foundation 4.11. Procedure Create a service account: where, <serviceaccount_name> specifies the name of the service account. For example: Create clusterrolebindings and clusterroles : For example: Create a secret for the serviceaccount token and CA certificate. where, <serviceaccount_name> is the service account created in the earlier step. Get the token and the CA certificate from the secret. Retrieve the OCP cluster endpoint. Fetch the service account issuer: Use the information collected in the step to setup the Kubernetes authentication method in Vault: Important To configure the Kubernetes authentication method in Vault when the issuer is empty: Enable the Key/Value (KV) backend path in Vault. For Vault KV secret engine API, version 1: For Vault KV secret engine API, version 2: Create a policy to restrict the users to perform a write or delete operation on the secret: Generate the roles: The role odf-rook-ceph-op is later used while you configure the KMS connection details during the creation of the storage system. 2.4. Creating an OpenShift Data Foundation cluster Create an OpenShift Data Foundation cluster after you install the OpenShift Data Foundation operator. Prerequisites The OpenShift Data Foundation operator must be installed from the Operator Hub. For more information, see Installing OpenShift Data Foundation Operator using the Operator Hub . Procedure In the OpenShift Web Console, click Operators Installed Operators to view all the installed operators. Ensure that the Project selected is openshift-storage . Click on the OpenShift Data Foundation operator, and then click Create StorageSystem . In the Backing storage page, select the following: Select Full Deployment for the Deployment type option. Select the Use an existing StorageClass option. Select the Storage Class . By default, it is set to standard . Click . 
In the Capacity and nodes page, provide the necessary information: Select a value for Requested Capacity from the dropdown list. It is set to 2 TiB by default. Note Once you select the initial storage capacity, cluster expansion is performed only using the selected usable capacity (three times of raw storage). In the Select Nodes section, select at least three available nodes. Optional: Select the Taint nodes checkbox to dedicate the selected nodes for OpenShift Data Foundation. For cloud platforms with multiple availability zones, ensure that the Nodes are spread across different Locations/availability zones. If the nodes selected do not match the OpenShift Data Foundation cluster requirements of an aggregated 30 CPUs and 72 GiB of RAM, a minimal cluster is deployed. For minimum starting node requirements, see the Resource requirements section in the Planning guide. Click . Optional: In the Security and network page, configure the following based on your requirements: To enable encryption, select Enable data encryption for block and file storage . Select either one or both the encryption levels: Cluster-wide encryption Encrypts the entire cluster (block and file). StorageClass encryption Creates encrypted persistent volume (block only) using encryption enabled storage class. Optional: Select the Connect to an external key management service checkbox. This is optional for cluster-wide encryption. From the Key Management Service Provider drop-down list, either select Vault or Thales CipherTrust Manager (using KMIP) . If you selected Vault , go to the step. If you selected Thales CipherTrust Manager (using KMIP) , go to step iii. Select an Authentication Method . Using Token authentication method Enter a unique Connection Name , host Address of the Vault server ('https://<hostname or ip>'), Port number and Token . Expand Advanced Settings to enter additional settings and certificate details based on your Vault configuration: Enter the Key Value secret path in Backend Path that is dedicated and unique to OpenShift Data Foundation. Optional: Enter TLS Server Name and Vault Enterprise Namespace . Upload the respective PEM encoded certificate file to provide the CA Certificate , Client Certificate and Client Private Key . Click Save and skip to step iv. Using Kubernetes authentication method Enter a unique Vault Connection Name , host Address of the Vault server ('https://<hostname or ip>'), Port number and Role name. Expand Advanced Settings to enter additional settings and certificate details based on your Vault configuration: Enter the Key Value secret path in Backend Path that is dedicated and unique to OpenShift Data Foundation. Optional: Enter TLS Server Name and Authentication Path if applicable. Upload the respective PEM encoded certificate file to provide the CA Certificate , Client Certificate and Client Private Key . Click Save and skip to step iv. To use Thales CipherTrust Manager (using KMIP) as the KMS provider, follow the steps below: Enter a unique Connection Name for the Key Management service within the project. In the Address and Port sections, enter the IP of Thales CipherTrust Manager and the port where the KMIP interface is enabled. For example: Address : 123.34.3.2 Port : 5696 Upload the Client Certificate , CA certificate , and Client Private Key . If StorageClass encryption is enabled, enter the Unique Identifier to be used for encryption and decryption generated above. The TLS Server field is optional and used when there is no DNS entry for the KMIP endpoint. 
For example, kmip_all_<port>.ciphertrustmanager.local . Select a Network . Click . To enable in-transit encryption, select In-transit encryption . Select a Network . Click . In the Review and create page, review the configuration details. To modify any configuration settings, click Back . Click Create StorageSystem . Verification steps To verify the final Status of the installed storage cluster: In the OpenShift Web Console, navigate to Installed Operators OpenShift Data Foundation Storage System ocs-storagecluster-storagesystem Resources . Verify that Status of StorageCluster is Ready and has a green tick mark to it. To verify that OpenShift Data Foundation is successfully installed, see Verifying your OpenShift Data Foundation deployment . Additional resources To enable Overprovision Control alerts, refer to Alerts in Monitoring guide. 2.5. Verifying OpenShift Data Foundation deployment Use this section to verify that OpenShift Data Foundation is deployed correctly. 2.5.1. Verifying the state of the pods Procedure Click Workloads Pods from the OpenShift Web Console. Select openshift-storage from the Project drop-down list. Note If the Show default projects option is disabled, use the toggle button to list all the default projects. For more information on the expected number of pods for each component and how it varies depending on the number of nodes, see Table 2.1, "Pods corresponding to OpenShift Data Foundation cluster" . Set filter for Running and Completed pods to verify that the following pods are in Running and Completed state: Table 2.1. Pods corresponding to OpenShift Data Foundation cluster Component Corresponding pods OpenShift Data Foundation Operator ocs-operator-* (1 pod on any storage node) ocs-metrics-exporter-* (1 pod on any storage node) odf-operator-controller-manager-* (1 pod on any storage node) odf-console-* (1 pod on any storage node) csi-addons-controller-manager-* (1 pod on any storage node) Rook-ceph Operator rook-ceph-operator-* (1 pod on any storage node) Multicloud Object Gateway noobaa-operator-* (1 pod on any storage node) noobaa-core-* (1 pod on any storage node) noobaa-db-pg-* (1 pod on any storage node) noobaa-endpoint-* (1 pod on any storage node) MON rook-ceph-mon-* (3 pods distributed across storage nodes) MGR rook-ceph-mgr-* (1 pod on any storage node) MDS rook-ceph-mds-ocs-storagecluster-cephfilesystem-* (2 pods distributed across storage nodes) CSI cephfs csi-cephfsplugin-* (1 pod on each storage node) csi-cephfsplugin-provisioner-* (2 pods distributed across storage nodes) rbd csi-rbdplugin-* (1 pod on each storage node) csi-rbdplugin-provisioner-* (2 pods distributed across storage nodes) rook-ceph-crashcollector rook-ceph-crashcollector-* (1 pod on each storage node) OSD rook-ceph-osd-* (1 pod for each device) rook-ceph-osd-prepare-ocs-deviceset-* (1 pod for each device) 2.5.2. Verifying the OpenShift Data Foundation cluster is healthy Procedure In the OpenShift Web Console, click Storage Data Foundation . In the Status card of the Overview tab, click Storage System and then click the storage system link from the pop up that appears. In the Status card of the Block and File tab, verify that the Storage Cluster has a green tick. In the Details card, verify that the cluster information is displayed. For more information on the health of the OpenShift Data Foundation cluster using the Block and File dashboard, see Monitoring OpenShift Data Foundation . 2.5.3. 
Verifying the Multicloud Object Gateway is healthy Procedure In the OpenShift Web Console, click Storage Data Foundation . In the Status card of the Overview tab, click Storage System and then click the storage system link from the pop up that appears. In the Status card of the Object tab, verify that both Object Service and Data Resiliency have a green tick. In the Details card, verify that the MCG information is displayed. For more information on the health of the OpenShift Data Foundation cluster using the object service dashboard, see Monitoring OpenShift Data Foundation . 2.5.4. Verifying that the specific storage classes exist Procedure Click Storage Storage Classes from the left pane of the OpenShift Web Console. Verify that the following storage classes are created with the OpenShift Data Foundation cluster creation: ocs-storagecluster-ceph-rbd ocs-storagecluster-cephfs openshift-storage.noobaa.io 2.6. Uninstalling OpenShift Data Foundation 2.6.1. Uninstalling OpenShift Data Foundation in Internal mode To uninstall OpenShift Data Foundation in Internal mode, refer to the knowledge base article on Uninstalling OpenShift Data Foundation . | [
"oc annotate namespace openshift-storage openshift.io/node-selector=",
"vault secrets enable -path=odf kv",
"vault secrets enable -path=odf kv-v2",
"echo ' path \"odf/*\" { capabilities = [\"create\", \"read\", \"update\", \"delete\", \"list\"] } path \"sys/mounts\" { capabilities = [\"read\"] }'| vault policy write odf -",
"vault token create -policy=odf -format json",
"oc -n openshift-storage create serviceaccount <serviceaccount_name>",
"oc -n openshift-storage create serviceaccount odf-vault-auth",
"oc -n openshift-storage create clusterrolebinding vault-tokenreview-binding --clusterrole=system:auth-delegator --serviceaccount=openshift-storage:_<serviceaccount_name>_",
"oc -n openshift-storage create clusterrolebinding vault-tokenreview-binding --clusterrole=system:auth-delegator --serviceaccount=openshift-storage:odf-vault-auth",
"cat <<EOF | oc create -f - apiVersion: v1 kind: Secret metadata: name: odf-vault-auth-token namespace: openshift-storage annotations: kubernetes.io/service-account.name: <serviceaccount_name> type: kubernetes.io/service-account-token data: {} EOF",
"SA_JWT_TOKEN=USD(oc -n openshift-storage get secret odf-vault-auth-token -o jsonpath=\"{.data['token']}\" | base64 --decode; echo) SA_CA_CRT=USD(oc -n openshift-storage get secret odf-vault-auth-token -o jsonpath=\"{.data['ca\\.crt']}\" | base64 --decode; echo)",
"OCP_HOST=USD(oc config view --minify --flatten -o jsonpath=\"{.clusters[0].cluster.server}\")",
"oc proxy & proxy_pid=USD! issuer=\"USD( curl --silent http://127.0.0.1:8001/.well-known/openid-configuration | jq -r .issuer)\" kill USDproxy_pid",
"vault auth enable kubernetes",
"vault write auth/kubernetes/config token_reviewer_jwt=\"USDSA_JWT_TOKEN\" kubernetes_host=\"USDOCP_HOST\" kubernetes_ca_cert=\"USDSA_CA_CRT\" issuer=\"USDissuer\"",
"vault write auth/kubernetes/config token_reviewer_jwt=\"USDSA_JWT_TOKEN\" kubernetes_host=\"USDOCP_HOST\" kubernetes_ca_cert=\"USDSA_CA_CRT\"",
"vault secrets enable -path=odf kv",
"vault secrets enable -path=odf kv-v2",
"echo ' path \"odf/*\" { capabilities = [\"create\", \"read\", \"update\", \"delete\", \"list\"] } path \"sys/mounts\" { capabilities = [\"read\"] }'| vault policy write odf -",
"vault write auth/kubernetes/role/odf-rook-ceph-op bound_service_account_names=rook-ceph-system,rook-ceph-osd,noobaa bound_service_account_namespaces=openshift-storage policies=odf ttl=1440h",
"vault write auth/kubernetes/role/odf-rook-ceph-osd bound_service_account_names=rook-ceph-osd bound_service_account_namespaces=openshift-storage policies=odf ttl=1440h"
]
| https://docs.redhat.com/en/documentation/red_hat_openshift_data_foundation/4.13/html/deploying_and_managing_openshift_data_foundation_using_red_hat_openstack_platform/deploying_openshift_data_foundation_on_red_hat_openstack_platform_in_internal_mode |
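The web console checks described in the verification steps above can also be approximated from the command line. The commands below are a sketch using standard oc queries; the resource and storage class names are the defaults created by the deployment.
# Pods in the openshift-storage namespace should be Running or Completed:
oc get pods -n openshift-storage
# The storage cluster should report the Ready phase:
oc get storagecluster -n openshift-storage
# The expected storage classes should be present:
oc get storageclass | grep -E 'ocs-storagecluster|openshift-storage.noobaa.io'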
4.2. Physical Volume Administration | 4.2. Physical Volume Administration This section describes the commands that perform the various aspects of physical volume administration. 4.2.1. Creating Physical Volumes The following subsections describe the commands used for creating physical volumes. 4.2.1.1. Setting the Partition Type If you are using a whole disk device for your physical volume, the disk must have no partition table. For DOS disk partitions, the partition id should be set to 0x8e using the fdisk or cfdisk command or an equivalent. For whole disk devices, only the partition table must be erased, which will effectively destroy all data on that disk. You can remove an existing partition table by zeroing the first sector with the following command: 4.2.1.2. Initializing Physical Volumes Use the pvcreate command to initialize a block device to be used as a physical volume. Initialization is analogous to formatting a file system. The following command initializes /dev/sdd1 , /dev/sde1 , and /dev/sdf1 for use as LVM physical volumes. To initialize partitions rather than whole disks, run the pvcreate command on the partition. The following example initializes /dev/hdb1 as an LVM physical volume for later use as part of an LVM logical volume. 4.2.1.3. Scanning for Block Devices You can scan for block devices that may be used as physical volumes with the lvmdiskscan command, as shown in the following example. | [
"dd if=/dev/zero of= PhysicalVolume bs=512 count=1",
"pvcreate /dev/sdd1 /dev/sde1 /dev/sdf1",
"pvcreate /dev/hdb1",
"lvmdiskscan /dev/ram0 [ 16.00 MB] /dev/sda [ 17.15 GB] /dev/root [ 13.69 GB] /dev/ram [ 16.00 MB] /dev/sda1 [ 17.14 GB] LVM physical volume /dev/VolGroup00/LogVol01 [ 512.00 MB] /dev/ram2 [ 16.00 MB] /dev/new_vg/lvol0 [ 52.00 MB] /dev/ram3 [ 16.00 MB] /dev/pkl_new_vg/sparkie_lv [ 7.14 GB] /dev/ram4 [ 16.00 MB] /dev/ram5 [ 16.00 MB] /dev/ram6 [ 16.00 MB] /dev/ram7 [ 16.00 MB] /dev/ram8 [ 16.00 MB] /dev/ram9 [ 16.00 MB] /dev/ram10 [ 16.00 MB] /dev/ram11 [ 16.00 MB] /dev/ram12 [ 16.00 MB] /dev/ram13 [ 16.00 MB] /dev/ram14 [ 16.00 MB] /dev/ram15 [ 16.00 MB] /dev/sdb [ 17.15 GB] /dev/sdb1 [ 17.14 GB] LVM physical volume /dev/sdc [ 17.15 GB] /dev/sdc1 [ 17.14 GB] LVM physical volume /dev/sdd [ 17.15 GB] /dev/sdd1 [ 17.14 GB] LVM physical volume 7 disks 17 partitions 0 LVM physical volume whole disks 4 LVM physical volumes"
]
| https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/4/html/cluster_logical_volume_manager/physvol_admin |
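As an illustrative sketch of the partition-type and verification steps above, parted can serve as the "equivalent" tool mentioned for marking a partition as LVM, and pvs/pvdisplay report the result of pvcreate. /dev/sdd is an example disk carried over from the commands above.
# Mark partition 1 on /dev/sdd as an LVM partition (equivalent to setting id 0x8e in fdisk):
parted /dev/sdd set 1 lvm on
# Review the initialized physical volumes:
pvs
pvdisplay /dev/sdd1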
14.3.3.2. Primary Domain Controller (PDC) using LDAP | 14.3.3.2. Primary Domain Controller (PDC) using LDAP The most powerful and versatile implementation of a Samba PDC is its ability to have an LDAP password backend. LDAP is highly scalable. LDAP database servers can be used for redundancy and fail-over by replicating to a Samba BDC. Groups of LDAP PDCs and BDCs with load balancing are ideal for an enterprise environment. On the other hand, LDAP configurations are inherently complex to setup and maintain. If SSL is to be incorporated with LDAP, the complexity instantly multiplies. Even so, with careful and precise planning, LDAP is an ideal solution for enterprise environments. Note the passdb backend directive as well as specific LDAP suffix specifications. Although the Samba configuration for LDAP is straightforward, the installation of OpenLDAP is not trivial. LDAP should be installed and configured before any Samba configuration. Also notice that Samba and LDAP do not need to be on the same server to function. It is highly recommended to separate the two in an enterprise environment. Note Implementing LDAP in this smb.conf file assumes that a working LDAP server has been successfully installed on ldap.example.com . | [
"[global] workgroup = DOCS netbios name = DOCS_SRV passdb backend = ldapsam:ldap://ldap.example.com username map = /etc/samba/smbusers security = user add user script = /usr/sbin/useradd -m %u delete user script = /usr/sbin/userdel -r %u add group script = /usr/sbin/groupadd %g delete group script = /usr/sbin/groupdel %g add user to group script = /usr/sbin/usermod -G %g %u add machine script = /usr/sbin/useradd -s /bin/false -d /dev/null -g machines %u The following specifies the default logon script Per user logon scripts can be specified in the user account using pdbedit logon script = scripts\\logon.bat This sets the default profile path. Set per user paths with pdbedit logon path = \\\\%L\\Profiles\\%U logon drive = H: logon home = \\\\%L\\%U domain logons = Yes os level = 35 preferred master = Yes domain master = Yes ldap suffix = dc=example,dc=com ldap machine suffix = ou=People ldap user suffix = ou=People ldap group suffix = ou=Group ldap idmap suffix = ou=People ldap admin dn = cn=Manager ldap ssl = no ldap passwd sync = yes idmap uid = 15000-20000 idmap gid = 15000-20000 Other resource shares"
]
| https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/4/html/reference_guide/samba-PDC-LDAP |
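Two follow-up commands are commonly needed for the LDAP-backed configuration above; they are shown here as an illustrative sketch, with LDAP_ADMIN_PASSWORD as a placeholder for the password of the ldap admin dn.
# Check the smb.conf syntax before restarting Samba:
testparm /etc/samba/smb.conf
# Store the ldap admin dn password in Samba's secrets.tdb so smbd can bind to the directory:
smbpasswd -w LDAP_ADMIN_PASSWORD
# Restart the Samba service so the LDAP passdb backend takes effect:
service smb restart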
Chapter 5. Using the web console for managing firewall | Chapter 5. Using the web console for managing firewall A firewall is a way to protect machines from any unwanted traffic from outside. It enables users to control incoming network traffic on host machines by defining a set of firewall rules. These rules are used to sort the incoming traffic and either block it or allow through. 5.1. Prerequisites The RHEL 7 web console configures the firewalld service. For details about the firewalld service, see firewalld . 5.2. Using the web console to run the firewall This section describes where and how to run the RHEL 7 system firewall in the web console. Note The web console configures the firewalld service. Procedure Log in to the web console. For details, see Logging in to the web console . Open the Networking section. In the Firewall section, click ON to run the firewall. If you do not see the Firewall box, log in to the web console with the administration privileges. At this stage, your firewall is running. To configure firewall rules, see Adding rules in the web console using the web console . 5.3. Using the web console to stop the firewall This section describes where and how to stop the RHEL 7 system firewall in the web console. Note The web console configures the firewalld service. Procedure Log in to the web console. For details, see Logging in to the web console . Open the Networking section. In the Firewall section, click OFF to stop it. If you do not see the Firewall box, log in to the web console with the administration privileges. At this stage, the firewall has been stopped and does not secure your system. 5.4. firewalld firewalld is a firewall service daemon that provides a dynamic customizable host-based firewall with a D-Bus interface. Being dynamic, it enables creating, changing, and deleting the rules without the necessity to restart the firewall daemon each time the rules are changed. firewalld uses the concepts of zones and services , that simplify the traffic management. Zones are predefined sets of rules. Network interfaces and sources can be assigned to a zone. The traffic allowed depends on the network your computer is connected to and the security level this network is assigned. Firewall services are predefined rules that cover all necessary settings to allow incoming traffic for a specific service and they apply within a zone. Services use one or more ports or addresses for network communication. Firewalls filter communication based on ports. To allow network traffic for a service, its ports must be open . firewalld blocks all traffic on ports that are not explicitly set as open. Some zones, such as trusted , allow all traffic by default. Additional resources firewalld(1) man page 5.5. Zones firewalld can be used to separate networks into different zones according to the level of trust that the user has decided to place on the interfaces and traffic within that network. A connection can only be part of one zone, but a zone can be used for many network connections. NetworkManager notifies firewalld of the zone of an interface. You can assign zones to interfaces with: NetworkManager firewall-config tool firewall-cmd command-line tool The RHEL web console The latter three can only edit the appropriate NetworkManager configuration files. If you change the zone of the interface using the web console, firewall-cmd or firewall-config , the request is forwarded to NetworkManager and is not handled by firewalld . 
The predefined zones are stored in the /usr/lib/firewalld/zones/ directory and can be instantly applied to any available network interface. These files are copied to the /etc/firewalld/zones/ directory only after they are modified. The default settings of the predefined zones are as follows: block Any incoming network connections are rejected with an icmp-host-prohibited message for IPv4 and icmp6-adm-prohibited for IPv6 . Only network connections initiated from within the system are possible. dmz For computers in your demilitarized zone that are publicly-accessible with limited access to your internal network. Only selected incoming connections are accepted. drop Any incoming network packets are dropped without any notification. Only outgoing network connections are possible. external For use on external networks with masquerading enabled, especially for routers. You do not trust the other computers on the network to not harm your computer. Only selected incoming connections are accepted. home For use at home when you mostly trust the other computers on the network. Only selected incoming connections are accepted. internal For use on internal networks when you mostly trust the other computers on the network. Only selected incoming connections are accepted. public For use in public areas where you do not trust other computers on the network. Only selected incoming connections are accepted. trusted All network connections are accepted. work For use at work where you mostly trust the other computers on the network. Only selected incoming connections are accepted. One of these zones is set as the default zone. When interface connections are added to NetworkManager , they are assigned to the default zone. On installation, the default zone in firewalld is set to be the public zone. The default zone can be changed. Note The network zone names have been chosen to be self-explanatory and to allow users to quickly make a reasonable decision. To avoid any security problems, review the default zone configuration and disable any unnecessary services according to your needs and risk assessments. Additional resources ` firewalld.zone(5) man page 5.6. Zones in the web console Important Firewall zones are new in RHEL 7.7.0. The Red Hat Enterprise Linux web console implements major features of the firewalld service and enables you to: Add predefined firewall zones to a particular interface or range of IP addresses Configure zones with selecting services into the list of enabled services Disable a service by removing this service from the list of enabled service Remove a zone from an interface 5.7. Enabling zones using the web console The web console enables you to apply predefined and existing firewall zones on a particular interface or a range of IP addresses. This section describes how to enable a zone on an interface. Prerequisites The web console has been installed. For details, see Installing the web console . The firewall must be enabled. For details, see Running the firewall in the web console . Procedure Log in to the RHEL web console with administration privileges. For details, see Logging in to the web console . Click Networking . Click on the Firewall box title. If you do not see the Firewall box, log in to the web console with the administrator privileges. In the Firewall section, click Add Services . Click on the Add Zone button. In the Add Zone dialog box, select a zone from the Trust level scale. You can see here all zones predefined in the firewalld service. 
In the Interfaces part, select an interface or interfaces on which the selected zone is applied. In the Allowed Addresses part, you can select whether the zone is applied on: the whole subnet or a range of IP addresses in the following format: 192.168.1.0 192.168.1.0/24 192.168.1.0/24, 192.168.1.0 Click on the Add zone button. Verify the configuration in Active zones . 5.8. Enabling services on the firewall using the web console By default, services are added to the default firewall zone. If you use more firewall zones on more network interfaces, you must select a zone first and then add the service with port. The web console displays predefined firewalld services and you can add them to active firewall zones. Important The web console configures the firewalld service. The web console does not allow generic firewalld rules which are not listed in the web console. Prerequisites The web console has been installed. For details, see Installing the web console . The firewall must be enabled. For details, see Running the firewall in the web console . Procedure Log in to the RHEL web console with administrator privileges. For details, see Logging in to the web console . Click Networking . Click on the Firewall box title. If you do not see the Firewall box, log in to the web console with the administrator privileges. In the Firewall section, click Add Services . In the Add Services dialog box, select a zone for which you want to add the service. The Add Services dialog box includes a list of active firewall zones only if the system includes multiple active zones. If the system uses just one (the default) zone, the dialog does not include zone settings. In the Add Services dialog box, find the service you want to enable on the firewall. Enable desired services. Click Add Services . At this point, the web console displays the service in the list of Allowed Services . 5.9. Configuring custom ports using the web console The web console allows you to add: Services listening on standard ports: Section 5.8, "Enabling services on the firewall using the web console" Services listening on custom ports. This section describes how to add services with custom ports configured. Prerequisites The web console has been installed. For details, see Installing the web console . The firewall must be enabled. For details, see Running the firewall in the web console . Procedure Log in to the RHEL web console with administrator privileges. For details, see Logging in to the web console . Click Networking . Click on the Firewall box title. If you do not see the Firewall box, log in to the web console with the administration privileges. In the Firewall section, click Add Services . In the Add Services dialog box, select a zone for which you want to add the service. The Add Services dialog box includes a list of active firewall zones only if the system includes multiple active zones. If the system uses just one (the default) zone, the dialog does not include zone settings. In the Add Ports dialog box, click on the Custom Ports radio button. In the TCP and UDP fields, add ports according to examples. You can add ports in the following formats: Port numbers such as 22 Range of port numbers such as 5900-5910 Aliases such as nfs, rsync Note You can add multiple values into each field. Values must be separated with the comma and without the space, for example: 8080,8081,http After adding the port number in the TCP and/or UDP fields, verify the service name in the Name field. 
The Name field displays the name of the service for which is this port reserved. You can rewrite the name if you are sure that this port is free to use and no server needs to communicate on this port. In the Name field, add a name for the service including defined ports. Click on the Add Ports button. To verify the settings, go to the Firewall page and find the service in the list of Allowed Services . 5.10. Disabling zones using the web console This section describes how to disable a firewall zone in your firewall configuration using the web console. Prerequisites The web console has been installed. For details, see Installing the web console . Procedure Log in to the RHEL web console with administrator privileges. For details, see Logging in to the web console . Click Networking . Click on the Firewall box title. If you do not see the Firewall box, log in to the web console with the administrator privileges. On the Active zones table, click on the Delete icon at the zone you want to remove. The zone is now disabled and the interface does not include opened services and ports which were configured in the zone. | null | https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/managing_systems_using_the_rhel_7_web_console/using-the-web-console-for-managing-firewall_system-management-using-the-RHEL-7-web-console |
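The chapter above drives the firewalld service, so the state that the web console creates can also be inspected or adjusted from a shell. The commands below are a hedged sketch rather than steps from the chapter: they assume a host where firewalld is already running, and the public zone and port 8080/tcp are examples only.

firewall-cmd --state                                        # confirms firewalld is running (the ON/OFF toggle)
firewall-cmd --get-active-zones                             # zones bound to interfaces (the Active zones table)
firewall-cmd --zone=public --list-services                  # services allowed in a zone (the Allowed Services list)
firewall-cmd --permanent --zone=public --add-port=8080/tcp  # a custom port, as in the Custom Ports dialog
firewall-cmd --reload                                       # make the permanent change take effect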
Preface | Preface Red Hat OpenShift Data Foundation supports deployment on existing Red Hat OpenShift Container Platform (RHOCP) bare metal clusters in connected or disconnected environments along with out-of-the-box support for proxy environments. Both internal and external OpenShift Data Foundation clusters are supported on bare metal. See Planning your deployment and Preparing to deploy OpenShift Data Foundation for more information about deployment requirements. To deploy OpenShift Data Foundation, follow the appropriate deployment process based on your requirement: Internal mode Deploy using local storage devices Deploy standalone Multicloud Object Gateway component External mode | null | https://docs.redhat.com/en/documentation/red_hat_openshift_data_foundation/4.16/html/deploying_openshift_data_foundation_using_bare_metal_infrastructure/preface-baremetal |
Chapter 61. Work item definitions | Chapter 61. Work item definitions Red Hat Process Automation Manager requires a work item definition (WID) file to identify the data fields to show in Business Central and accept API calls. The WID file is a mapping between user interactions with Red Hat Process Automation Manager and the data that is passed to the work item handler. The WID file also handles the UI details such as the name of the custom task, the category it is displayed as on the palette in Business Central, the icon used to designate the custom task, and the work item handler the custom task will map to. In Red Hat Process Automation Manager you can create a WID file in two ways: Use a @Wid annotation when coding the work item handler. Create a .wid text file. For example, definitions-example.wid . 61.1. @Wid Annotation The @Wid annotation is automatically created when you generate a work item handler project using the Maven archetype. You can also add the annotation manually. @Wid Example Table 61.1. @Wid parameter descriptions Description @Wid Top-level annotation to auto-generate WID files. widfile Name of the file that is automatically created for the custom task when it is deployed in Red Hat Process Automation Manager. name Name of the custom task, used internally. This name must be unique to custom tasks deployed in Red Hat Process Automation Manager. displayName Displayed name of the custom task. This name is displayed in the palette in Business Central. icon Path from src/main/resources/ to an icon located in the current project. The icon is displayed in the palette in Business Central. The icon, if specified, must be a PNG or GIF file and 16x16 pixels. This value can be left blank to use a default "Service Task" icon. description Description of the custom task. defaultHandler The work item handler Java class that is linked to the custom task. This entry is in the format <language> : <class> . Red Hat Process Automation Manager recommends using mvel as the language value for this attribute but java can also be used. For more information about mvel, see MVEL Documentation . documentation Path to an HTML file in the current project that contains a description of the custom task. @WidParameter Child annotation of @Wid . Specifies values that will be populated in the Business Central GUI or expected by API calls as data inputs for the custom task. More than one parameter can be specified: name - A name for the parameter. Note Due to the possibility of this name being used in API calls over transfer methods such as REST or SOAP, this name should not contain spaces or special characters. required - Boolean value indicating whether the parameter is required for the custom task to execute. @WidResult Child annotation of @Wid . Specifies values that will be populated in the Business Central GUI or expected by API calls as data outputs for the custom task. You can specify more than one result: name - A name for the result. Note Due to the possibility of this name being used in API calls over transfer methods such as REST or SOAP, this name should not contain spaces or special characters. @WidMavenDepends Child annotation of @Wid . Specifies Maven dependencies that will be required for the correct functioning of the work item handler. You can specify more than one dependency: group - Maven group ID of the dependency. artifact - Maven artifact ID of the dependency. version - Maven version number of the dependency. @WidService Child annotation of @Wid . 
Specifies values that will be populated in the service repository. category - The UI palette category that the handler will be placed. This value should match the category field of the @Wid annotation. description - Description of the handler that will be displayed in the service repository. keywords - Comma-separated list of keywords that apply to the handler. Note: Currently not used by the Business Central service repository. action - The @WidAction object. authinfo - The @WidAuth object. Optional. @WidAction Object of @WidService . title - The title for the handler action. description - The description for the handler action. @WidAuth Object of @WidService . required - The boolean value that determines whether authentication is required. params - The array containing the authentication parameters required. paramsdescription - The array containing the descriptions for each authentication parameter. referencesite - The URL to where the handler documentation can be found. Note: Currently not used by the Business Central service repository. 61.2. Text File A global WorkDefinitions WID text file is automatically generated by new projects when a business process is added. The WID text file is similar to the JSON format but is not a completely valid JSON file. You can open this file in Business Central. You can create additional WID files by selecting Add Asset > Work item definitions from an existing project. Text file example [ [ "name" : "MyWorkItemDefinitions", "displayName" : "MyWorkItemDefinitions", "category" : "", "description" : "", "defaultHandler" : "mvel: new com.redhat.MyWorkItemWorkItemHandler()", "documentation" : "myworkitem/index.html", "parameters" : [ "SampleParam" : new StringDataType(), "SampleParamTwo" : new StringDataType() ], "results" : [ "SampleResult" : new StringDataType() ], "mavenDependencies" : [ "com.redhat:myworkitem:7.52.0.Final-example-00007" ], "icon" : "" ] ] The file is structured as a plain-text file using a JSON-like structure. The filename extension is .wid . Table 61.2. Text file parameter descriptions Description name Name of the custom task, used internally. This name must be unique to custom tasks deployed in Red Hat Process Automation Manager. displayName Displayed name of the custom task. This name is displayed in the palette in Business Central. icon Path from src/main/resources/ to an icon located in the current project. The icon is displayed in the palette in Business Central. The icon, if specified, must be a PNG or GIF file and 16x16 pixels. This value can be left blank to use a default "Service Task" icon. category Name of a category within the Business Central palette under which this custom task is displayed. description Description of the custom task. defaultHandler The work item handler Java class that is linked to the custom task. This entry is in the format <language> : <class> . Red Hat Process Automation Manager recommends using mvel as the language value for this attribute but java can also be used. For more information about mvel, see MVEL Documentation . documentation Path to an HTML file in the current project that contains a description of the custom task. parameters Specifies the values to be populated in the Business Central GUI or expected by API calls as data inputs for the custom task. Parameters use the <key> : <DataType> format. Accepted data types are StringDataType() , IntegerDataType() , and ObjectDataType() . More than one parameter can be specified. 
results Specifies the values to be populated in the Business Central GUI or expected by API calls as data outputs for the custom task. Results use the <key> : <DataType> format. Accepted data types are StringDataType() , IntegerDataType() , and ObjectDataType() . More than one result can be specified. mavenDependencies Optional: Specifies Maven dependencies required for the correct functioning of the work item handler. Dependencies can also be specified in the work item handler pom.xml file. Dependencies are in the format <group>:<artifact>:<version> . More than one dependency may be specified Red Hat Process Automation Manager tries to locate a .wid file in two locations by default: Within Business Central in the project's top-level global/ directory. This is the location of the default WorkDefinitions.wid file that is created automatically when a project first adds a business process asset. Within Business Central in the project's src/main/resources/ directory. This is where WID files created within a project in Business Central will be placed. A WID file may be created at any level of a Java package, so a WID file created at a package location of <default> will be created directly inside src/main/resources/ while a WID file created at a package location of com.redhat will be created at src/main/resources/com/redhat/ Warning Red Hat Process Automation Manager does not validate that the value for the defaultHandler tag is executable or is a valid Java class. Specifying incorrect or invalid classes for this tag will return errors. | [
"@Wid(widfile=\"MyWorkItemDefinitions.wid\", name=\"MyWorkItemDefinitions\", displayName=\"MyWorkItemDefinitions\", icon=\"\", defaultHandler=\"mvel: new com.redhat.MyWorkItemWorkItemHandler()\", documentation = \"myworkitem/index.html\", parameters={ @WidParameter(name=\"SampleParam\", required = true), @WidParameter(name=\"SampleParamTwo\", required = true) }, results={ @WidResult(name=\"SampleResult\") }, mavenDepends={ @WidMavenDepends(group=\"com.redhat\", artifact=\"myworkitem\", version=\"7.52.0.Final-example-00007\") }, serviceInfo={ @WidService(category = \"myworkitem\", description = \"USD{description}\", keywords = \"\", action = @WidAction(title = \"Sample Title\"), authinfo = @WidAuth(required = true, params = {\"SampleParam\", \"SampleParamTwo\"}, paramsdescription = {\"SampleParam\", \"SampleParamTwo\"}, referencesite = \"referenceSiteURL\")) } )",
"[ [ \"name\" : \"MyWorkItemDefinitions\", \"displayName\" : \"MyWorkItemDefinitions\", \"category\" : \"\", \"description\" : \"\", \"defaultHandler\" : \"mvel: new com.redhat.MyWorkItemWorkItemHandler()\", \"documentation\" : \"myworkitem/index.html\", \"parameters\" : [ \"SampleParam\" : new StringDataType(), \"SampleParamTwo\" : new StringDataType() ], \"results\" : [ \"SampleResult\" : new StringDataType() ], \"mavenDependencies\" : [ \"com.redhat:myworkitem:7.52.0.Final-example-00007\" ], \"icon\" : \"\" ] ]"
]
| https://docs.redhat.com/en/documentation/red_hat_process_automation_manager/7.13/html/developing_process_services_in_red_hat_process_automation_manager/custom-tasks-work-item-definitions-con-custom-tasks |
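The WID examples above point the defaultHandler at a class ( com.redhat.MyWorkItemWorkItemHandler ) that the chapter does not show. The Java sketch below is an assumption about what a minimal handler consuming SampleParam and SampleParamTwo and returning SampleResult could look like; only the class and parameter names are taken from the examples above, the rest uses the standard org.kie.api work item interfaces.

package com.redhat;

import java.util.HashMap;
import java.util.Map;
import org.kie.api.runtime.process.WorkItem;
import org.kie.api.runtime.process.WorkItemHandler;
import org.kie.api.runtime.process.WorkItemManager;

public class MyWorkItemWorkItemHandler implements WorkItemHandler {

    @Override
    public void executeWorkItem(WorkItem workItem, WorkItemManager manager) {
        // Read the data inputs declared in the WID file.
        String sampleParam = (String) workItem.getParameter("SampleParam");
        String sampleParamTwo = (String) workItem.getParameter("SampleParamTwo");

        // Build the data output declared in the WID file.
        Map<String, Object> results = new HashMap<>();
        results.put("SampleResult", "processed: " + sampleParam + ", " + sampleParamTwo);

        // Notify the engine that the work item finished so the process can continue.
        manager.completeWorkItem(workItem.getId(), results);
    }

    @Override
    public void abortWorkItem(WorkItem workItem, WorkItemManager manager) {
        // Release any resources here, then notify the engine of the abort.
        manager.abortWorkItem(workItem.getId());
    }
}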
Chapter 21. Establishing remote client connections | Chapter 21. Establishing remote client connections Connect to Data Grid clusters from the Data Grid Console, Command Line Interface (CLI), and remote clients. 21.1. Client connection details Client connections to Data Grid require the following information: Hostname Port Authentication credentials, if required TLS certificate, if you use encryption Hostnames The hostname you use depends on whether clients are running on the same OpenShift cluster as Data Grid. Client applications running on the same OpenShift cluster use the internal service name for the Data Grid cluster. metadata: name: infinispan Client applications running on a different OpenShift, or outside OpenShift, use a hostname that depends on how Data Grid is exposed on the network. A LoadBalancer service uses the URL for the load balancer. A NodePort service uses the node hostname. An Red Hat OpenShift Route uses either a custom hostname that you define or a hostname that the system generates. Ports Client connections on OpenShift and a through LoadBalancer service use port 11222 . NodePort services use a port in the range of 30000 to 60000 . Routes use either port 80 (unencrypted) or 443 (encrypted). Additional resources Configuring Network Access to Data Grid Retrieving Credentials Retrieving TLS Certificates 21.2. Connecting to Data Grid clusters with remote shells Start a remote shell session to Data Grid clusters and use the command line interface (CLI) to work with Data Grid resources and perform administrative operations. Prerequisites Have kubectl-infinispan on your PATH . Have valid Data Grid credentials. Procedure Run the infinispan shell command to connect to your Data Grid cluster. Note If you have access to authentication secrets and there is only one Data Grid user the kubectl-infinispan plugin automatically detects your credentials and authenticates to Data Grid. If your deployment has multiple Data Grid credentials, specify a user with the --username argument and enter the corresponding password when prompted. Perform CLI operations as required. Tip Press the tab key or use the --help argument to view available options and help text. Use the quit command to end the remote shell session. Additional resources Using the Data Grid Command Line Interface 21.3. Accessing Data Grid Console Access the console to create caches, perform adminstrative operations, and monitor your Data Grid clusters. Prerequisites Expose Data Grid on the network so you can access the console through a browser. For example, configure a LoadBalancer service or create a Route . Procedure Access the console from any browser at USDHOSTNAME:USDPORT . Replace USDHOSTNAME:USDPORT with the network location where Data Grid is available. Note The Data Grid Console should only be accessed via OpenShift services or an OpenShift Route exposing port 11222. 21.4. Hot Rod clients Hot Rod is a binary TCP protocol that Data Grid provides for high-performance data transfer capabilities with remote clients. Client intelligence The Hot Rod protocol includes a mechanism that provides clients with an up-to-date view of the cache topology. Client intelligence improves performance by reducing the number of network hops for read and write operations. Clients running in the same OpenShift cluster can access internal IP addresses for Data Grid pods so you can use any client intelligence. 
HASH_DISTRIBUTION_AWARE is the default intelligence mechanism and enables clients to route requests to primary owners, which provides the best performance for Hot Rod clients. Clients running on a different OpenShift, or outside OpenShift, can access Data Grid by using a LoadBalancer , NodePort , or OpenShift Route . Important Hot Rod client connections via OpenShift Route require encryption. You must configure TLS with SNI otherwise the Hot Rod connection fails. For unencrypted Hot Rod client connections, you must use a LoadBalancer service or a NodePort service. Hot Rod clients must use BASIC intelligence in the following situations: Connecting to Data Grid through a LoadBalancer service, a NodePort service, or an OpenShift Route . Failing over to a different OpenShift cluster when using cross-site replication. OpenShift cluster administrators can define network policies that restrict traffic to Data Grid. In some cases network isolation policies can require you to use BASIC intelligence even when clients are running in the same OpenShift cluster but a different namespace. 21.4.1. Hot Rod client configuration API You can programmatically configure Hot Rod client connections with the ConfigurationBuilder interface. Note Replace USDSERVICE_HOSTNAME in the following examples with the internal service name of your Data Grid cluster. metadata: name: infinispan On OpenShift ConfigurationBuilder import org.infinispan.client.hotrod.configuration.ConfigurationBuilder; import org.infinispan.client.hotrod.configuration.SaslQop; import org.infinispan.client.hotrod.impl.ConfigurationProperties; ... ConfigurationBuilder builder = new ConfigurationBuilder(); builder.addServer() .host("USDHOSTNAME") .port(ConfigurationProperties.DEFAULT_HOTROD_PORT) .security().authentication() .username("username") .password("changeme") .realm("default") .saslQop(SaslQop.AUTH) .saslMechanism("SCRAM-SHA-512") .ssl() .sniHostName("USDSERVICE_HOSTNAME") .trustStoreFileName("/var/run/secrets/kubernetes.io/serviceaccount/service-ca.crt") .trustStoreType("pem"); hotrod-client.properties # Connection infinispan.client.hotrod.server_list=USDHOSTNAME:USDPORT # Authentication infinispan.client.hotrod.use_auth=true infinispan.client.hotrod.auth_username=developer infinispan.client.hotrod.auth_password=USDPASSWORD infinispan.client.hotrod.auth_server_name=USDCLUSTER_NAME infinispan.client.hotrod.sasl_properties.javax.security.sasl.qop=auth infinispan.client.hotrod.sasl_mechanism=SCRAM-SHA-512 # Encryption infinispan.client.hotrod.sni_host_name=USDSERVICE_HOSTNAME infinispan.client.hotrod.trust_store_file_name=/var/run/secrets/kubernetes.io/serviceaccount/service-ca.crt infinispan.client.hotrod.trust_store_type=pem Outside OpenShift ConfigurationBuilder import org.infinispan.client.hotrod.configuration.ClientIntelligence; import org.infinispan.client.hotrod.configuration.ConfigurationBuilder; import org.infinispan.client.hotrod.configuration.SaslQop; ... ConfigurationBuilder builder = new ConfigurationBuilder(); builder.addServer() .host("USDHOSTNAME") .port("USDPORT") .security().authentication() .username("username") .password("changeme") .realm("default") .saslQop(SaslQop.AUTH) .saslMechanism("SCRAM-SHA-512") .ssl() .sniHostName("USDSERVICE_HOSTNAME") //Create a client trust store with tls.crt from your project. 
.trustStoreFileName("/path/to/truststore.pkcs12") .trustStorePassword("trust_store_password") .trustStoreType("PCKS12"); builder.clientIntelligence(ClientIntelligence.BASIC); hotrod-client.properties # Connection infinispan.client.hotrod.server_list=USDHOSTNAME:USDPORT # Client intelligence infinispan.client.hotrod.client_intelligence=BASIC # Authentication infinispan.client.hotrod.use_auth=true infinispan.client.hotrod.auth_username=developer infinispan.client.hotrod.auth_password=USDPASSWORD infinispan.client.hotrod.auth_server_name=USDCLUSTER_NAME infinispan.client.hotrod.sasl_properties.javax.security.sasl.qop=auth infinispan.client.hotrod.sasl_mechanism=SCRAM-SHA-512 # Encryption infinispan.client.hotrod.sni_host_name=USDSERVICE_HOSTNAME # Create a client trust store with tls.crt from your project. infinispan.client.hotrod.trust_store_file_name=/path/to/truststore.pkcs12 infinispan.client.hotrod.trust_store_password=trust_store_password infinispan.client.hotrod.trust_store_type=PCKS12 21.4.2. Configuring Hot Rod clients for certificate authentication If you enable client certificate authentication, clients must present valid certificates when negotiating connections with Data Grid. Validate strategy If you use the Validate strategy, you must configure clients with a keystore so they can present signed certificates. You must also configure clients with Data Grid credentials and any suitable authentication mechanism. Authenticate strategy If you use the Authenticate strategy, you must configure clients with a keystore that contains signed certificates and valid Data Grid credentials as part of the distinguished name (DN). Hot Rod clients must also use the EXTERNAL authentication mechanism. Note If you enable security authorization, you should assign the Common Name (CN) from the client certificate a role with the appropriate permissions. The following example shows a Hot Rod client configuration for client certificate authentication with the Authenticate strategy: import org.infinispan.client.hotrod.configuration.ConfigurationBuilder; ... ConfigurationBuilder builder = new ConfigurationBuilder(); builder.security() .authentication() .saslMechanism("EXTERNAL") .ssl() .keyStoreFileName("/path/to/keystore") .keyStorePassword("keystorepassword".toCharArray()) .keyStoreType("PCKS12"); 21.4.3. Creating caches from Hot Rod clients You can remotely create caches on Data Grid clusters running on OpenShift with Hot Rod clients. However, Data Grid recommends that you create caches using Data Grid Console, the CLI, or with Cache CRs instead of with Hot Rod clients. Programmatically creating caches The following example shows how to add cache configurations to the ConfigurationBuilder and then create them with the RemoteCacheManager : import org.infinispan.client.hotrod.DefaultTemplate; import org.infinispan.client.hotrod.RemoteCache; import org.infinispan.client.hotrod.RemoteCacheManager; ... builder.remoteCache("my-cache") .templateName(DefaultTemplate.DIST_SYNC); builder.remoteCache("another-cache") .configuration("<infinispan><cache-container><distributed-cache name=\"another-cache\"><encoding media-type=\"application/x-protostream\"/></distributed-cache></cache-container></infinispan>"); try (RemoteCacheManager cacheManager = new RemoteCacheManager(builder.build())) { // Get a remote cache that does not exist. // Rather than return null, create the cache from a template. RemoteCache<String, String> cache = cacheManager.getCache("my-cache"); // Store a value. 
cache.put("hello", "world"); // Retrieve the value and print it. System.out.printf("key = %s\n", cache.get("hello")); This example shows how to create a cache named CacheWithXMLConfiguration using the XMLStringConfiguration() method to pass the cache configuration as XML: import org.infinispan.client.hotrod.RemoteCacheManager; import org.infinispan.commons.configuration.XMLStringConfiguration; ... private void createCacheWithXMLConfiguration() { String cacheName = "CacheWithXMLConfiguration"; String xml = String.format("<distributed-cache name=\"%s\">" + "<encoding media-type=\"application/x-protostream\"/>" + "<locking isolation=\"READ_COMMITTED\"/>" + "<transaction mode=\"NON_XA\"/>" + "<expiration lifespan=\"60000\" interval=\"20000\"/>" + "</distributed-cache>" , cacheName); manager.administration().getOrCreateCache(cacheName, new XMLStringConfiguration(xml)); System.out.println("Cache with configuration exists or is created."); } Using Hot Rod client properties When you invoke cacheManager.getCache() calls for named caches that do not exist, Data Grid creates them from the Hot Rod client properties instead of returning null. Add cache configuration to hotrod-client.properties as in the following example: 21.5. Accessing the REST API Data Grid provides a RESTful interface that you can interact with using HTTP clients. Prerequisites Expose Data Grid on the network so you can access the REST API. For example, configure a LoadBalancer service or create a Route . Procedure Access the REST API with any HTTP client at USDHOSTNAME:USDPORT/rest/v2 . Replace USDHOSTNAME:USDPORT with the network location where Data Grid listens for client connections. Additional resources Data Grid REST API | [
"metadata: name: infinispan",
"infinispan shell <cluster_name>",
"metadata: name: infinispan",
"import org.infinispan.client.hotrod.configuration.ConfigurationBuilder; import org.infinispan.client.hotrod.configuration.SaslQop; import org.infinispan.client.hotrod.impl.ConfigurationProperties; ConfigurationBuilder builder = new ConfigurationBuilder(); builder.addServer() .host(\"USDHOSTNAME\") .port(ConfigurationProperties.DEFAULT_HOTROD_PORT) .security().authentication() .username(\"username\") .password(\"changeme\") .realm(\"default\") .saslQop(SaslQop.AUTH) .saslMechanism(\"SCRAM-SHA-512\") .ssl() .sniHostName(\"USDSERVICE_HOSTNAME\") .trustStoreFileName(\"/var/run/secrets/kubernetes.io/serviceaccount/service-ca.crt\") .trustStoreType(\"pem\");",
"Connection infinispan.client.hotrod.server_list=USDHOSTNAME:USDPORT Authentication infinispan.client.hotrod.use_auth=true infinispan.client.hotrod.auth_username=developer infinispan.client.hotrod.auth_password=USDPASSWORD infinispan.client.hotrod.auth_server_name=USDCLUSTER_NAME infinispan.client.hotrod.sasl_properties.javax.security.sasl.qop=auth infinispan.client.hotrod.sasl_mechanism=SCRAM-SHA-512 Encryption infinispan.client.hotrod.sni_host_name=USDSERVICE_HOSTNAME infinispan.client.hotrod.trust_store_file_name=/var/run/secrets/kubernetes.io/serviceaccount/service-ca.crt infinispan.client.hotrod.trust_store_type=pem",
"import org.infinispan.client.hotrod.configuration.ClientIntelligence; import org.infinispan.client.hotrod.configuration.ConfigurationBuilder; import org.infinispan.client.hotrod.configuration.SaslQop; ConfigurationBuilder builder = new ConfigurationBuilder(); builder.addServer() .host(\"USDHOSTNAME\") .port(\"USDPORT\") .security().authentication() .username(\"username\") .password(\"changeme\") .realm(\"default\") .saslQop(SaslQop.AUTH) .saslMechanism(\"SCRAM-SHA-512\") .ssl() .sniHostName(\"USDSERVICE_HOSTNAME\") //Create a client trust store with tls.crt from your project. .trustStoreFileName(\"/path/to/truststore.pkcs12\") .trustStorePassword(\"trust_store_password\") .trustStoreType(\"PCKS12\"); builder.clientIntelligence(ClientIntelligence.BASIC);",
"Connection infinispan.client.hotrod.server_list=USDHOSTNAME:USDPORT Client intelligence infinispan.client.hotrod.client_intelligence=BASIC Authentication infinispan.client.hotrod.use_auth=true infinispan.client.hotrod.auth_username=developer infinispan.client.hotrod.auth_password=USDPASSWORD infinispan.client.hotrod.auth_server_name=USDCLUSTER_NAME infinispan.client.hotrod.sasl_properties.javax.security.sasl.qop=auth infinispan.client.hotrod.sasl_mechanism=SCRAM-SHA-512 Encryption infinispan.client.hotrod.sni_host_name=USDSERVICE_HOSTNAME Create a client trust store with tls.crt from your project. infinispan.client.hotrod.trust_store_file_name=/path/to/truststore.pkcs12 infinispan.client.hotrod.trust_store_password=trust_store_password infinispan.client.hotrod.trust_store_type=PCKS12",
"import org.infinispan.client.hotrod.configuration.ConfigurationBuilder; ConfigurationBuilder builder = new ConfigurationBuilder(); builder.security() .authentication() .saslMechanism(\"EXTERNAL\") .ssl() .keyStoreFileName(\"/path/to/keystore\") .keyStorePassword(\"keystorepassword\".toCharArray()) .keyStoreType(\"PCKS12\");",
"import org.infinispan.client.hotrod.DefaultTemplate; import org.infinispan.client.hotrod.RemoteCache; import org.infinispan.client.hotrod.RemoteCacheManager; builder.remoteCache(\"my-cache\") .templateName(DefaultTemplate.DIST_SYNC); builder.remoteCache(\"another-cache\") .configuration(\"<infinispan><cache-container><distributed-cache name=\\\"another-cache\\\"><encoding media-type=\\\"application/x-protostream\\\"/></distributed-cache></cache-container></infinispan>\"); try (RemoteCacheManager cacheManager = new RemoteCacheManager(builder.build())) { // Get a remote cache that does not exist. // Rather than return null, create the cache from a template. RemoteCache<String, String> cache = cacheManager.getCache(\"my-cache\"); // Store a value. cache.put(\"hello\", \"world\"); // Retrieve the value and print it. System.out.printf(\"key = %s\\n\", cache.get(\"hello\"));",
"import org.infinispan.client.hotrod.RemoteCacheManager; import org.infinispan.commons.configuration.XMLStringConfiguration; private void createCacheWithXMLConfiguration() { String cacheName = \"CacheWithXMLConfiguration\"; String xml = String.format(\"<distributed-cache name=\\\"%s\\\">\" + \"<encoding media-type=\\\"application/x-protostream\\\"/>\" + \"<locking isolation=\\\"READ_COMMITTED\\\"/>\" + \"<transaction mode=\\\"NON_XA\\\"/>\" + \"<expiration lifespan=\\\"60000\\\" interval=\\\"20000\\\"/>\" + \"</distributed-cache>\" , cacheName); manager.administration().getOrCreateCache(cacheName, new XMLStringConfiguration(xml)); System.out.println(\"Cache with configuration exists or is created.\"); }",
"Add cache configuration infinispan.client.hotrod.cache.my-cache.template_name=org.infinispan.DIST_SYNC infinispan.client.hotrod.cache.another-cache.configuration=<infinispan><cache-container><distributed-cache name=\\\"another-cache\\\"/></cache-container></infinispan> infinispan.client.hotrod.cache.my-other-cache.configuration_uri=file:/path/to/configuration.xml"
]
| https://docs.redhat.com/en/documentation/red_hat_data_grid/8.4/html/data_grid_operator_guide/connecting-clients |
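Section 21.5 above points HTTP clients at the /rest/v2 endpoint without showing a request. The calls below are a hedged curl sketch: the hostname, credentials, cache name and the -k/--digest flags are assumptions that depend on how your route, TLS and authentication are actually configured, not values from the chapter.

# List the cache names
curl -k --digest -u developer:changeme https://infinispan.example.com:11222/rest/v2/caches

# Write an entry into a cache named my-cache, then read it back
curl -k --digest -u developer:changeme -X POST -d 'world' \
  https://infinispan.example.com:11222/rest/v2/caches/my-cache/hello
curl -k --digest -u developer:changeme \
  https://infinispan.example.com:11222/rest/v2/caches/my-cache/hello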
Chapter 20. Managing User Authentication | Chapter 20. Managing User Authentication When a user connects to the Red Hat Directory Server, first the user is authenticated. Then, the directory grants access rights and resource limits to the user depending upon the identity established during authentication. This chapter describes tasks for managing users, including configuring the password and account lockout policy for the directory, denying groups of users access to the directory, and limiting system resources available to users depending upon their bind DNs. 20.1. Setting User Passwords You can use an entry to bind to the directory only if it has a userPassword attribute and if it has not been inactivated. Because user passwords are stored in the directory, the user passwords can be set or reset with any LDAP operation, such as using the ldapmodify utility. When an administrator changes the password of a user, Directory Server sets the pwdReset operational attribute in the user's entry to true . Applications can use this attribute to identify if a password of a user has been reset by an administrator. For information on creating and modifying directory entries, see Chapter 3, Managing Directory Entries . For information on inactivating user accounts, see Section 20.16, "Manually Inactivating Users and Roles" . Only password administrators, described in Section 20.2, "Setting Password Administrators" , and the root DN can add pre-hashed passwords. These users can also violate password policies. Warning When using a password administrator account or the Directory Manager (root DN) to set a password, password policies are bypassed and not verified. Do not use these accounts for regular user password management. Use them only to perform password administration tasks that require bypassing the password policies. | null | https://docs.redhat.com/en/documentation/red_hat_directory_server/11/html/administration_guide/user_account_management |
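The chapter above notes that userPassword can be set or reset with any LDAP operation such as ldapmodify, but shows no invocation. The sketch below is illustrative only: the server, entry DN and new password are hypothetical, and in line with the warning above it binds as the user being changed rather than as the root DN or a password administrator, so the password policy still applies.

cat > reset-password.ldif <<EOF
dn: uid=jsmith,ou=People,dc=example,dc=com
changetype: modify
replace: userPassword
userPassword: NewSecret123
EOF
# -W prompts for the current bind password; the LDIF above supplies the change
ldapmodify -H ldap://ds.example.com:389 -x -D "uid=jsmith,ou=People,dc=example,dc=com" -W -f reset-password.ldif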
Chapter 9. Troubleshooting Installation on 64-bit AMD, Intel, and ARM Systems | Chapter 9. Troubleshooting Installation on 64-bit AMD, Intel, and ARM Systems This chapter discusses some common installation problems and their solutions. For debugging purposes, Anaconda logs installation actions into files in the /tmp directory. These files are listed in the following table. Table 9.1. Log Files Generated During the Installation Log file Contents /tmp/anaconda.log general Anaconda messages /tmp/program.log all external programs run during the installation /tmp/storage.log extensive storage module information /tmp/packaging.log yum and rpm package installation messages /tmp/syslog hardware-related system messages If the installation fails, the messages from these files are consolidated into /tmp/anaconda-tb- identifier , where identifier is a random string. After successful installation, by default, these files will be copied to the installed system under the directory /var/log/anaconda/ . However, if installation is unsuccessful, or if the inst.nosave=all or inst.nosave=logs options are used when booting the installation system, these logs will only exist in the installation program's RAM disk. This means they are not saved permanently and will be lost once the system is powered down. To store them permanently, copy those files to another system on the network by using scp on the system running the installation program, or copy them to a mounted storage device (such as an USB flash drive). Details on how to transfer the log files over the network are below. Note that if you use an USB flash drive or other removable media, you should make sure to back up any data on it before starting the procedure. Procedure 9.1. Transferring Log Files Onto a USB Drive On the system you are installing, press Ctrl + Alt + F2 to access a shell prompt. You will be logged into a root account and you will have access to the installation program's temporary file system. Connect a USB flash drive to the system and execute the dmesg command. A log detailing all recent events will be displayed. At the bottom of this log, you will see a set of messages caused by the USB flash drive you just connected. It will look like a set of lines similar to the following: Note the name of the connected device - in the above example, it is sdb . Go to the /mnt directory and once there, create new directory which will serve as the mount target for the USB drive. The name of the directory does not matter; this example uses the name usb . Mount the USB flash drive onto the newly created directory. Note that in most cases, you do not want to mount the whole drive, but a partition on it. Therefore, do not use the name sdb - use the name of the partition you want to write the log files to. In this example, the name sdb1 is used. You can now verify that you mounted the correct device and partition by accessing it and listing its contents - the list should match what you expect to be on the drive. Copy the log files to the mounted device. Unmount the USB flash drive. If you get an error message saying that the target is busy, change your working directory to outside the mount (for example, / ). The log files from the installation are now saved on the USB flash drive. Procedure 9.2. Transferring Log Files Over the Network On the system you are installing, press Ctrl + Alt + F2 to access a shell prompt. You will be logged into a root account and you will have access to the installation program's temporary file system. 
Switch to the /tmp directory where the log files are located: Copy the log files onto another system on the network using the scp command: Replace user with a valid user name on the target system, address with the target system's address or host name, and path with the path to the directory you want to save the log files into. For example, if you want to log in as john to a system with an IP address of 192.168.0.122 and place the log files into the /home/john/logs/ directory on that system, the command will have the following form: When connecting to the target system for the first time, the SSH client asks you to confirm that the fingerprint of the remote system is correct and that you want to continue: Type yes and press Enter to continue. Then, provide a valid password when prompted. The files will start transferring to the specified directory on the target system. The log files from the installation are now permanently saved on the target system and available for review. 9.1. Trouble Beginning the Installation 9.1.1. System Does Not Boot When UEFI Secure Boot Is Enabled Beta releases of Red Hat Enterprise Linux 7 have their kernels signed with a special public key which is not recognized by standard UEFI Secure Boot implementations. This prevents the system from booting when the Secure Boot technology is enabled. To fix this issue, you must disable UEFI Secure Boot, install the system, and then import the Beta public key using the Machine Owner Key facility. See Section 5.9, "Using a Beta Release with UEFI Secure Boot" for instructions. 9.1.2. Problems with Booting into the Graphical Installation Systems with some video cards have trouble booting into the graphical installation program. If the installation program does not run using its default settings, it attempts to run in a lower resolution mode. If that still fails, the installation program attempts to run in text mode. There are several possible solutions to display issues, most of which involve specifying custom boot options. For more information, see Section 23.1, "Configuring the Installation System at the Boot Menu" . Use the basic graphics mode You can attempt to perform the installation using the basic graphics driver. To do this, either select Troubleshooting > Install Red Hat Enterprise Linux in basic graphics mode in the boot menu, or edit the installation program's boot options and append inst.xdriver=vesa at the end of the command line. Specify the display resolution manually If the installation program fails to detect your screen resolution, you can override the automatic detection and specify it manually. To do this, append the inst.resolution= x option at the boot menu, where x is your display's resolution (for example, 1024x768 ). Use an alternate video driver You can also attempt to specify a custom video driver, overriding the installation program's automatic detection. To specify a driver, use the inst.xdriver= x option, where x is the device driver you want to use (for example, nouveau ). Note If specifying a custom video driver solves your problem, you should report it as a bug at https://bugzilla.redhat.com under the anaconda component. Anaconda should be able to detect your hardware automatically and use the appropriate driver without your intervention. Perform the installation using VNC If the above options fail, you can use a separate system to access the graphical installation over the network, using the Virtual Network Computing (VNC) protocol. 
For details on installing using VNC, see Chapter 25, Using VNC . 9.1.3. Serial Console Not Detected In some cases, attempting to install in text mode using a serial console will result in no output on the console. This happens on systems which have a graphics card, but no monitor connected. If Anaconda detects a graphics card, it will attempt to use it for a display, even if no display is connected. If you want to perform a text-based installation on a serial console, use the inst.text and console= boot options. See Chapter 23, Boot Options for more details. | [
"[ 170.171135] sd 5:0:0:0: [sdb] Attached SCSI removable disk",
"mkdir usb",
"mount /dev/sdb1 /mnt/usb",
"cd /mnt/usb",
"ls",
"cp /tmp/*log /mnt/usb",
"umount /mnt/usb",
"cd /tmp",
"scp *log user @ address : path",
"scp *log [email protected]:/home/john/logs/",
"The authenticity of host '192.168.0.122 (192.168.0.122)' can't be established. ECDSA key fingerprint is a4:60:76:eb:b2:d0:aa:23:af:3d:59:5c:de:bb:c4:42. Are you sure you want to continue connecting (yes/no)?"
]
| https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/installation_guide/chap-troubleshooting-x86 |
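Sections 9.1.2 and 9.1.3 above name several boot options without showing where they go. As a rough illustration only, appended to the end of the installer's existing boot arguments (represented here by the ellipsis), they look like this; the resolution and serial console values are examples, not recommendations:

# Basic graphics driver and a fixed resolution
vmlinuz ... inst.xdriver=vesa inst.resolution=1024x768

# Text-mode installation output on the first serial port
vmlinuz ... inst.text console=ttyS0,115200n8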
Chapter 12. Switch back to the primary site | Chapter 12. Switch back to the primary site These procedures switch back to the primary site back after a failover or switchover to the secondary site. In a setup as outlined in Concepts for active-passive deployments together with the blueprints outlined in Building blocks active-passive deployments. 12.1. When to use this procedure These procedures bring the primary site back to operation when the secondary site is handling all the traffic. At the end of the chapter, the primary site is online again and handles the traffic. This procedure is necessary when the primary site has lost its state in Data Grid, a network partition occurred between the primary and the secondary site while the secondary site was active, or the replication was disabled as described in the Switch over to the secondary site chapter. If the data in Data Grid on both sites is still in sync, the procedure for Data Grid can be skipped. See the Multi-site deployments chapter for different operational procedures. 12.2. Procedures 12.2.1. Data Grid Cluster For the context of this chapter, Site-A is the primary site, recovering back to operation, and Site-B is the secondary site, running in production. After the Data Grid in the primary site is back online and has joined the cross-site channel (see Deploy Data Grid for HA with the Data Grid Operator #verifying-the-deployment on how to verify the Data Grid deployment), the state transfer must be manually started from the secondary site. After clearing the state in the primary site, it transfers the full state from the secondary site to the primary site, and it must be completed before the primary site can start handling incoming requests. Warning Transferring the full state may impact the Data Grid cluster perform by increasing the response time and/or resources usage. The first procedure is to delete any stale data from the primary site. Log in to the primary site. Shutdown Red Hat build of Keycloak. This action will clear all Red Hat build of Keycloak caches and prevents the state of Red Hat build of Keycloak from being out-of-sync with Data Grid. When deploying Red Hat build of Keycloak using the Red Hat build of Keycloak Operator, change the number of Red Hat build of Keycloak instances in the Red Hat build of Keycloak Custom Resource to 0. Connect into Data Grid Cluster using the Data Grid CLI tool: Command: oc -n keycloak exec -it pods/infinispan-0 -- ./bin/cli.sh --trustall --connect https://127.0.0.1:11222 It asks for the username and password for the Data Grid cluster. Those credentials are the one set in the Deploy Data Grid for HA with the Data Grid Operator chapter in the configuring credentials section. Output: Username: developer Password: [infinispan-0-29897@ISPN//containers/default]> Note The pod name depends on the cluster name defined in the Data Grid CR. The connection can be done with any pod in the Data Grid cluster. Disable the replication from primary site to the secondary site by running the following command. It prevents the clear request to reach the secondary site and delete all the correct cached data. Command: site take-offline --all-caches --site=site-b Output: { "offlineClientSessions" : "ok", "authenticationSessions" : "ok", "sessions" : "ok", "clientSessions" : "ok", "work" : "ok", "offlineSessions" : "ok", "loginFailures" : "ok", "actionTokens" : "ok" } Check the replication status is offline . 
Command: site status --all-caches --site=site-b Output: { "status" : "offline" } If the status is not offline , repeat the step. Warning Make sure the replication is offline otherwise the clear data will clear both sites. Clear all the cached data in primary site using the following commands: Command: clearcache actionTokens clearcache authenticationSessions clearcache clientSessions clearcache loginFailures clearcache offlineClientSessions clearcache offlineSessions clearcache sessions clearcache work These commands do not print any output. Re-enable the cross-site replication from primary site to the secondary site. Command: site bring-online --all-caches --site=site-b Output: { "offlineClientSessions" : "ok", "authenticationSessions" : "ok", "sessions" : "ok", "clientSessions" : "ok", "work" : "ok", "offlineSessions" : "ok", "loginFailures" : "ok", "actionTokens" : "ok" } Check the replication status is online . Command: site status --all-caches --site=site-b Output: { "status" : "online" } Now we are ready to transfer the state from the secondary site to the primary site. Log in into your secondary site. Connect into Data Grid Cluster using the Data Grid CLI tool: Command: oc -n keycloak exec -it pods/infinispan-0 -- ./bin/cli.sh --trustall --connect https://127.0.0.1:11222 It asks for the username and password for the Data Grid cluster. Those credentials are the one set in the Deploy Data Grid for HA with the Data Grid Operator chapter in the configuring credentials section. Output: Username: developer Password: [infinispan-0-29897@ISPN//containers/default]> Note The pod name depends on the cluster name defined in the Data Grid CR. The connection can be done with any pod in the Data Grid cluster. Trigger the state transfer from the secondary site to the primary site. Command: site push-site-state --all-caches --site=site-a Output: { "offlineClientSessions" : "ok", "authenticationSessions" : "ok", "sessions" : "ok", "clientSessions" : "ok", "work" : "ok", "offlineSessions" : "ok", "loginFailures" : "ok", "actionTokens" : "ok" } Check the replication status is online for all caches. Command: site status --all-caches --site=site-a Output: { "status" : "online" } Wait for the state transfer to complete by checking the output of push-site-status command for all caches. Command: site push-site-status --cache=actionTokens site push-site-status --cache=authenticationSessions site push-site-status --cache=clientSessions site push-site-status --cache=loginFailures site push-site-status --cache=offlineClientSessions site push-site-status --cache=offlineSessions site push-site-status --cache=sessions site push-site-status --cache=work Output: { "site-a" : "OK" } { "site-a" : "OK" } { "site-a" : "OK" } { "site-a" : "OK" } { "site-a" : "OK" } { "site-a" : "OK" } { "site-a" : "OK" } { "site-a" : "OK" } Check the table in this section for the Cross-Site Documentation for the possible status values. If an error is reported, repeat the state transfer for that specific cache. 
Command: site push-site-state --cache=<cache-name> --site=site-a Clear/reset the state transfer status with the following command Command: site clear-push-site-status --cache=actionTokens site clear-push-site-status --cache=authenticationSessions site clear-push-site-status --cache=clientSessions site clear-push-site-status --cache=loginFailures site clear-push-site-status --cache=offlineClientSessions site clear-push-site-status --cache=offlineSessions site clear-push-site-status --cache=sessions site clear-push-site-status --cache=work Output: "ok" "ok" "ok" "ok" "ok" "ok" "ok" "ok" Log in to the primary site. Start Red Hat build of Keycloak. When deploying Red Hat build of Keycloak using the Red Hat build of Keycloak Operator, change the number of Red Hat build of Keycloak instances in the Red Hat build of Keycloak Custom Resource to the original value. Both Data Grid clusters are in sync and the switchover from secondary back to the primary site can be performed. 12.2.2. AWS Aurora Database Assuming a Regional multi-AZ Aurora deployment, the current writer instance should be in the same region as the active Red Hat build of Keycloak cluster to avoid latencies and communication across availability zones. Switching the writer instance of Aurora will lead to a short downtime. The writer instance in the other site with a slightly longer latency might be acceptable for some deployments. Therefore, this situation might be deferred to a maintenance window or skipped depending on the circumstances of the deployment. To change the writer instance, run a failover. This change will make the database unavailable for a short time. Red Hat build of Keycloak will need to re-establish database connections. To fail over the writer instance to the other AZ, issue the following command: aws rds failover-db-cluster --db-cluster-identifier ... 12.2.3. Route53 If switching over to the secondary site has been triggered by changing the health endpoint, edit the health check in AWS to point to a correct endpoint ( health/live ). After some minutes, the clients will notice the change and traffic will gradually move over to the secondary site. 12.3. Further reading See Concepts to automate Data Grid CLI commands on how to automate Infinispan CLI commands. | [
"-n keycloak exec -it pods/infinispan-0 -- ./bin/cli.sh --trustall --connect https://127.0.0.1:11222",
"Username: developer Password: [infinispan-0-29897@ISPN//containers/default]>",
"site take-offline --all-caches --site=site-b",
"{ \"offlineClientSessions\" : \"ok\", \"authenticationSessions\" : \"ok\", \"sessions\" : \"ok\", \"clientSessions\" : \"ok\", \"work\" : \"ok\", \"offlineSessions\" : \"ok\", \"loginFailures\" : \"ok\", \"actionTokens\" : \"ok\" }",
"site status --all-caches --site=site-b",
"{ \"status\" : \"offline\" }",
"clearcache actionTokens clearcache authenticationSessions clearcache clientSessions clearcache loginFailures clearcache offlineClientSessions clearcache offlineSessions clearcache sessions clearcache work",
"site bring-online --all-caches --site=site-b",
"{ \"offlineClientSessions\" : \"ok\", \"authenticationSessions\" : \"ok\", \"sessions\" : \"ok\", \"clientSessions\" : \"ok\", \"work\" : \"ok\", \"offlineSessions\" : \"ok\", \"loginFailures\" : \"ok\", \"actionTokens\" : \"ok\" }",
"site status --all-caches --site=site-b",
"{ \"status\" : \"online\" }",
"-n keycloak exec -it pods/infinispan-0 -- ./bin/cli.sh --trustall --connect https://127.0.0.1:11222",
"Username: developer Password: [infinispan-0-29897@ISPN//containers/default]>",
"site push-site-state --all-caches --site=site-a",
"{ \"offlineClientSessions\" : \"ok\", \"authenticationSessions\" : \"ok\", \"sessions\" : \"ok\", \"clientSessions\" : \"ok\", \"work\" : \"ok\", \"offlineSessions\" : \"ok\", \"loginFailures\" : \"ok\", \"actionTokens\" : \"ok\" }",
"site status --all-caches --site=site-a",
"{ \"status\" : \"online\" }",
"site push-site-status --cache=actionTokens site push-site-status --cache=authenticationSessions site push-site-status --cache=clientSessions site push-site-status --cache=loginFailures site push-site-status --cache=offlineClientSessions site push-site-status --cache=offlineSessions site push-site-status --cache=sessions site push-site-status --cache=work",
"{ \"site-a\" : \"OK\" } { \"site-a\" : \"OK\" } { \"site-a\" : \"OK\" } { \"site-a\" : \"OK\" } { \"site-a\" : \"OK\" } { \"site-a\" : \"OK\" } { \"site-a\" : \"OK\" } { \"site-a\" : \"OK\" }",
"site push-site-state --cache=<cache-name> --site=site-a",
"site clear-push-site-status --cache=actionTokens site clear-push-site-status --cache=authenticationSessions site clear-push-site-status --cache=clientSessions site clear-push-site-status --cache=loginFailures site clear-push-site-status --cache=offlineClientSessions site clear-push-site-status --cache=offlineSessions site clear-push-site-status --cache=sessions site clear-push-site-status --cache=work",
"\"ok\" \"ok\" \"ok\" \"ok\" \"ok\" \"ok\" \"ok\" \"ok\"",
"aws rds failover-db-cluster --db-cluster-identifier"
]
| https://docs.redhat.com/en/documentation/red_hat_build_of_keycloak/24.0/html/high_availability_guide/operate-switch-back- |
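If the switchover was originally forced by editing the Route53 health check, the following AWS CLI sketch shows one way to point the health check back at the health/live endpoint described above. The health check ID is a placeholder that must be looked up in your own Route53 configuration; treat this as an illustration rather than a required step.
Command:
aws route53 update-health-check --health-check-id <health-check-id> --resource-path "/health/live"
You can then poll the check until it reports healthy again:
Command:
aws route53 get-health-check-status --health-check-id <health-check-id>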
Chapter 5. Managing Satellite with Ansible | Chapter 5. Managing Satellite with Ansible Satellite Ansible Collections is a set of Ansible modules that interact with the Satellite API. You can manage and automate many aspects of Satellite with Satellite Ansible collections. 5.1. Installing the Satellite Ansible modules Use this procedure to install the Satellite Ansible modules. Procedure Install the package using the following command: 5.2. Viewing the Satellite Ansible modules You can view the installed Satellite Ansible modules by running: Alternatively, you can see the complete list of Satellite Ansible modules and other related information at Red Hat Ansible Automation Platform . All modules are in the redhat.satellite namespace and can be referred to in the format redhat.satellite.<module_name> . For example, to display information about the activation_key module, enter the following command: | [
"satellite-maintain packages install ansible-collection-redhat-satellite",
"ansible-doc -l redhat.satellite",
"ansible-doc redhat.satellite.activation_key"
]
| https://docs.redhat.com/en/documentation/red_hat_satellite/6.16/html/administering_red_hat_satellite/managing-satellite-with-ansible |
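As a small usage sketch building on the ansible-doc examples above, the first command below prints a playbook-ready parameter snippet for the activation_key module, and the second confirms which version of the redhat.satellite collection is installed. Both are illustrative and assume the collection was installed with satellite-maintain as described in this chapter.
$ ansible-doc -s redhat.satellite.activation_key
$ ansible-galaxy collection list | grep redhat.satellite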
Appendix A. Revision History | Appendix A. Revision History Revision History Revision 7-6 Tue Oct 30 2018 Vladimir Slavik Release for Red Hat Enterprise Linux 7.6 GA. Revision 7-5 Tue Jan 09 2018 Vladimir Slavik Release for Red Hat Enterprise Linux 7.5 Beta. Revision 7-4 Wed Jul 26 2017 Vladimir Slavik Release for Red Hat Enterprise Linux 7.4. Revision 1-4 Wed Oct 19 2016 Robert Kratky Release for Red Hat Enterprise Linux 7.3. Revision 1-2 Thu Mar 10 2016 Robert Kratky Async release for Red Hat Enterprise Linux 7.2. Revision 1-2 Thu Nov 11 2015 Robert Kratky Release for Red Hat Enterprise Linux 7.2. | null | https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/systemtap_tapset_reference/appe-publican-revision_history |
Chapter 88. Kudu | Chapter 88. Kudu Since Camel 3.0 Only producer is supported The Kudu component supports storing and retrieving data from/to Apache Kudu , a free and open source column-oriented data store of the Apache Hadoop ecosystem. 88.1. Dependencies When using kudu with Red Hat build of Camel Spring Boot make sure to use the following Maven dependency to have support for auto configuration: <dependency> <groupId>org.apache.camel.springboot</groupId> <artifactId>camel-kudu-starter</artifactId> </dependency> 88.2. Prerequisites You must have a valid Kudu instance running. More information are available at Apache Kudu . 88.3. Configuring Options Camel components are configured on two separate levels: component level endpoint level 88.3.1. Configuring Component Options The component level is the highest level which holds general and common configurations that are inherited by the endpoints. For example a component may have security settings, credentials for authentication, urls for network connection and so forth. Some components only have a few options, and others may have many. Because components typically have pre configured defaults that are commonly used, then you may often only need to configure a few options on a component; or none at all. Configuring components can be done with the Component DSL , in a configuration file (application.properties|yaml), or directly with Java code. 88.3.2. Configuring Endpoint Options Where you find yourself configuring the most is on endpoints, as endpoints often have many options, which allows you to configure what you need the endpoint to do. The options are also categorized into whether the endpoint is used as consumer (from) or as a producer (to), or used for both. Configuring endpoints is most often done directly in the endpoint URI as path and query parameters. You can also use the Endpoint DSL as a type safe way of configuring endpoints. A good practice when configuring options is to use Property Placeholders , which allows to not hardcode urls, port numbers, sensitive information, and other settings. In other words placeholders allows to externalize the configuration from your code, and gives more flexibility and reuse. The following two sections lists all the options, firstly for the component followed by the endpoint. 88.4. Component Options The Kudu component supports 2 options, which are listed below. Name Description Default Type lazyStartProducer (producer) Whether the producer should be started lazy (on the first message). This allows CamelContext and routes to start in situations where a producer failing would cause the route to fail. With lazy startup, the startup failures can be handled during routing messages via Camel's routing error handlers. Note When the first message is processed, it may take some time to create and start the producer which will increase the total processing time. false boolean autowiredEnabled (advanced) Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc. true boolean 88.5. Endpoint Options The Kudu endpoint is configured using URI syntax: with the following path and query parameters: 88.5.1. Path Parameters (3 parameters) Name Description Default Type host (common) Host of the server to connect to. 
String port (common) Port of the server to connect to. String tableName (common) Table to connect to. String 88.5.2. Query Parameters (2 parameters) Name Description Default Type operation (producer) Operation to perform. Enum values: INSERT DELETE UPDATE UPSERT CREATE_TABLE SCAN KuduOperations lazyStartProducer (producer (advanced)) Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. false boolean 88.6. Message Headers The Kudu component supports 5 message header(s), which is/are listed below: Name Description Default Type CamelKuduSchema (producer) Constant: CAMEL_KUDU_SCHEMA The schema. Schema CamelKuduTableOptions (producer) Constant: CAMEL_KUDU_TABLE_OPTIONS The create table options. CreateTableOptions CamelKuduScanColumnNames (producer) Constant: CAMEL_KUDU_SCAN_COLUMN_NAMES The projected column names for scan operation. List CamelKuduScanPredicate (producer) Constant: CAMEL_KUDU_SCAN_PREDICATE The predicate for scan operation. KuduPredicate CamelKuduScanLimit (producer) Constant: CAMEL_KUDU_SCAN_LIMIT The limit on the number of rows for scan operation. long 88.7. Input Body formats 88.7.1. Insert, delete, update, and upsert The input body format has to be a java.util.Map<String, Object>. This map will represent a row of the table whose elements are columns, where the key is the column name and the value is the value of the column. 88.8. Output Body formats 88.8.1. Scan The output body format will be a java.util.List<java.util.Map<String, Object>>. Each element of the list will be a different row of the table. Each row is a Map<String, Object> whose elements will be each pair of column name and column value for that row. 88.9. Spring Boot Auto-Configuration The component supports 3 options, which are listed below. Name Description Default Type camel.component.kudu.autowired-enabled Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc. true Boolean camel.component.kudu.enabled Whether to enable auto configuration of the kudu component. This is enabled by default. Boolean camel.component.kudu.lazy-start-producer Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. false Boolean | [
"<dependency> <groupId>org.apache.camel.springboot</groupId> <artifactId>camel-kudu-starter</artifactId> </dependency>",
"kudu:host:port/tableName"
]
| https://docs.redhat.com/en/documentation/red_hat_build_of_apache_camel/4.8/html/red_hat_build_of_apache_camel_for_spring_boot_reference/csb-camel-kudu-component-starter |
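To make the URI syntax above concrete, a producer endpoint that inserts each incoming message body (a java.util.Map of column names to values) into a Kudu table could use a URI along the following lines. The host, port, and table name are illustrative only and are not taken from this document:
kudu:kudu-master:7051/my_table?operation=INSERT
The operation parameter accepts any of the enum values listed above, for example SCAN to read rows back as a java.util.List of maps.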
Chapter 8. Troubleshooting Builds | Chapter 8. Troubleshooting Builds The builder instances started by the build manager are ephemeral. This means that they will either get shut down by Red Hat Quay on timeouts or failure, or garbage collected by the control plane (EC2/K8s). In order to obtain the builds logs, you must do so while the builds are running. 8.1. DEBUG config flag The DEBUG flag can be set to true in order to prevent the builder instances from getting cleaned up after completion or failure. For example: EXECUTORS: - EXECUTOR: ec2 DEBUG: true ... - EXECUTOR: kubernetes DEBUG: true ... When set to true , the debug feature prevents the build nodes from shutting down after the quay-builder service is done or fails. It also prevents the build manager from cleaning up the instances by terminating EC2 instances or deleting Kubernetes jobs. This allows debugging builder node issues. Debugging should not be set in a production cycle. The lifetime service still exists; for example, the instance still shuts down after approximately two hours. When this happens, EC2 instances are terminated and Kubernetes jobs are completed. Enabling debug also affects the ALLOWED_WORKER_COUNT because the unterminated instances and jobs still count toward the total number of running workers. As a result, the existing builder workers must be manually deleted if ALLOWED_WORKER_COUNT is reached to be able to schedule new builds . 8.2. Troubleshooting OpenShift Container Platform and Kubernetes Builds Use the following procedure to troubleshooting OpenShift Container Platform Kubernetes Builds. Procedure Create a port forwarding tunnel between your local machine and a pod running with either an OpenShift Container Platform cluster or a Kubernetes cluster by entering the following command: USD oc port-forward <builder_pod> 9999:2222 Establish an SSH connection to the remote host using a specified SSH key and port, for example: USD ssh -i /path/to/ssh/key/set/in/ssh_authorized_keys -p 9999 core@localhost Obtain the quay-builder service logs by entering the following commands: USD systemctl status quay-builder USD journalctl -f -u quay-builder | [
"EXECUTORS: - EXECUTOR: ec2 DEBUG: true - EXECUTOR: kubernetes DEBUG: true",
"oc port-forward <builder_pod> 9999:2222",
"ssh -i /path/to/ssh/key/set/in/ssh_authorized_keys -p 9999 core@localhost",
"systemctl status quay-builder",
"journalctl -f -u quay-builder"
]
| https://docs.redhat.com/en/documentation/red_hat_quay/3.13/html/builders_and_image_automation/troubleshooting-builds |
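If you are not sure which pod to use for <builder_pod> in the port-forward command, you can first list the pods in the namespace where your builds run. The namespace name below is only an example and depends on how your environment was configured:
$ oc get pods -n <builder_namespace>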
Red Hat Quay architecture | Red Hat Quay architecture Red Hat Quay 3.10 Red Hat Quay Architecture Red Hat OpenShift Documentation Team | null | https://docs.redhat.com/en/documentation/red_hat_quay/3.10/html/red_hat_quay_architecture/index |
Chapter 6. Upgrading an overcloud with director-deployed Ceph deployments | Chapter 6. Upgrading an overcloud with director-deployed Ceph deployments If your environment includes director-deployed Red Hat Ceph Storage deployments with or without hyperconverged infrastructure (HCI) nodes, you must upgrade your deployments to Red Hat Ceph Storage 5. With an upgrade to version 5, cephadm now manages Red Hat Ceph Storage instead of ceph-ansible . Note If you are using the Red Hat Ceph Storage Object Gateway (RGW), ensure that all RGW pools have the application label rgw as described in Why are the RGW services crashing after running the cephadm adoption playbook? . Implementing this configuration change addresses a common issue encountered when upgrading from Red Hat Ceph Storage Release 4 to 5. 6.1. Installing ceph-ansible If you deployed Red Hat Ceph Storage using director, you must complete this procedure. The ceph-ansible package is required to upgrade Red Hat Ceph Storage with Red Hat OpenStack Platform. Procedure Enable the Ceph 5 Tools repository: Install the ceph-ansible package: 6.2. Downloading Red Hat Ceph Storage containers to the undercloud from Satellite If the Red Hat Ceph Storage container image is hosted on a Red Hat Satellite Server, then you must download a copy of the image to the undercloud before starting the Red Hat Ceph Storage upgrade using Red Hat Satellite. Prerequisite The required Red Hat Ceph Storage container image is hosted on the Satellite Server. Procedure Log in to the undercloud node as the stack user. Download the Red Hat Ceph Storage container image from the Satellite Server: Replace <ceph_image_file> with the Red Hat Ceph Storage container image file hosted on the Satellite Server. The following is an example of this command: 6.3. Upgrading to Red Hat Ceph Storage 5 Upgrade the following nodes from Red Hat Ceph Storage version 4 to version 5: Red Hat Ceph Storage nodes Hyperconverged infrastructure (HCI) nodes, which contain combined Compute and Ceph OSD services For information about the duration and impact of this upgrade procedure, see Upgrade duration and impact . Note Red Hat Ceph Storage 5 uses Prometheus v4.10, which has the following known issue: If you enable Red Hat Ceph Storage dashboard, two data sources are configured on the dashboard. For more information about this known issue, see BZ#2054852 . Red Hat Ceph Storage 6 uses Prometheus v4.12, which does not include this known issue. Red Hat recommends upgrading from Red Hat Ceph Storage 5 to Red Hat Ceph Storage 6 after the upgrade from Red Hat OpenStack Platform (RHOSP) 16.2 to 17.1 is complete. To upgrade from Red Hat Ceph Storage version 5 to version 6, begin with one of the following procedures for your environment: Director-deployed Red Hat Ceph Storage environments: Updating the cephadm client External Red Hat Ceph Storage cluster environments: Updating the Red Hat Ceph Storage container image Procedure Log in to the undercloud host as the stack user. Source the stackrc undercloud credentials file: Run the Red Hat Ceph Storage external upgrade process with the ceph tag: Replace <stack> with the name of your stack. If you are running this command at a DCN deployed site, add the value skip-tag cleanup_cephansible to the provided comma-separated list of values for the --skip-tags parameter. Run the ceph versions command to confirm all Red Hat Ceph Storage daemons have been upgraded to version 5. 
This command is available in the ceph monitor container that is hosted by default on the Controller node. Important The command in the previous step runs the ceph-ansible rolling_update.yaml playbook to update the cluster from version 4 to 5. It is important to confirm that all daemons have been updated before proceeding with this procedure. The following example demonstrates the use and output of this command. As demonstrated in the example, all daemons in your deployment should show a package version of 16.2.* and the keyword pacific . Note The output of the command sudo podman ps | grep ceph on any server hosting Red Hat Ceph Storage should return a version 5 container. Create the ceph-admin user and distribute the appropriate keyrings: Update the packages on the Red Hat Ceph Storage nodes: If you are running this command at a DCN deployed site, add the value skip-tag cleanup_cephansible to the provided comma-separated list of values for the --skip-tags parameter. Note By default, the Ceph Monitor service (CephMon) runs on the Controller nodes unless you have used the composable roles feature to host them elsewhere. This command includes the ceph_mon tag, which also updates the packages on the nodes hosting the Ceph Monitor service (the Controller nodes by default). Configure the Red Hat Ceph Storage nodes to use cephadm : If you are running this command at a DCN deployed site, add the value skip-tag cleanup_cephansible to the provided comma-separated list of values for the --skip-tags parameter. Run the ceph -s command to confirm that all processes are now managed by the Red Hat Ceph Storage orchestrator. This command is available in the ceph monitor container that is hosted by default on the Controller node. Important The command in the previous step runs the ceph-ansible cephadm-adopt.yaml playbook to move future management of the cluster from ceph-ansible to cephadm and the Red Hat Ceph Storage orchestrator. It is important to confirm that all processes are now managed by the orchestrator before proceeding with this procedure. The following example demonstrates the use and output of this command. As demonstrated in this example, there are 63 daemons that are not managed by cephadm . This indicates that there was a problem running the ceph-ansible cephadm-adopt.yml playbook. Contact Red Hat Ceph Storage support to troubleshoot these errors before proceeding with the upgrade. When the adoption process has been completed successfully, there should not be any warning about stray daemons not managed by cephadm . Modify the overcloud_upgrade_prepare.sh file to replace the ceph-ansible file with a cephadm heat environment file: Important Do not include ceph-ansible environment or deployment files, for example, environments/ceph-ansible/ceph-ansible.yaml or deployment/ceph-ansible/ceph-grafana.yaml , in openstack deployment commands such as openstack overcloud upgrade prepare and openstack overcloud deploy . For more information about replacing ceph-ansible environment and deployment files with cephadm files, see Implications of upgrading to Red Hat Ceph Storage 5 . Note This example uses the environments/cephadm/cephadm-rbd-only.yaml file because RGW is not deployed. If you plan to deploy RGW, use environments/cephadm/cephadm.yaml after you finish upgrading your RHOSP environment, and then run a stack update. Modify the overcloud_upgrade_prepare.sh file to remove the following environment file if you added it earlier when you ran the overcloud upgrade preparation: Save the file.
Run the upgrade preparation command: If your deployment includes HCI nodes, create a temporary hci.conf file in a cephadm container of a Controller node: Log in to a Controller node: Replace <controller_ip> with the IP address of the Controller node. Retrieve a cephadm shell from the Controller node: Example In the cephadm shell, create a temporary hci.conf file: Example Apply the configuration: Example For more information about adjusting the configuration of your HCI deployment, see Ceph configuration overrides for HCI in Deploying a hyperconverged infrastructure . Important You must upgrade the operating system on all HCI nodes to RHEL 9. For more information on upgrading Compute and HCI nodes, see Upgrading Compute nodes to RHEL 9.2 . Important If Red Hat Ceph Storage Rados Gateway (RGW) is used for object storage, complete the steps in Ceph config overrides set for the RGWs on the RHCS 4.x does not get reflected after the Upgrade to RHCS 5.x to ensure your Red Hat Ceph Storage 4 configuration is reflected completely in Red Hat Ceph Storage 5. Important If the Red Hat Ceph Storage Dashboard is installed, complete the steps in After FFU 16.2 to 17.1, Ceph Grafana dashboard failed to start due to incorrect dashboard configuration to ensure it is properly configured. 6.4. Implications of upgrading to Red Hat Ceph Storage 5 The Red Hat Ceph Storage cluster is now upgraded to version 5. This has the following implications: You no longer use ceph-ansible to manage Red Hat Ceph Storage. Instead, the Ceph Orchestrator manages the Red Hat Ceph Storage cluster. For more information about the Ceph Orchestrator, see The Ceph Operations Guide . You no longer need to perform stack updates to make changes to the Red Hat Ceph Storage cluster in most cases. Instead, you can run day two Red Hat Ceph Storage operations directly on the cluster as described in The Ceph Operations Guide . You can also scale Red Hat Ceph Storage cluster nodes up or down as described in Scaling the Ceph Storage cluster in Deploying Red Hat Ceph Storage and Red Hat OpenStack Platform together with director . You can inspect the Red Hat Ceph Storage cluster's health. For more information about monitoring your cluster's health, see Monitoring Red Hat Ceph Storage nodes in Deploying Red Hat Ceph Storage and Red Hat OpenStack Platform together with director . Do not include environment files or deployment files, for example, environments/ceph-ansible/ceph-ansible.yaml or deployment/ceph-ansible/ceph-grafana.yaml , in openstack deployment commands such as openstack overcloud upgrade prepare and openstack overcloud deploy . If your deployment includes ceph-ansible environment or deployment files, replace them with one of the following options: Red Hat Ceph Storage deployment Original ceph-ansible file Cephadm file replacement Ceph RADOS Block Device (RBD) only environments/ceph-ansible/ceph-ansible.yaml environments/cephadm/cephadm-rbd-only.yaml RBD and the Ceph Object Gateway (RGW) environments/ceph-ansible/ceph-rgw.yaml environments/cephadm/cephadm.yaml Ceph Dashboard environments/ceph-ansible/ceph-dashboard.yaml Respective file in environments/cephadm/ Ceph MDS environments/ceph-ansible/ceph-mds.yaml Respective file in environments/cephadm/ | [
"[stack@director ~]USD sudo subscription-manager repos --enable=rhceph-5-tools-for-rhel-8-x86_64-rpms",
"[stack@director ~]USD sudo dnf install -y ceph-ansible",
"sudo podman pull <ceph_image_file>",
"sudo podman pull satellite.example.com/container-images-osp-17_1-rhceph-5-rhel8:latest",
"source ~/stackrc",
"openstack overcloud external-upgrade run --skip-tags \"ceph_ansible_remote_tmp\" --stack <stack> --tags ceph,facts 2>&1",
"sudo podman exec ceph-mon-USD(hostname -f) ceph versions { \"mon\": { \"ceph version 16.2.10-248.el8cp (0edb63afd9bd3edb333364f2e0031b77e62f4896) pacific (stable)\": 3 }, \"mgr\": { \"ceph version 16.2.10-248.el8cp (0edb63afd9bd3edb333364f2e0031b77e62f4896) pacific (stable)\": 3 }, \"osd\": { \"ceph version 16.2.10-248.el8cp (0edb63afd9bd3edb333364f2e0031b77e62f4896) pacific (stable)\": 180 }, \"mds\": {}, \"rgw\": { \"ceph version 16.2.10-248.el8cp (0edb63afd9bd3edb333364f2e0031b77e62f4896) pacific (stable)\": 3 }, \"overall\": { \"ceph version 16.2.10-248.el8cp (0edb63afd9bd3edb333364f2e0031b77e62f4896) pacific (stable)\": 189 } }",
"ANSIBLE_LOG_PATH=/home/stack/cephadm_enable_user_key.log ANSIBLE_HOST_KEY_CHECKING=false ansible-playbook -i /home/stack/overcloud-deploy/<stack>/config-download/<stack>/tripleo-ansible-inventory.yaml -b -e ansible_python_interpreter=/usr/libexec/platform-python /usr/share/ansible/tripleo-playbooks/ceph-admin-user-playbook.yml -e tripleo_admin_user=ceph-admin -e distribute_private_key=true --limit Undercloud,ceph_mon,ceph_mgr,ceph_rgw,ceph_mds,ceph_nfs,ceph_grafana,ceph_osd",
"openstack overcloud upgrade run --stack <stack> --skip-tags ceph_ansible_remote_tmp --tags setup_packages --limit Undercloud,ceph_mon,ceph_mgr,ceph_rgw,ceph_mds,ceph_nfs,ceph_grafana,ceph_osd --playbook /home/stack/overcloud-deploy/<stack>/config-download/<stack>/upgrade_steps_playbook.yaml 2>&1",
"openstack overcloud external-upgrade run --skip-tags ceph_ansible_remote_tmp --stack <stack> --tags cephadm_adopt 2>&1",
"sudo cephadm shell -- ceph -s cluster: id: f5a40da5-6d88-4315-9bb3-6b16df51d765 health: HEALTH_WARN 63 stray daemon(s) not managed by cephadm",
"#!/bin/bash openstack overcloud upgrade prepare --yes --timeout 460 --templates /usr/share/openstack-tripleo-heat-templates --ntp-server 192.168.24.1 --stack <stack> -r /home/stack/roles_data.yaml -e /home/stack/templates/internal.yaml ... -e /usr/share/openstack-tripleo-heat-templates/environments/cephadm/cephadm-rbd-only.yaml -e ~/containers-prepare-parameter.yaml",
"-e /usr/share/openstack-tripleo-heat-templates/environments/ceph-ansible/manila-cephfsganesha-config.yaml",
"source stackrc chmod 755 /home/stack/overcloud_upgrade_prepare.sh sh /home/stack/overcloud_upgrade_prepare.sh",
"ssh cloud-admin@<controller_ip>",
"[cloud-admin@controller-0 ~]USD sudo cephadm shell",
"cat <<EOF > hci.conf [osd] osd_memory_target_autotune = true osd_numa_auto_affinity = true [mgr] mgr/cephadm/autotune_memory_target_ratio = 0.2 EOF ...",
"ceph config assimilate-conf -i hci.conf"
]
| https://docs.redhat.com/en/documentation/red_hat_openstack_platform/17.1/html/framework_for_upgrades_16.2_to_17.1/upgrading-an-overcloud-with-director-deployed-ceph-deployments_preparing-overcloud |
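As an optional sanity check after the adoption, you can also list the services and daemons that the orchestrator now manages. The following commands are a sketch and can be run from a Controller node; the output varies by deployment:
$ sudo cephadm shell -- ceph orch ls
$ sudo cephadm shell -- ceph orch ps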
Chapter 5. Deploying standalone Multicloud Object Gateway | Chapter 5. Deploying standalone Multicloud Object Gateway Deploying only the Multicloud Object Gateway component with OpenShift Data Foundation provides flexibility in deployment and helps to reduce resource consumption. Use this section to deploy only the standalone Multicloud Object Gateway component, which involves the following steps: * Installing the Local Storage Operator * Installing the Red Hat OpenShift Data Foundation Operator * Creating the standalone Multicloud Object Gateway | null | https://docs.redhat.com/en/documentation/red_hat_openshift_data_foundation/4.16/html/deploying_openshift_data_foundation_using_ibm_z/deploy-standalone-multicloud-object-gateway-ibm-z
Making open source more inclusive | Making open source more inclusive Red Hat is committed to replacing problematic language in our code, documentation, and web properties. We are beginning with these four terms: master, slave, blacklist, and whitelist. Because of the enormity of this endeavor, these changes will be implemented gradually over several upcoming releases. For more details, see our CTO Chris Wright's message . | null | https://docs.redhat.com/en/documentation/red_hat_process_automation_manager/7.13/html/getting_started_with_red_hat_process_automation_manager/snip-conscious-language_getting-started |
probe::socket.close.return | probe::socket.close.return Name probe::socket.close.return - Return from closing a socket Synopsis Values name Name of this probe Context The requester (user process or kernel) Description Fires at the conclusion of closing a socket. | [
"socket.close.return"
]
| https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/systemtap_tapset_reference/api-socket-close-return |
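A minimal usage sketch: the following one-liner prints the probe name and the name of the process that closed the socket each time the probe fires. It assumes that SystemTap and the matching kernel debuginfo packages are installed; the output format is arbitrary.
stap -e 'probe socket.close.return { printf("%s by %s\n", name, execname()) }'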
18.2. Mounting a File System | 18.2. Mounting a File System To attach a certain file system, use the mount command in the following form: The device can be identified by a full path to a block device (for example, " /dev/sda3 " ), a universally unique identifier ( UUID ; for example, " UUID=34795a28-ca6d-4fd8-a347-73671d0c19cb " ), or a volume label (for example, " LABEL=home " ). Note that while a file system is mounted, the original content of the directory is not accessible. Important Linux does not prevent a user from mounting a file system to a directory with a file system already attached to it. To determine whether a particular directory serves as a mount point, run the findmnt utility with the directory as its argument and verify the exit code: If no file system is attached to the directory, the above command returns 1 . When the mount command is run without all required information (that is, without the device name, the target directory, or the file system type), it reads the content of the /etc/fstab configuration file to see if the given file system is listed. This file contains a list of device names and the directories in which the selected file systems should be mounted, as well as the file system type and mount options. Because of this, when mounting a file system that is specified in this file, you can use one of the following variants of the command: Note that permissions are required to mount the file systems unless the command is run as root (see Section 18.2.2, "Specifying the Mount Options" ). Note To determine the UUID and, if the device uses it, the label of a particular device, use the blkid command in the following form: For example, to display information about /dev/sda3 , type: 18.2.1. Specifying the File System Type In most cases, mount detects the file system automatically. However, there are certain file systems, such as NFS (Network File System) or CIFS (Common Internet File System), that are not recognized, and need to be specified manually. To specify the file system type, use the mount command in the following form: Table 18.1, "Common File System Types" provides a list of common file system types that can be used with the mount command. For a complete list of all available file system types, consult the relevant manual page as referred to in Section 18.4.1, "Manual Page Documentation" . Table 18.1. Common File System Types Type Description ext2 The ext2 file system. ext3 The ext3 file system. ext4 The ext4 file system. iso9660 The ISO 9660 file system. It is commonly used by optical media, typically CDs. nfs The NFS file system. It is commonly used to access files over the network. nfs4 The NFSv4 file system. It is commonly used to access files over the network. udf The UDF file system. It is commonly used by optical media, typically DVDs. vfat The FAT file system. It is commonly used on machines that are running the Windows operating system, and on certain digital media such as USB flash drives or floppy disks. See Example 18.2, "Mounting a USB Flash Drive" for an example usage. Example 18.2. Mounting a USB Flash Drive Older USB flash drives often use the FAT file system. Assuming that such drive uses the /dev/sdc1 device and that the /media/flashdisk/ directory exists, mount it to this directory by typing the following at a shell prompt as root : | [
"mount [ option ... ] device directory",
"findmnt directory ; echo USD?",
"mount [ option ... ] directory mount [ option ... ] device",
"blkid device",
"~]# blkid /dev/sda3 /dev/sda3: LABEL=\"home\" UUID=\"34795a28-ca6d-4fd8-a347-73671d0c19cb\" TYPE=\"ext3\"",
"mount -t type device directory",
"~]# mount -t vfat /dev/sdc1 /media/flashdisk"
]
| https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/storage_administration_guide/sect-Using_the_mount_Command-Mounting |
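Building on the blkid output shown above, the same partition can also be attached by its UUID or label instead of the device path, which is more robust when device names change between boots. The target directory below is illustrative:
~]# mount UUID="34795a28-ca6d-4fd8-a347-73671d0c19cb" /home
~]# mount LABEL=home /home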
C.4. Other Restrictions | C.4. Other Restrictions For the list of all other restrictions and issues affecting virtualization read the Red Hat Enterprise Linux 7 Release Notes . The Red Hat Enterprise Linux 7 Release Notes cover the present new features, known issues, and restrictions as they are updated or discovered. | null | https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/virtualization_deployment_and_administration_guide/sect-virtualization_restrictions-other_restrictions |
Chapter 3. Event-driven APIs | Chapter 3. Event-driven APIs Many of the APIs provided with AMQ Clients are asynchronous, event-driven APIs. These include the C++, JavaScript, Python, and Ruby APIs. These APIs work by executing application event-handling functions in response to network activity. The library monitors network I/O and fires events. The event handlers run sequentially on the main library thread. Because the event handlers run on the main library thread, the handler code must not contain any long-running blocking operations. Blocking in an event handler blocks all library execution. If you need to execute a long blocking operation, you must call it on a separate thread. The event-driven APIs include cross-thread communication facilities to support coordination between the library thread and application threads. Avoid blocking in event handlers Long-running blocking calls in event handlers stop all library execution, preventing the library from handling other events and performing periodic tasks. Always start long-running blocking procedures in a separate application thread. | null | https://docs.redhat.com/en/documentation/red_hat_amq/2021.q1/html/amq_clients_overview/event_driven_apis |
Chapter 10. Volume Snapshots | Chapter 10. Volume Snapshots A volume snapshot is the state of the storage volume in a cluster at a particular point in time. These snapshots help to use storage more efficiently by not having to make a full copy each time and can be used as building blocks for developing an application. You can create multiple snapshots of the same persistent volume claim (PVC). For CephFS, you can create up to 100 snapshots per PVC. For RADOS Block Device (RBD), you can create up to 512 snapshots per PVC. Note You cannot schedule periodic creation of snapshots. 10.1. Creating volume snapshots You can create a volume snapshot either from the Persistent Volume Claim (PVC) page or the Volume Snapshots page. Prerequisites PVC must be in Bound state and must not be in use. Note OpenShift Container Storage only provides crash consistency for a volume snapshot of a PVC if a pod is using it. For application consistency, be sure to first tear down a running pod to ensure consistent snapshots or use any quiesce mechanism provided by the application to ensure it. Procedure From the Persistent Volume Claims page Click Storage Persistent Volume Claims from the OpenShift Web Console. To create a volume snapshot, do one of the following: Beside the desired PVC, click Action menu (...) Create Snapshot . Click on the PVC for which you want to create the snapshot and click Actions Create Snapshot . Enter a Name for the volume snapshot. Choose the Snapshot Class from the drop-down list. Click Create . You will be redirected to the Details page of the volume snapshot that is created. From the Volume Snapshots page Click Storage Volume Snapshots from the OpenShift Web Console. In the Volume Snapshots page, click Create Volume Snapshot . Choose the required Project from the drop-down list. Choose the Persistent Volume Claim from the drop-down list. Enter a Name for the snapshot. Choose the Snapshot Class from the drop-down list. Click Create . You will be redirected to the Details page of the volume snapshot that is created. Verification steps Go to the Details page of the PVC and click the Volume Snapshots tab to see the list of volume snapshots. Verify that the new volume snapshot is listed. Click Storage Volume Snapshots from the OpenShift Web Console. Verify that the new volume snapshot is listed. Wait for the volume snapshot to be in Ready state. 10.2. Restoring volume snapshots When you restore a volume snapshot, a new Persistent Volume Claim (PVC) gets created. The restored PVC is independent of the volume snapshot and the parent PVC. You can restore a volume snapshot from either the Persistent Volume Claim page or the Volume Snapshots page. Procedure From the Persistent Volume Claims page You can restore volume snapshot from the Persistent Volume Claims page only if the parent PVC is present. Click Storage Persistent Volume Claims from the OpenShift Web Console. Click on the PVC name which has the volume snapshot that needs to be restored as a new PVC. In the Volume Snapshots tab, beside the desired volume snapshot, click Action menu (...) Restore as new PVC . Enter a name for the new PVC. Select the Storage Class name. Note (For Rados Block Device (RBD)) You must select a storage class with the same pool as that of the parent PVC. Click Restore . You will be redirected to the new PVC details page. From the Volume Snapshots page Click Storage Volume Snapshots from the OpenShift Web Console. Beside the desired volume snapshot click Action Menu (...) Restore as new PVC . 
Enter a name for the new PVC. Select the Storage Class name. Note (For Rados Block Device (RBD)) You must select a storage class with the same pool as that of the parent PVC. Click Restore . You will be redirected to the new PVC details page. Note When you restore volume snapshots, the PVCs are created with the access mode of the parent PVC only if the parent PVC exists. Otherwise, the PVCs are created only with the ReadWriteOnce (RWO) access mode. Currently, you cannot specify the access mode using the OpenShift Web Console. However, you can specify the access mode from the CLI by using YAML. For more information, see Restoring a volume snapshot . Verification steps Click Storage Persistent Volume Claims from the OpenShift Web Console and confirm that the new PVC is listed in the Persistent Volume Claims page. Wait for the new PVC to reach Bound state. 10.3. Deleting volume snapshots Prerequisites To delete a volume snapshot, the volume snapshot class used by that volume snapshot must be present. Procedure From the Persistent Volume Claims page Click Storage Persistent Volume Claims from the OpenShift Web Console. Click on the PVC name which has the volume snapshot that needs to be deleted. In the Volume Snapshots tab, beside the desired volume snapshot, click Action menu (...) Delete Volume Snapshot . From the Volume Snapshots page Click Storage Volume Snapshots from the OpenShift Web Console. In the Volume Snapshots page, beside the desired volume snapshot, click Action menu (...) Delete Volume Snapshot . Verification steps Ensure that the deleted volume snapshot is not present in the Volume Snapshots tab of the PVC details page. Click Storage Volume Snapshots and ensure that the deleted volume snapshot is not listed. | null | https://docs.redhat.com/en/documentation/red_hat_openstack_platform/16.0/html/deploying_and_managing_openshift_container_storage_using_red_hat_openstack_platform/volume-snapshots_osp
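Although the steps in this chapter use the OpenShift Web Console, the same verification can be scripted from the command line. For example, the following commands list the volume snapshots in a project and print whether a particular snapshot is ready to use; the project and snapshot names are placeholders:
$ oc get volumesnapshot -n <project>
$ oc get volumesnapshot <snapshot_name> -n <project> -o jsonpath='{.status.readyToUse}'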
Chapter 17. KubeAPIServer [operator.openshift.io/v1] | Chapter 17. KubeAPIServer [operator.openshift.io/v1] Description KubeAPIServer provides information to configure an operator to manage kube-apiserver. Compatibility level 1: Stable within a major release for a minimum of 12 months or 3 minor releases (whichever is longer). Type object Required spec 17.1. Specification Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ObjectMeta Standard object's metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata spec object spec is the specification of the desired behavior of the Kubernetes API Server status object status is the most recently observed status of the Kubernetes API Server 17.1.1. .spec Description spec is the specification of the desired behavior of the Kubernetes API Server Type object Property Type Description failedRevisionLimit integer failedRevisionLimit is the number of failed static pod installer revisions to keep on disk and in the api -1 = unlimited, 0 or unset = 5 (default) forceRedeploymentReason string forceRedeploymentReason can be used to force the redeployment of the operand by providing a unique string. This provides a mechanism to kick a previously failed deployment and provide a reason why you think it will work this time instead of failing again on the same config. logLevel string logLevel is an intent based logging for an overall component. It does not give fine grained control, but it is a simple way to manage coarse grained logging choices that operators have to interpret for their operands. Valid values are: "Normal", "Debug", "Trace", "TraceAll". Defaults to "Normal". managementState string managementState indicates whether and how the operator should manage the component observedConfig `` observedConfig holds a sparse config that controller has observed from the cluster state. It exists in spec because it is an input to the level for the operator operatorLogLevel string operatorLogLevel is an intent based logging for the operator itself. It does not give fine grained control, but it is a simple way to manage coarse grained logging choices that operators have to interpret for themselves. Valid values are: "Normal", "Debug", "Trace", "TraceAll". Defaults to "Normal". succeededRevisionLimit integer succeededRevisionLimit is the number of successful static pod installer revisions to keep on disk and in the api -1 = unlimited, 0 or unset = 5 (default) unsupportedConfigOverrides `` unsupportedConfigOverrides overrides the final configuration that was computed by the operator. Red Hat does not support the use of this field. Misuse of this field could lead to unexpected behavior or conflict with other configuration options. Seek guidance from the Red Hat support before using this field. Use of this property blocks cluster upgrades, it must be removed before upgrading your cluster. 17.1.2. 
.status Description status is the most recently observed status of the Kubernetes API Server Type object Property Type Description conditions array conditions is a list of conditions and their status conditions[] object OperatorCondition is just the standard condition fields. generations array generations are used to determine when an item needs to be reconciled or has changed in a way that needs a reaction. generations[] object GenerationStatus keeps track of the generation for a given resource so that decisions about forced updates can be made. latestAvailableRevision integer latestAvailableRevision is the deploymentID of the most recent deployment latestAvailableRevisionReason string latestAvailableRevisionReason describe the detailed reason for the most recent deployment nodeStatuses array nodeStatuses track the deployment values and errors across individual nodes nodeStatuses[] object NodeStatus provides information about the current state of a particular node managed by this operator. observedGeneration integer observedGeneration is the last generation change you've dealt with readyReplicas integer readyReplicas indicates how many replicas are ready and at the desired state serviceAccountIssuers array serviceAccountIssuers tracks history of used service account issuers. The item without expiration time represents the currently used service account issuer. The other items represents service account issuers that were used previously and are still being trusted. The default expiration for the items is set by the platform and it defaults to 24h. see: https://kubernetes.io/docs/tasks/configure-pod-container/configure-service-account/#service-account-token-volume-projection serviceAccountIssuers[] object version string version is the level this availability applies to 17.1.3. .status.conditions Description conditions is a list of conditions and their status Type array 17.1.4. .status.conditions[] Description OperatorCondition is just the standard condition fields. Type object Required lastTransitionTime status type Property Type Description lastTransitionTime string lastTransitionTime is the last time the condition transitioned from one status to another. This should be when the underlying condition changed. If that is not known, then using the time when the API field changed is acceptable. message string reason string status string status of the condition, one of True, False, Unknown. type string type of condition in CamelCase or in foo.example.com/CamelCase. 17.1.5. .status.generations Description generations are used to determine when an item needs to be reconciled or has changed in a way that needs a reaction. Type array 17.1.6. .status.generations[] Description GenerationStatus keeps track of the generation for a given resource so that decisions about forced updates can be made. Type object Required group name namespace resource Property Type Description group string group is the group of the thing you're tracking hash string hash is an optional field set for resources without generation that are content sensitive like secrets and configmaps lastGeneration integer lastGeneration is the last generation of the workload controller involved name string name is the name of the thing you're tracking namespace string namespace is where the thing you're tracking is resource string resource is the resource type of the thing you're tracking 17.1.7. .status.nodeStatuses Description nodeStatuses track the deployment values and errors across individual nodes Type array 17.1.8. 
.status.nodeStatuses[] Description NodeStatus provides information about the current state of a particular node managed by this operator. Type object Required nodeName Property Type Description currentRevision integer currentRevision is the generation of the most recently successful deployment lastFailedCount integer lastFailedCount is how often the installer pod of the last failed revision failed. lastFailedReason string lastFailedReason is a machine readable failure reason string. lastFailedRevision integer lastFailedRevision is the generation of the deployment we tried and failed to deploy. lastFailedRevisionErrors array (string) lastFailedRevisionErrors is a list of human readable errors during the failed deployment referenced in lastFailedRevision. lastFailedTime string lastFailedTime is the time the last failed revision failed the last time. lastFallbackCount integer lastFallbackCount is how often a fallback to a revision happened. nodeName string nodeName is the name of the node targetRevision integer targetRevision is the generation of the deployment we're trying to apply 17.1.9. .status.serviceAccountIssuers Description serviceAccountIssuers tracks history of used service account issuers. The item without expiration time represents the currently used service account issuer. The other items represents service account issuers that were used previously and are still being trusted. The default expiration for the items is set by the platform and it defaults to 24h. see: https://kubernetes.io/docs/tasks/configure-pod-container/configure-service-account/#service-account-token-volume-projection Type array 17.1.10. .status.serviceAccountIssuers[] Description Type object Property Type Description expirationTime string expirationTime is the time after which this service account issuer will be pruned and removed from the trusted list of service account issuers. name string name is the name of the service account issuer 17.2. API endpoints The following API endpoints are available: /apis/operator.openshift.io/v1/kubeapiservers DELETE : delete collection of KubeAPIServer GET : list objects of kind KubeAPIServer POST : create a KubeAPIServer /apis/operator.openshift.io/v1/kubeapiservers/{name} DELETE : delete a KubeAPIServer GET : read the specified KubeAPIServer PATCH : partially update the specified KubeAPIServer PUT : replace the specified KubeAPIServer /apis/operator.openshift.io/v1/kubeapiservers/{name}/status GET : read status of the specified KubeAPIServer PATCH : partially update status of the specified KubeAPIServer PUT : replace status of the specified KubeAPIServer 17.2.1. /apis/operator.openshift.io/v1/kubeapiservers HTTP method DELETE Description delete collection of KubeAPIServer Table 17.1. HTTP responses HTTP code Reponse body 200 - OK Status schema 401 - Unauthorized Empty HTTP method GET Description list objects of kind KubeAPIServer Table 17.2. HTTP responses HTTP code Reponse body 200 - OK KubeAPIServerList schema 401 - Unauthorized Empty HTTP method POST Description create a KubeAPIServer Table 17.3. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. 
Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 17.4. Body parameters Parameter Type Description body KubeAPIServer schema Table 17.5. HTTP responses HTTP code Reponse body 200 - OK KubeAPIServer schema 201 - Created KubeAPIServer schema 202 - Accepted KubeAPIServer schema 401 - Unauthorized Empty 17.2.2. /apis/operator.openshift.io/v1/kubeapiservers/{name} Table 17.6. Global path parameters Parameter Type Description name string name of the KubeAPIServer HTTP method DELETE Description delete a KubeAPIServer Table 17.7. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed Table 17.8. HTTP responses HTTP code Reponse body 200 - OK Status schema 202 - Accepted Status schema 401 - Unauthorized Empty HTTP method GET Description read the specified KubeAPIServer Table 17.9. HTTP responses HTTP code Reponse body 200 - OK KubeAPIServer schema 401 - Unauthorized Empty HTTP method PATCH Description partially update the specified KubeAPIServer Table 17.10. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 17.11. HTTP responses HTTP code Reponse body 200 - OK KubeAPIServer schema 401 - Unauthorized Empty HTTP method PUT Description replace the specified KubeAPIServer Table 17.12. 
Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 17.13. Body parameters Parameter Type Description body KubeAPIServer schema Table 17.14. HTTP responses HTTP code Reponse body 200 - OK KubeAPIServer schema 201 - Created KubeAPIServer schema 401 - Unauthorized Empty 17.2.3. /apis/operator.openshift.io/v1/kubeapiservers/{name}/status Table 17.15. Global path parameters Parameter Type Description name string name of the KubeAPIServer HTTP method GET Description read status of the specified KubeAPIServer Table 17.16. HTTP responses HTTP code Reponse body 200 - OK KubeAPIServer schema 401 - Unauthorized Empty HTTP method PATCH Description partially update status of the specified KubeAPIServer Table 17.17. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 17.18. HTTP responses HTTP code Reponse body 200 - OK KubeAPIServer schema 401 - Unauthorized Empty HTTP method PUT Description replace status of the specified KubeAPIServer Table 17.19. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. 
An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 17.20. Body parameters Parameter Type Description body KubeAPIServer schema Table 17.21. HTTP responses HTTP code Reponse body 200 - OK KubeAPIServer schema 201 - Created KubeAPIServer schema 401 - Unauthorized Empty | null | https://docs.redhat.com/en/documentation/openshift_container_platform/4.18/html/operator_apis/kubeapiserver-operator-openshift-io-v1 |
Chapter 10. Creating a New Camel XML file | Chapter 10. Creating a New Camel XML file Overview Apache Camel stores routes in an XML file that contains a camelContext element. When you create a new Fuse Integration project, the tooling provides an Apache Camel context (XML) file as part of the project by default. You can also add a new Camel XML file that includes all of the required namespaces preconfigured and a template camelContext element. Procedure To add a new Apache Camel context file to your project: Select File New Camel XML File from the main menu to open the Camel XML File wizard, as shown in Figure 10.1, "Camel XML File wizard" . Figure 10.1. Camel XML File wizard In RouteContainer , enter the location for the new file, or accept the default. You can click to search for an appropriate location. Important The Spring framework and the OSGi Blueprint framework require that all Apache Camel files be placed in specific locations under the project's META-INF or OSGI-INF folder: Spring - projectName/src/main/resources/META-INF/spring/ OSGi Blueprint - projectName/src/main/resources/OSGI-INF/blueprint/ In File Name , enter a name for the new context file, or accept the default ( camelContext.xml ). The file's name cannot contain spaces or special characters, and it must be unique within the JVM. In Framework , accept the default, or select which framework the routes will use: Spring - [default] for routes that will be deployed in Spring containers, non-OSGi containers, or as standalone applications OSGi Blueprint - for routes that will be deployed in OSGi containers Routes - for routes that you can load and add into existing camelContext s Click Finish . The new context file is added to the project and opened in the route editor. | null | https://docs.redhat.com/en/documentation/red_hat_fuse/7.13/html/tooling_user_guide/FIDENewRouteFile |
Making open source more inclusive | Making open source more inclusive Red Hat is committed to replacing problematic language in our code, documentation, and web properties. We are beginning with these four terms: master, slave, blacklist, and whitelist. Because of the enormity of this endeavor, these changes will be implemented gradually over several upcoming releases. For more details, see our CTO Chris Wright's message . | null | https://docs.redhat.com/en/documentation/red_hat_build_of_cryostat/2/html/release_notes_for_the_red_hat_build_of_cryostat_2.4/making-open-source-more-inclusive |
Chapter 32. Additional resources | Chapter 32. Additional resources Installing and configuring Red Hat Process Automation Manager on Red Hat JBoss EAP 7.4 Planning a Red Hat Process Automation Manager installation Deploying an Red Hat Process Automation Manager environment on Red Hat OpenShift Container Platform 4 using Operators Deploying an Red Hat Process Automation Manager environment on Red Hat OpenShift Container Platform 3 using templates | null | https://docs.redhat.com/en/documentation/red_hat_process_automation_manager/7.13/html/installing_and_configuring_red_hat_process_automation_manager/additional_resources_3 |
function::proc_mem_data | function::proc_mem_data Name function::proc_mem_data - Program data size (data + stack) in pages Synopsis Arguments None Description Returns the current process data size (data + stack) in pages, or zero when there is no current process or the number of pages couldn't be retrieved. | [
"proc_mem_data:long()"
]
| https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/systemtap_tapset_reference/api-proc-mem-data |
Preface | Preface Making open source more inclusive Red Hat is committed to replacing problematic language in our code, documentation, and web properties. We are beginning with these four terms: master, slave, blacklist, and whitelist. Because of the enormity of this endeavor, these changes will be implemented gradually over several upcoming releases. For more details, see our CTO Chris Wright's message . | null | https://docs.redhat.com/en/documentation/red_hat_build_of_apache_camel/4.8/html/getting_started_with_red_hat_build_of_apache_camel_for_spring_boot/pr01 |
Chapter 15. Network flows format reference | Chapter 15. Network flows format reference These are the specifications for network flows format, used both internally and when exporting flows to Kafka. 15.1. Network Flows format reference This is the specification of the network flows format. That format is used when a Kafka exporter is configured, for Prometheus metrics labels as well as internally for the Loki store. The "Filter ID" column shows which related name to use when defining Quick Filters (see spec.consolePlugin.quickFilters in the FlowCollector specification). The "Loki label" column is useful when querying Loki directly: label fields need to be selected using stream selectors . The "Cardinality" column gives information about the implied metric cardinality if this field was to be used as a Prometheus label with the FlowMetrics API. Refer to the FlowMetrics documentation for more information on using this API. Name Type Description Filter ID Loki label Cardinality OpenTelemetry Bytes number Number of bytes n/a no avoid bytes DnsErrno number Error number returned from DNS tracker ebpf hook function dns_errno no fine dns.errno DnsFlags number DNS flags for DNS record n/a no fine dns.flags DnsFlagsResponseCode string Parsed DNS header RCODEs name dns_flag_response_code no fine dns.responsecode DnsId number DNS record id dns_id no avoid dns.id DnsLatencyMs number Time between a DNS request and response, in milliseconds dns_latency no avoid dns.latency Dscp number Differentiated Services Code Point (DSCP) value dscp no fine dscp DstAddr string Destination IP address (ipv4 or ipv6) dst_address no avoid destination.address DstK8S_HostIP string Destination node IP dst_host_address no fine destination.k8s.host.address DstK8S_HostName string Destination node name dst_host_name no fine destination.k8s.host.name DstK8S_Name string Name of the destination Kubernetes object, such as Pod name, Service name or Node name. dst_name no careful destination.k8s.name DstK8S_Namespace string Destination namespace dst_namespace yes fine destination.k8s.namespace.name DstK8S_NetworkName string Destination network name dst_network no fine n/a DstK8S_OwnerName string Name of the destination owner, such as Deployment name, StatefulSet name, etc. dst_owner_name yes fine destination.k8s.owner.name DstK8S_OwnerType string Kind of the destination owner, such as Deployment, StatefulSet, etc. dst_kind no fine destination.k8s.owner.kind DstK8S_Type string Kind of the destination Kubernetes object, such as Pod, Service or Node. dst_kind yes fine destination.k8s.kind DstK8S_Zone string Destination availability zone dst_zone yes fine destination.zone DstMac string Destination MAC address dst_mac no avoid destination.mac DstPort number Destination port dst_port no careful destination.port DstSubnetLabel string Destination subnet label dst_subnet_label no fine n/a Duplicate boolean Indicates if this flow was also captured from another interface on the same host n/a no fine n/a Flags string[] List of TCP flags comprised in the flow, according to RFC-9293, with additional custom flags to represent the following per-packet combinations: - SYN_ACK - FIN_ACK - RST_ACK tcp_flags no careful tcp.flags FlowDirection number Flow interpreted direction from the node observation point. 
Can be one of: - 0: Ingress (incoming traffic, from the node observation point) - 1: Egress (outgoing traffic, from the node observation point) - 2: Inner (with the same source and destination node) node_direction yes fine host.direction IcmpCode number ICMP code icmp_code no fine icmp.code IcmpType number ICMP type icmp_type no fine icmp.type IfDirections number[] Flow directions from the network interface observation point. Can be one of: - 0: Ingress (interface incoming traffic) - 1: Egress (interface outgoing traffic) ifdirections no fine interface.directions Interfaces string[] Network interfaces interfaces no careful interface.names K8S_ClusterName string Cluster name or identifier cluster_name yes fine k8s.cluster.name K8S_FlowLayer string Flow layer: 'app' or 'infra' flow_layer yes fine k8s.layer NetworkEvents object[] Network events, such as network policy actions, composed of nested fields: - Feature (such as "acl" for network policies) - Type (such as an "AdminNetworkPolicy") - Namespace (namespace where the event applies, if any) - Name (name of the resource that triggered the event) - Action (such as "allow" or "drop") - Direction (Ingress or Egress) network_events no avoid n/a Packets number Number of packets pkt_drop_cause no avoid packets PktDropBytes number Number of bytes dropped by the kernel n/a no avoid drops.bytes PktDropLatestDropCause string Latest drop cause pkt_drop_cause no fine drops.latestcause PktDropLatestFlags number TCP flags on last dropped packet n/a no fine drops.latestflags PktDropLatestState string TCP state on last dropped packet pkt_drop_state no fine drops.lateststate PktDropPackets number Number of packets dropped by the kernel n/a no avoid drops.packets Proto number L4 protocol protocol no fine protocol Sampling number Sampling rate used for this flow n/a no fine n/a SrcAddr string Source IP address (ipv4 or ipv6) src_address no avoid source.address SrcK8S_HostIP string Source node IP src_host_address no fine source.k8s.host.address SrcK8S_HostName string Source node name src_host_name no fine source.k8s.host.name SrcK8S_Name string Name of the source Kubernetes object, such as Pod name, Service name or Node name. src_name no careful source.k8s.name SrcK8S_Namespace string Source namespace src_namespace yes fine source.k8s.namespace.name SrcK8S_NetworkName string Source network name src_network no fine n/a SrcK8S_OwnerName string Name of the source owner, such as Deployment name, StatefulSet name, etc. src_owner_name yes fine source.k8s.owner.name SrcK8S_OwnerType string Kind of the source owner, such as Deployment, StatefulSet, etc. src_kind no fine source.k8s.owner.kind SrcK8S_Type string Kind of the source Kubernetes object, such as Pod, Service or Node. 
src_kind yes fine source.k8s.kind SrcK8S_Zone string Source availability zone src_zone yes fine source.zone SrcMac string Source MAC address src_mac no avoid source.mac SrcPort number Source port src_port no careful source.port SrcSubnetLabel string Source subnet label src_subnet_label no fine n/a TimeFlowEndMs number End timestamp of this flow, in milliseconds n/a no avoid timeflowend TimeFlowRttNs number TCP Smoothed Round Trip Time (SRTT), in nanoseconds time_flow_rtt no avoid tcp.rtt TimeFlowStartMs number Start timestamp of this flow, in milliseconds n/a no avoid timeflowstart TimeReceived number Timestamp when this flow was received and processed by the flow collector, in seconds n/a no avoid timereceived Udns string[] List of User Defined Networks udns no careful n/a XlatDstAddr string Packet translation destination address xlat_dst_address no avoid n/a XlatDstPort number Packet translation destination port xlat_dst_port no careful n/a XlatSrcAddr string Packet translation source address xlat_src_address no avoid n/a XlatSrcPort number Packet translation source port xlat_src_port no careful n/a ZoneId number Packet translation zone id xlat_zone_id no avoid n/a _HashId string In conversation tracking, the conversation identifier id no avoid n/a _RecordType string Type of record: flowLog for regular flow logs, or newConnection , heartbeat , endConnection for conversation tracking type yes fine n/a | null | https://docs.redhat.com/en/documentation/openshift_container_platform/4.13/html/network_observability/json-flows-format-reference |
10.4. XML Representation of a Cluster | 10.4. XML Representation of a Cluster Example 10.1. An XML representation of a cluster | [
"<cluster id=\"00000000-0000-0000-0000-000000000000\" href=\"/ovirt-engine/api/clusters/00000000-0000-0000-0000-000000000000\"> <name>Default</name> <description>The default server cluster</description> <link rel=\"networks\" href=\"/ovirt-engine/api/clusters/00000000-0000-0000-0000-000000000000/networks\"/> <link rel=\"permissions\" href=\"/ovirt-engine/api/clusters/00000000-0000-0000-0000-000000000000/permissions\"/> <link rel=\"glustervolumes\" href=\"/ovirt-engine/api/clusters/00000000-0000-0000-0000-000000000000/glustervolumes\"/> <link rel=\"glusterhooks\" href=\"/ovirt-engine/api/clusters/00000000-0000-0000-0000-000000000000/glusterhooks\"/> <link rel=\"affinitygroups\" href=\"/ovirt-engine/api/clusters/00000000-0000-0000-0000-000000000000/affinitygroups\"/> <cpu id=\"Intel Penryn Family\"/> <architecture>X86_64<architecture/> <data_center id=\"00000000-0000-0000-0000-000000000000\" href=\"/ovirt-engine/api/datacenters/00000000-0000-0000-0000-000000000000\"/> <memory_policy> <overcommit percent=\"100\"/> <transparent_hugepages> <enabled>false</enabled> </transparent_hugepages> </memory_policy> <scheduling_policies> <policy>evenly_distributed</policy> <thresholds low=\"10\" high=\"75\" duration=\"120\"/> </scheduling_policies> <version major=\"4\" minor=\"0\"/> <supported_versions> <version major=\"4\" minor=\"0\"/> </supported_versions> <error_handling> <on_error>migrate</on_error> </error_handling> <virt_service>true</virt_service> <gluster_service>false</gluster_service> <threads_as_cores>false</threads_as_cores> <tunnel_migration>false</tunnel_migration> <trusted_service>false</trusted_service> <ha_reservation>false</ha_reservation> <ballooning_enabled>false</ballooning_enabled> <ksm> <enabled>true</enabled> </ksm> </cluster>"
]
| https://docs.redhat.com/en/documentation/red_hat_virtualization/4.3/html/version_3_rest_api_guide/xml_representation_of_a_host_cluster |
7.111. libvirt | 7.111. libvirt 7.111.1. RHBA-2015:1252 - libvirt bug fix update Updated libvirt packages that fix several bugs are now available for Red Hat Enterprise Linux 6. The libvirt library is a C API for managing and interacting with the virtualization capabilities of Linux and other operating systems. Bug Fixes BZ# 1198096 Previously, when the default CPU mask was specified while using Non-Uniform Memory Access (NUMA) pinning, virtual CPUs (vCPUs) could not be pinned to physical CPUs that were not contained in the default node mask. With this update, the control groups (cgroups) code correctly attaches only vCPU threads instead of the entire domain group, and using NUMA pinning with the default cpuset subsystem now works as expected. BZ# 1186142 The interface configuration of any libvirt domain which was of type='network' and referenced an "unmanaged" libvirt network had incorrect XML data for the interface transmitted during a migration, containing the "status" of the interface instead of the name of the network to use ("configuration"). As a consequence, the migration destination tried to set up the domain network interface using the status information from the source of the migration, and the migration thus failed. With this update, libvirt sends the configuration data for each device during migration rather than the status data, and the migration of a domain using interfaces of type='network' now succeeds. BZ# 1149667 In Red Hat Enterprise Linux 6.6, support was added for libvirt to report whether QEMU is capable of creating snapshots. However, libvirt did not probe for the snapshot capability properly. As a consequence, the snapshot capability of KVM Guest Image in VDSM was reported as unavailable even when it was available, and creating a disk snapshot in some cases failed. With this update, libvirt no longer reports QEMU snapshot capability, and therefore does not cause the described problem. BZ# 1138523 Previously, using the "virsh pool-refresh" command, or restarting or refreshing the libvirtd service after renaming a virtual storage volume in some cases caused the "virsh vol-list" to display an incorrect name for the renamed storage volume. This update adds a check for the resulting name, which returns an error if the storage volume name is incorrect. BZ# 1158036 Prior to this update, when using the "virsh save" command to save a domain to an NFS client with the "root squash" access rights reduction while running the libvirtd service with a non-default owner:group configuration, saving the NFS client failed with a "Transport endpoint is not connected" error message. This update ensures that the chmod operation during the saving process correctly specifies the non-default owner:group configuration, and using "virsh save" in the described scenario works as expected. BZ# 1113474 A virtual function (VF) could not be used in the macvtap-passthrough network if it was previously used in the hostdev network. With this update, libvirt ensures that the VF's MAC address is properly adjusted for the macvtap-passthrough network, which allows the VF to be used properly in the described scenario. Users of libvirt are advised to upgrade to these updated packages, which fix these bugs. After installing the updated packages, libvirtd will be restarted automatically. | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/6.7_technical_notes/package-libvirt |
Providing Feedback on Red Hat Documentation | Providing Feedback on Red Hat Documentation We appreciate your input on our documentation. Please let us know how we could make it better. You can submit feedback by filing a ticket in Bugzilla: Navigate to the Bugzilla website. In the Component field, use Documentation . In the Description field, enter your suggestion for improvement. Include a link to the relevant parts of the documentation. Click Submit Bug . | null | https://docs.redhat.com/en/documentation/red_hat_satellite/6.11/html/performance_tuning_guide/providing-feedback-on-red-hat-documentation_performance-tuning |
Provisioning APIs | Provisioning APIs OpenShift Container Platform 4.12 Reference guide for provisioning APIs Red Hat OpenShift Documentation Team | null | https://docs.redhat.com/en/documentation/openshift_container_platform/4.12/html/provisioning_apis/index |
probe::linuxmib.ListenOverflows | probe::linuxmib.ListenOverflows Name probe::linuxmib.ListenOverflows - Count of times a listen queue overflowed Synopsis linuxmib.ListenOverflows Values sk Pointer to the struct sock being acted on op Value to be added to the counter (default value of 1) Description The packet pointed to by skb is filtered by the function linuxmib_filter_key . If the packet passes the filter is is counted in the global ListenOverflows (equivalent to SNMP's MIB LINUX_MIB_LISTENOVERFLOWS) | null | https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/systemtap_tapset_reference/api-linuxmib-listenoverflows |
Chapter 5. Cloud | Chapter 5. Cloud The following chapters contain the most notable changes to public cloud platforms between RHEL 8 and RHEL 9: 5.1. Notable changes to Azure TDX support is available a Technology Preview for RHEL on Azure The Intel Trust Domain Extension (TDX) feature can as a Technology Preview now be used in RHEL 9.4 guest operating systems. If the host system supports TDX, you can deploy hardware-isolated RHEL 9 virtual machines (VMs), called trust domains (TDs). As a result, you will be able to create a CVM image with SecureBoot enabled on the Azure platform. 5.2. Notable changes to GCP TDX support is available a Technology Preview for RHEL on GCP The Intel Trust Domain Extension (TDX) feature can as a Technology Preview now be used in RHEL 9.4 guest operating systems. If the host system supports TDX, you can deploy hardware-isolated RHEL 9 virtual machines (VMs), called trust domains (TDs). With this enhancement, you can use the Intel Trust Domain Extension (TDX) feature in RHEL 9.4 on Google Cloud Platform. | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/9/html/considerations_in_adopting_rhel_9/assembly_cloud_considerations-in-adopting-rhel-9 |
Chapter 9. Event discovery | Chapter 9. Event discovery 9.1. Listing event sources and event source types It is possible to view a list of all event sources or event source types that exist or are available for use on your OpenShift Container Platform cluster. You can use the Knative ( kn ) CLI or the Developer perspective in the OpenShift Container Platform web console to list available event sources or event source types. 9.2. Listing event source types from the command line Using the Knative ( kn ) CLI provides a streamlined and intuitive user interface to view available event source types on your cluster. 9.2.1. Listing available event source types by using the Knative CLI You can list event source types that can be created and used on your cluster by using the kn source list-types CLI command. Prerequisites The OpenShift Serverless Operator and Knative Eventing are installed on the cluster. You have installed the Knative ( kn ) CLI. Procedure List the available event source types in the terminal: USD kn source list-types Example output TYPE NAME DESCRIPTION ApiServerSource apiserversources.sources.knative.dev Watch and send Kubernetes API events to a sink PingSource pingsources.sources.knative.dev Periodically send ping events to a sink SinkBinding sinkbindings.sources.knative.dev Binding for connecting a PodSpecable to a sink Optional: On OpenShift Container Platform, you can also list the available event source types in YAML format: USD kn source list-types -o yaml 9.3. Listing event source types from the Developer perspective It is possible to view a list of all available event source types on your cluster. Using the OpenShift Container Platform web console provides a streamlined and intuitive user interface to view available event source types. 9.3.1. Viewing available event source types within the Developer perspective Prerequisites You have logged in to the OpenShift Container Platform web console. The OpenShift Serverless Operator and Knative Eventing are installed on your OpenShift Container Platform cluster. You have created a project or have access to a project with the appropriate roles and permissions to create applications and other workloads in OpenShift Container Platform. Procedure Access the Developer perspective. Click +Add . Click Event Source . View the available event source types. 9.4. Listing event sources from the command line Using the Knative ( kn ) CLI provides a streamlined and intuitive user interface to view existing event sources on your cluster. 9.4.1. Listing available event sources by using the Knative CLI You can list existing event sources by using the kn source list command. Prerequisites The OpenShift Serverless Operator and Knative Eventing are installed on the cluster. You have installed the Knative ( kn ) CLI. Procedure List the existing event sources in the terminal: USD kn source list Example output NAME TYPE RESOURCE SINK READY a1 ApiServerSource apiserversources.sources.knative.dev ksvc:eshow2 True b1 SinkBinding sinkbindings.sources.knative.dev ksvc:eshow3 False p1 PingSource pingsources.sources.knative.dev ksvc:eshow1 True Optional: You can list event sources of a specific type only, by using the --type flag: USD kn source list --type <event_source_type> Example command USD kn source list --type PingSource Example output NAME TYPE RESOURCE SINK READY p1 PingSource pingsources.sources.knative.dev ksvc:eshow1 True | [
"kn source list-types",
"TYPE NAME DESCRIPTION ApiServerSource apiserversources.sources.knative.dev Watch and send Kubernetes API events to a sink PingSource pingsources.sources.knative.dev Periodically send ping events to a sink SinkBinding sinkbindings.sources.knative.dev Binding for connecting a PodSpecable to a sink",
"kn source list-types -o yaml",
"kn source list",
"NAME TYPE RESOURCE SINK READY a1 ApiServerSource apiserversources.sources.knative.dev ksvc:eshow2 True b1 SinkBinding sinkbindings.sources.knative.dev ksvc:eshow3 False p1 PingSource pingsources.sources.knative.dev ksvc:eshow1 True",
"kn source list --type <event_source_type>",
"kn source list --type PingSource",
"NAME TYPE RESOURCE SINK READY p1 PingSource pingsources.sources.knative.dev ksvc:eshow1 True"
]
| https://docs.redhat.com/en/documentation/red_hat_openshift_serverless/1.33/html/eventing/event-discovery |
Chapter 10. Customizing the system in the installer During the customization phase of the installation, you must perform certain configuration tasks to enable the installation of Red Hat Enterprise Linux. These tasks include: Configuring the storage and assigning mount points. Selecting a base environment with software to be installed. Setting a password for the root user or creating a local user. Optionally, you can further customize the system, for example, by configuring system settings and connecting the host to a network. 10.1. Setting the installer language You can select the language to be used by the installation program before starting the installation. Prerequisites You have created installation media. You have specified an installation source if you are using the Boot ISO image file. You have booted the installation. Procedure After you select the Install Red Hat Enterprise Linux option from the boot menu, the Welcome to Red Hat Enterprise Linux screen appears. From the left-hand pane of the Welcome to Red Hat Enterprise Linux window, select a language. Alternatively, search for the preferred language by using the text box. Note A language is pre-selected by default. If network access is configured, that is, if you booted from a network server instead of local media, the pre-selected language is determined by the automatic location detection feature of the GeoIP module. If you use the inst.lang= option on the boot command line or in your PXE server configuration, then the language that you define with the boot option is selected. From the right-hand pane of the Welcome to Red Hat Enterprise Linux window, select a location specific to your region. Click Continue to proceed to the graphical installations window. If you are installing a pre-release version of Red Hat Enterprise Linux, a warning message is displayed about the pre-release status of the installation media. To continue with the installation, click I want to proceed . To quit the installation and reboot the system, click I want to exit . 10.2. Configuring the storage devices You can install Red Hat Enterprise Linux on a large variety of storage devices. You can configure basic, locally accessible storage devices in the Installation Destination window. Basic storage devices directly connected to the local system, such as disks and solid-state drives, are displayed in the Local Standard Disks section of the window. On 64-bit IBM Z, this section contains activated Direct Access Storage Devices (DASDs). Warning A known issue prevents DASDs configured as HyperPAV aliases from being automatically attached to the system after the installation is complete. These storage devices are available during the installation, but are not immediately accessible after you finish installing and reboot. To attach HyperPAV alias devices, add them manually to the /etc/dasd.conf configuration file of the system. 10.2.1. Configuring installation destination You can use the Installation Destination window to configure the storage options, for example, the disks that you want to use as the installation target for your Red Hat Enterprise Linux installation. You must select at least one disk. Prerequisites The Installation Summary window is open. Ensure that you back up your data if you plan to use a disk that already contains data.
For example, if you want to shrink an existing Microsoft Windows partition and install Red Hat Enterprise Linux as a second system, or if you are upgrading a release of Red Hat Enterprise Linux. Manipulating partitions always carries a risk. For example, if the process is interrupted or fails for any reason, data on the disk can be lost. Procedure From the Installation Summary window, click Installation Destination . Perform the following operations in the Installation Destination window that opens: From the Local Standard Disks section, select the storage device that you require; a white check mark indicates your selection. Disks without a white check mark are not used during the installation process; they are ignored if you choose automatic partitioning, and they are not available in manual partitioning. The Local Standard Disks section shows all locally available storage devices, for example, SATA, IDE, and SCSI disks, and USB flash and external disks. Any storage devices connected after the installation program has started are not detected. If you use a removable drive to install Red Hat Enterprise Linux, your system is unusable if you remove the device. Optional: Click the Refresh link in the lower right-hand side of the window if you have connected new disks and want to configure additional local storage devices. The Rescan Disks dialog box opens. Click Rescan Disks and wait until the scanning process completes. All storage changes that you make during the installation are lost when you click Rescan Disks . Click OK to return to the Installation Destination window. All detected disks including any new ones are displayed under the Local Standard Disks section. Optional: Click Add a disk to add a specialized storage device. The Storage Device Selection window opens and lists all storage devices that the installation program has access to. Optional: Under Storage Configuration , select the Automatic radio button for automatic partitioning. You can also configure custom partitioning. For more details, see Configuring manual partitioning . Optional: Select I would like to make additional space available to reclaim space from an existing partitioning layout. For example, if a disk you want to use already has a different operating system and you want to make this system's partitions smaller to allow more room for Red Hat Enterprise Linux. Optional: Select Encrypt my data to encrypt all partitions except the ones needed to boot the system (such as /boot ) using Linux Unified Key Setup (LUKS). Encrypting your disk adds an extra layer of security. Click Done . The Disk Encryption Passphrase dialog box opens. Type your passphrase in the Passphrase and Confirm fields. Click Save Passphrase to complete disk encryption. Warning If you lose the LUKS passphrase, any encrypted partitions and their data are completely inaccessible. There is no way to recover a lost passphrase. However, if you perform a Kickstart installation, you can save encryption passphrases and create backup encryption passphrases during the installation. For more information, see the Automatically installing RHEL document. Optional: Click the Full disk summary and bootloader link in the lower left-hand side of the window to select which storage device contains the boot loader. For more information, see Configuring boot loader . In most cases it is sufficient to leave the boot loader in the default location. Some configurations, for example, systems that require chain loading from another boot loader, require the boot drive to be specified manually.
Click Done . Optional: The Reclaim Disk Space dialog box appears if you selected automatic partitioning and the I would like to make additional space available option, or if there is not enough free space on the selected disks to install Red Hat Enterprise Linux. It lists all configured disk devices and all partitions on those devices. The dialog box displays information about the minimal disk space the system needs for an installation with the currently selected package set and how much space you have reclaimed. To start the reclaiming process: Review the displayed list of available storage devices. The Reclaimable Space column shows how much space can be reclaimed from each entry. Select a disk or partition to reclaim space. Use the Shrink button to use free space on a partition while preserving the existing data. Use the Delete button to delete that partition or all partitions on a selected disk including existing data. Use the Delete all button to delete all existing partitions on all disks including existing data and make this space available to install Red Hat Enterprise Linux. Click Reclaim space to apply the changes and return to graphical installations. No disk changes are made until you click Begin Installation on the Installation Summary window. The Reclaim Space dialog only marks partitions for resizing or deletion; no action is performed. Additional resources How to use dm-crypt on IBM Z, LinuxONE and with the PAES cipher 10.2.2. Special cases during installation destination configuration Following are some special cases to consider when you are configuring installation destinations: Some BIOS types do not support booting from a RAID card. In these instances, the /boot partition must be created on a partition outside of the RAID array, such as on a separate disk. It is necessary to use an internal disk for partition creation with problematic RAID cards. A /boot partition is also necessary for software RAID setups. If you choose to partition your system automatically, you should manually edit your /boot partition. To configure the Red Hat Enterprise Linux boot loader to chain load from a different boot loader, you must specify the boot drive manually by clicking the Full disk summary and bootloader link from the Installation Destination window. When you install Red Hat Enterprise Linux on a system with both multipath and non-multipath storage devices, the automatic partitioning layout in the installation program creates volume groups that contain a mix of multipath and non-multipath devices. This defeats the purpose of multipath storage. Select either multipath or non-multipath devices on the Installation Destination window. Alternatively, proceed to manual partitioning. 10.2.3. Configuring boot loader Red Hat Enterprise Linux uses GRand Unified Bootloader version 2 ( GRUB2 ) as the boot loader for AMD64 and Intel 64, IBM Power Systems, and ARM. For 64-bit IBM Z, the zipl boot loader is used. The boot loader is the first program that runs when the system starts and is responsible for loading and transferring control to an operating system. GRUB2 can boot any compatible operating system (including Microsoft Windows) and can also use chain loading to transfer control to other boot loaders for unsupported operating systems. Warning Installing GRUB2 may overwrite your existing boot loader. If an operating system is already installed, the Red Hat Enterprise Linux installation program attempts to automatically detect and configure the boot loader to start the other operating system. 
If the boot loader is not detected, you can manually configure any additional operating systems after you finish the installation. If you are installing a Red Hat Enterprise Linux system with more than one disk, you might want to manually specify the disk where you want to install the boot loader. Procedure From the Installation Destination window, click the Full disk summary and bootloader link. The Selected Disks dialog box opens. The boot loader is installed on the device of your choice. On a UEFI system, the EFI system partition is created on the target device during guided partitioning. To change the boot device, select a device from the list and click Set as Boot Device . You can set only one device as the boot device. To disable a new boot loader installation, select the device currently marked for boot and click Do not install boot loader . This ensures GRUB2 is not installed on any device. Warning If you choose not to install a boot loader, you cannot boot the system directly and you must use another boot method, such as a standalone commercial boot loader application. Use this option only if you have another way to boot your system. The boot loader may also require a special partition to be created, depending on whether your system uses BIOS or UEFI firmware, or whether the boot drive has a GUID Partition Table (GPT) or a Master Boot Record (MBR, also known as msdos ) label. If you use automatic partitioning, the installation program creates the partition. 10.2.4. Storage device selection The storage device selection window lists all storage devices that the installation program can access. Depending on your system and available hardware, some tabs might not be displayed. The devices are grouped under the following tabs: Multipath Devices Storage devices accessible through more than one path, such as through multiple SCSI controllers or Fiber Channel ports on the same system. The installation program only detects multipath storage devices with serial numbers that are 16 or 32 characters long. Other SAN Devices Devices available on a Storage Area Network (SAN). Firmware RAID Storage devices attached to a firmware RAID controller. NVDIMM Devices Under specific circumstances, Red Hat Enterprise Linux 9 can boot and run from NVDIMM devices in sector mode on the Intel 64 and AMD64 architectures. IBM Z Devices DASDs and storage devices, or Logical Units (LUNs), attached through the zSeries Linux FCP (Fiber Channel Protocol) driver. 10.2.5. Filtering storage devices In the storage device selection window you can filter storage devices either by their World Wide Identifier (WWID) or by the port, target, or logical unit number (LUN). Prerequisite The Installation Summary window is open. Procedure From the Installation Summary window, click Installation Destination . The Installation Destination window opens, listing all available drives. Under the Specialized & Network Disks section, click Add a disk . The storage devices selection window opens. Click the Search by tab to search by port, target, LUN, or WWID. Searching by WWID or LUN requires additional values in the corresponding input text fields. Select the option that you require from the Search drop-down menu. Click Find to start the search. Each device is presented on a separate row with a corresponding check box. Select the check box to enable the device that you require during the installation process.
Later in the installation process you can choose to install Red Hat Enterprise Linux on any of the selected devices, and you can choose to mount any of the other selected devices as part of the installed system automatically. Selected devices are not automatically erased by the installation process and selecting a device does not put the data stored on the device at risk. Note You can add devices to the system after installation by modifying the /etc/fstab file. Click Done to return to the Installation Destination window. Any storage devices that you do not select are hidden from the installation program entirely. To chain load the boot loader from a different boot loader, select all the devices present. 10.2.6. Using advanced storage options To use an advanced storage device, you can configure an iSCSI (SCSI over TCP/IP) target or FCoE (Fibre Channel over Ethernet) SAN (Storage Area Network). To use iSCSI storage devices for the installation, the installation program must be able to discover them as iSCSI targets and be able to create an iSCSI session to access them. Each of these steps might require a user name and password for Challenge Handshake Authentication Protocol (CHAP) authentication. Additionally, you can configure an iSCSI target to authenticate the iSCSI initiator on the system to which the target is attached (reverse CHAP), both for discovery and for the session. Used together, CHAP and reverse CHAP are called mutual CHAP or two-way CHAP. Mutual CHAP provides the greatest level of security for iSCSI connections, particularly if the user name and password are different for CHAP authentication and reverse CHAP authentication. Repeat the iSCSI discovery and iSCSI login steps to add all required iSCSI storage. You cannot change the name of the iSCSI initiator after you attempt discovery for the first time. To change the iSCSI initiator name, you must restart the installation. 10.2.6.1. Discovering and starting an iSCSI session The Red Hat Enterprise Linux installer can discover and log in to iSCSI disks in two ways: iSCSI Boot Firmware Table (iBFT) When the installer starts, it checks if the BIOS or add-on boot ROMs of the system support iBFT. It is a BIOS extension for systems that can boot from iSCSI. If the BIOS supports iBFT, the installer reads the iSCSI target information for the configured boot disk from the BIOS and logs in to this target, making it available as an installation target. To automatically connect to an iSCSI target, activate a network device for accessing the target. To do so, use the ip=ibft boot option. For more information, see Network boot options . Discover and add iSCSI targets manually You can discover and start an iSCSI session to identify available iSCSI targets (network storage devices) in the installer's graphical user interface. Prerequisites The Installation Summary window is open. Procedure From the Installation Summary window, click Installation Destination . The Installation Destination window opens, listing all available drives. Under the Specialized & Network Disks section, click Add a disk . The storage devices selection window opens. Click Add iSCSI target . The Add iSCSI Storage Target window opens. Important You cannot place the /boot partition on iSCSI targets that you have manually added using this method - an iSCSI target containing a /boot partition must be configured for use with iBFT. 
However, in instances where the installed system is expected to boot from iSCSI with iBFT configuration provided by a method other than firmware iBFT, for example using iPXE, you can remove the /boot partition restriction using the inst.nonibftiscsiboot installer boot option. Enter the IP address of the iSCSI target in the Target IP Address field. Type a name in the iSCSI Initiator Name field for the iSCSI initiator in iSCSI qualified name (IQN) format. A valid IQN entry contains the following information: The string iqn. (note the period). A date code that specifies the year and month in which your organization's Internet domain or subdomain name was registered, represented as four digits for the year, a dash, and two digits for the month, followed by a period. For example, represent September 2010 as 2010-09. Your organization's Internet domain or subdomain name, presented in reverse order with the top-level domain first. For example, represent the subdomain storage.example.com as com.example.storage . A colon followed by a string that uniquely identifies this particular iSCSI initiator within your domain or subdomain. For example :diskarrays-sn-a8675309 . A complete IQN is as follows: iqn.2010-09.storage.example.com:diskarrays-sn-a8675309 . The installation program pre populates the iSCSI Initiator Name field with a name in this format to help you with the structure. For more information about IQNs, see 3.2.6. iSCSI Names in RFC 3720 - Internet Small Computer Systems Interface (iSCSI) available from tools.ietf.org and 1. iSCSI Names and Addresses in RFC 3721 - Internet Small Computer Systems Interface (iSCSI) Naming and Discovery available from tools.ietf.org. Select the Discovery Authentication Type drop-down menu to specify the type of authentication to use for iSCSI discovery. The following options are available: No credentials CHAP pair CHAP pair and a reverse pair Do one of the following: If you selected CHAP pair as the authentication type, enter the user name and password for the iSCSI target in the CHAP Username and CHAP Password fields. If you selected CHAP pair and a reverse pair as the authentication type, enter the user name and password for the iSCSI target in the CHAP Username and CHAP Password field, and the user name and password for the iSCSI initiator in the Reverse CHAP Username and Reverse CHAP Password fields. Optional: Select the Bind targets to network interfaces check box. Click Start Discovery . The installation program attempts to discover an iSCSI target based on the information provided. If discovery succeeds, the Add iSCSI Storage Target window displays a list of all iSCSI nodes discovered on the target. Select the check boxes for the node that you want to use for installation. The Node login authentication type menu contains the same options as the Discovery Authentication Type menu. However, if you need credentials for discovery authentication, use the same credentials to log in to a discovered node. Click the additional Use the credentials from discovery drop-down menu. When you provide the proper credentials, the Log In button becomes available. Click Log In to initiate an iSCSI session. While the installer uses iscsiadm to find and log into iSCSI targets, iscsiadm automatically stores any information about these targets in the iscsiadm iSCSI database. The installer then copies this database to the installed system and marks any iSCSI targets that are not used for root partition, so that the system automatically logs in to them when it starts. 
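For reference, the discovery and login steps that the installer performs are roughly equivalent to running the iscsiadm utility manually, which can be useful if you need to troubleshoot iSCSI connectivity from a shell. The following sketch uses a hypothetical portal address (192.168.1.10:3260) and a hypothetical target IQN; neither value comes from your environment, so substitute your own:
# Discover the targets that the portal presents (portal address is an example)
iscsiadm --mode discovery --type sendtargets --portal 192.168.1.10:3260
# Log in to one of the discovered nodes (target IQN and portal are examples)
iscsiadm --mode node --targetname iqn.2010-09.com.example.storage:target1 --portal 192.168.1.10:3260 --login
# List the iSCSI sessions that are currently established
iscsiadm --mode session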
If the root partition is placed on an iSCSI target, initrd logs into this target and the installer does not include this target in start up scripts to avoid multiple attempts to log into the same target. 10.2.6.2. Configuring FCoE parameters You can discover the FCoE (Fibre Channel over Ethernet) devices from the Installation Destination window by configuring the FCoE parameters accordingly. Prerequisite The Installation Summary window is open. Procedure From the Installation Summary window, click Installation Destination . The Installation Destination window opens, listing all available drives. Under the Specialized & Network Disks section, click Add a disk . The storage devices selection window opens. Click Add FCoE SAN . A dialog box opens for you to configure network interfaces for discovering FCoE storage devices. Select a network interface that is connected to an FCoE switch in the NIC drop-down menu. Click Add FCoE disk(s) to scan the network for SAN devices. Select the required check boxes: Use DCB: Data Center Bridging (DCB) is a set of enhancements to the Ethernet protocols designed to increase the efficiency of Ethernet connections in storage networks and clusters. Select the check box to enable or disable the installation program's awareness of DCB. Enable this option only for network interfaces that require a host-based DCBX client. For configurations on interfaces that use a hardware DCBX client, disable the check box. Use auto vlan: Auto VLAN is enabled by default and indicates whether VLAN discovery should be performed. If this check box is enabled, then the FIP (FCoE Initiation Protocol) VLAN discovery protocol runs on the Ethernet interface when the link configuration has been validated. If they are not already configured, network interfaces for any discovered FCoE VLANs are automatically created and FCoE instances are created on the VLAN interfaces. Discovered FCoE devices are displayed under the Other SAN Devices tab in the Installation Destination window. 10.2.6.3. Configuring DASD storage devices You can discover and configure the DASD storage devices from the Installation Destination window. Prerequisite The Installation Summary window is open. Procedure From the Installation Summary window, click Installation Destination . The Installation Destination window opens, listing all available drives. Under the Specialized & Network Disks section, click Add a disk . The storage devices selection window opens. Click Add DASD ECKD . The Add DASD Storage Target dialog box opens and prompts you to specify a device number, such as 0.0.0204 , and attach additional DASDs that were not detected when the installation started. Type the device number of the DASD that you want to attach in the Device number field. Click Start Discovery . If a DASD with the specified device number is found and if it is not already attached, the dialog box closes and the newly-discovered drives appear in the list of drives. You can then select the check boxes for the required devices and click Done . The new DASDs are available for selection, marked as DASD device 0.0. xxxx in the Local Standard Disks section of the Installation Destination window. If you entered an invalid device number, or if the DASD with the specified device number is already attached to the system, an error message appears in the dialog box, explaining the error and prompting you to try again with a different device number. Additional resources Preparing an ECKD type DASD for use 10.2.6.4. 
Configuring FCP devices FCP devices enable 64-bit IBM Z to use SCSI devices rather than, or in addition to, Direct Access Storage Device (DASD) devices. FCP devices provide a switched fabric topology that enables 64-bit IBM Z systems to use SCSI LUNs as disk devices in addition to traditional DASD devices. Prerequisites The Installation Summary window is open. For an FCP-only installation, you have removed the DASD= option from the CMS configuration file or the rd.dasd= option from the parameter file to indicate that no DASD is present. Procedure From the Installation Summary window, click Installation Destination . The Installation Destination window opens, listing all available drives. Under the Specialized & Network Disks section, click Add a disk . The storage devices selection window opens. Click Add ZFCP LUN . The Add zFCP Storage Target dialog box opens allowing you to add a FCP (Fibre Channel Protocol) storage device. 64-bit IBM Z requires that you enter any FCP device manually so that the installation program can activate FCP LUNs. You can enter FCP devices either in the graphical installation, or as a unique parameter entry in the parameter or CMS configuration file. The values that you enter must be unique to each site that you configure. Type the 4 digit hexadecimal device number in the Device number field. When installing RHEL-9.0 or older releases or if the zFCP device is not configured in NPIV mode, or when auto LUN scanning is disabled by the zfcp.allow_lun_scan=0 kernel module parameter, provide the following values: Type the 16 digit hexadecimal World Wide Port Number (WWPN) in the WWPN field. Type the 16 digit hexadecimal FCP LUN identifier in the LUN field. Click Start Discovery to connect to the FCP device. The newly-added devices are displayed in the IBM Z tab of the Installation Destination window. Use only lower-case letters in hex values. If you enter an incorrect value and click Start Discovery , the installation program displays a warning. You can edit the configuration information and retry the discovery attempt. For more information about these values, consult the hardware documentation and check with your system administrator. 10.2.7. Installing to an NVDIMM device Non-Volatile Dual In-line Memory Module (NVDIMM) devices combine the performance of RAM with disk-like data persistence when no power is supplied. Under specific circumstances, Red Hat Enterprise Linux 9 can boot and run from NVDIMM devices. 10.2.7.1. Criteria for using an NVDIMM device as an installation target You can install Red Hat Enterprise Linux 9 to Non-Volatile Dual In-line Memory Module (NVDIMM) devices in sector mode on the Intel 64 and AMD64 architectures, supported by the nd_pmem driver. Conditions for using an NVDIMM device as storage To use an NVDIMM device as storage, the following conditions must be satisfied: The architecture of the system is Intel 64 or AMD64. The NVDIMM device is configured to sector mode. The installation program can reconfigure NVDIMM devices to this mode. The NVDIMM device must be supported by the nd_pmem driver. Conditions for booting from an NVDIMM Device Booting from an NVDIMM device is possible under the following conditions: All conditions for using the NVDIMM device as storage are satisfied. The system uses UEFI. The NVDIMM device must be supported by firmware available on the system, or by an UEFI driver. The UEFI driver may be loaded from an option ROM of the device itself. The NVDIMM device must be made available under a namespace. 
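The installer can reconfigure a namespace to sector mode for you, as described in the following section. If you prefer to inspect or prepare the namespace from a shell before the installation, a minimal sketch with the ndctl utility looks like the following; the namespace name namespace0.0 is only an example, and reconfiguring a namespace destroys any data stored on it:
# Show the existing namespaces and their current mode
ndctl list --namespaces
# Reconfigure the example namespace to sector mode (destroys data on that namespace)
ndctl create-namespace --reconfig=namespace0.0 --mode=sector --force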
To utilize the high performance of NVDIMM devices during booting, place the /boot and /boot/efi directories on the device. The Execute-in-place (XIP) feature of NVDIMM devices is not supported during booting and the kernel is loaded into conventional memory. 10.2.7.2. Configuring an NVDIMM device using the graphical installation mode A Non-Volatile Dual In-line Memory Module (NVDIMM) device must be properly configured for use by Red Hat Enterprise Linux 9 using the graphical installation. Warning The process of reconfiguring an NVDIMM device destroys any data stored on the device. Prerequisites An NVDIMM device is present on the system and satisfies all the other conditions for usage as an installation target. The installation has booted and the Installation Summary window is open. Procedure From the Installation Summary window, click Installation Destination . The Installation Destination window opens, listing all available drives. Under the Specialized & Network Disks section, click Add a disk . The storage devices selection window opens. Click the NVDIMM Devices tab. To reconfigure a device, select it from the list. If a device is not listed, it is not in sector mode. Click Reconfigure NVDIMM . A reconfiguration dialog opens. Enter the sector size that you require and click Start Reconfiguration . The supported sector sizes are 512 and 4096 bytes. When reconfiguration completes, click OK . Select the device check box. Click Done to return to the Installation Destination window. The NVDIMM device that you reconfigured is displayed in the Specialized & Network Disks section. Click Done to return to the Installation Summary window. The NVDIMM device is now available for you to select as an installation target. Additionally, if the device meets the requirements for booting, you can set the device as a boot device. 10.3. Configuring the root user and creating local accounts 10.3.1. Configuring a root password You must configure a root password to finish the installation process and to log in to the administrator (also known as superuser or root) account that is used for system administration tasks. These tasks include installing and updating software packages and changing system-wide configuration such as network and firewall settings, storage options, and adding or modifying users, groups and file permissions. To gain root privileges on the installed system, you can either use the root account or create a user account with administrative privileges (a member of the wheel group). The root account is always created during the installation. Switch to the administrator account only when you need to perform a task that requires administrator access. Warning The root account has complete control over the system. If unauthorized personnel gain access to the account, they can access or delete users' personal files. Procedure From the Installation Summary window, select User Settings > Root Password . The Root Password window opens. Type your password in the Root Password field. The requirements for creating a strong root password are: Must be at least eight characters long May contain numbers, letters (upper and lower case) and symbols Is case-sensitive Type the same password in the Confirm field. Optional: Select the Lock root account option to disable root access to the system. Optional: Select the Allow root SSH login with password option to enable SSH access (with password) to this system as a root user. By default, password-based SSH root access is disabled.
Click Done to confirm your root password and return to the Installation Summary window. If you proceed with a weak password, you must click Done twice. 10.3.2. Creating a user account Create a user account to finish the installation. If you do not create a user account, you must log in to the system as root directly, which is not recommended. Procedure On the Installation Summary window, select User Settings > User Creation . The Create User window opens. Type the user account name into the Full name field, for example: John Smith. Type the username into the User name field, for example: jsmith. The User name is used to log in from a command line; if you install a graphical environment, then your graphical login manager uses the Full name . Select the Make this user administrator check box if the user requires administrative rights (the installation program adds the user to the wheel group ). An administrator user can use the sudo command to perform tasks that are only available to root using the user password, instead of the root password. This may be more convenient, but it can also pose a security risk. Select the Require a password to use this account check box. If you give administrator privileges to a user, ensure the account is password protected. Never give a user administrator privileges without assigning a password to the account. Type a password into the Password field. Type the same password into the Confirm password field. Click Done to apply the changes and return to the Installation Summary window. 10.3.3. Editing advanced user settings This procedure describes how to edit the default settings for the user account in the Advanced User Configuration dialog box. Procedure On the Create User window, click Advanced . Edit the details in the Home directory field, if required. The field is populated by default with /home/ username . In the User and Group IDs section you can: Select the Specify a user ID manually check box and use + or - to enter the required value. The default value is 1000. User IDs (UIDs) 0-999 are reserved by the system so they cannot be assigned to a user. Select the Specify a group ID manually check box and use + or - to enter the required value. The default group name is the same as the user name, and the default Group ID (GID) is 1000. GIDs 0-999 are reserved by the system so they cannot be assigned to a user group. Specify additional groups as a comma-separated list in the Group Membership field. Groups that do not already exist are created; you can specify custom GIDs for additional groups in parentheses. If you do not specify a custom GID for a new group, the new group receives a GID automatically. The user account created always has one default group membership (the user's default group with an ID set in the Specify a group ID manually field). Click Save Changes to apply the updates and return to the Create User window. 10.4. Configuring manual partitioning You can use manual partitioning to configure your disk partitions and mount points and define the file system that Red Hat Enterprise Linux is installed on. Before installation, you should consider whether you want to use partitioned or unpartitioned disk devices. For more information about the advantages and disadvantages to using partitioning on LUNs, either directly or with LVM, see the Red Hat Knowledgebase solution advantages and disadvantages to using partitioning on LUNs .
You have different partitioning and storage options available, including Standard Partitions , LVM , and LVM thin provisioning . These options provide various benefits and configurations for managing your system's storage effectively. Standard partition A standard partition contains a file system or swap space. Standard partitions are most commonly used for /boot and the BIOS Boot and EFI System partitions . You can use LVM logical volumes for most other purposes. LVM Choosing LVM (or Logical Volume Management) as the device type creates an LVM logical volume. LVM improves performance when using physical disks, and it allows for advanced setups such as using multiple physical disks for one mount point, and setting up software RAID for increased performance, reliability, or both. LVM thin provisioning Using thin provisioning, you can manage a storage pool of free space, known as a thin pool, which can be allocated to an arbitrary number of devices when needed by applications. You can dynamically expand the pool when needed for cost-effective allocation of storage space. An installation of Red Hat Enterprise Linux requires a minimum of one partition, but it is recommended to use at least the following partitions or volumes: / , /home , /boot , and swap . You can also create additional partitions and volumes as you require. To prevent data loss, it is recommended that you back up your data before proceeding. If you are upgrading or creating a dual-boot system, you should back up any data you want to keep on your storage devices. 10.4.1. Recommended partitioning scheme Create separate file systems at the following mount points. However, if required, you can also create the file systems at /usr , /var , and /tmp mount points. /boot / (root) /home swap /boot/efi PReP This partition scheme is recommended for bare metal deployments, and it does not apply to virtual and cloud deployments. /boot partition - recommended size at least 1 GiB The partition mounted on /boot contains the operating system kernel, which allows your system to boot Red Hat Enterprise Linux 9, along with files used during the bootstrap process. Due to the limitations of most firmware, create a small partition to hold these files. In most scenarios, a 1 GiB boot partition is adequate. Unlike other mount points, using an LVM volume for /boot is not possible - /boot must be located on a separate disk partition. If you have a RAID card, be aware that some BIOS types do not support booting from the RAID card. In such a case, the /boot partition must be created on a partition outside of the RAID array, such as on a separate disk. Warning Normally, the /boot partition is created automatically by the installation program. However, if the / (root) partition is larger than 2 TiB and (U)EFI is used for booting, you need to create a separate /boot partition that is smaller than 2 TiB to boot the machine successfully. Ensure the /boot partition is located within the first 2 TB of the disk during manual partitioning. Placing the /boot partition beyond the 2 TB boundary might result in a successful installation, but the system fails to boot because BIOS cannot read the /boot partition beyond this limit. root - recommended size of 10 GiB This is where " / ", or the root directory, is located. The root directory is the top level of the directory structure. By default, all files are written to this file system unless a different file system is mounted in the path being written to, for example, /boot or /home .
While a 5 GiB root file system allows you to install a minimal installation, it is recommended to allocate at least 10 GiB so that you can install as many package groups as you want. Do not confuse the / directory with the /root directory. The /root directory is the home directory of the root user. The /root directory is sometimes referred to as slash root to distinguish it from the root directory. /home - recommended size at least 1 GiB To store user data separately from system data, create a dedicated file system for the /home directory. Base the file system size on the amount of data that is stored locally, number of users, and so on. You can upgrade or reinstall Red Hat Enterprise Linux 9 without erasing user data files. If you select automatic partitioning, it is recommended to have at least 55 GiB of disk space available for the installation, to ensure that the /home file system is created. swap partition - recommended size at least 1 GiB Swap file systems support virtual memory; data is written to a swap file system when there is not enough RAM to store the data your system is processing. Swap size is a function of system memory workload, not total system memory, and therefore is not equal to the total system memory size. It is important to analyze what applications a system will be running and the load those applications will serve in order to determine the system memory workload. Application providers and developers can provide guidance. When the system runs out of swap space, the kernel terminates processes as the system RAM is exhausted. Configuring too much swap space results in storage devices being allocated but idle and is a poor use of resources. Too much swap space can also hide memory leaks. The maximum size for a swap partition and other additional information can be found in the mkswap(8) manual page. The following table provides the recommended size of a swap partition depending on the amount of RAM in your system and whether you want sufficient memory for your system to hibernate. If you let the installation program partition your system automatically, the swap partition size is established using these guidelines. Automatic partitioning setup assumes hibernation is not in use. The maximum size of the swap partition is limited to 10 percent of the total size of the disk, and the installation program cannot create swap partitions larger than 1 TiB. To set up enough swap space to allow for hibernation, or if you want to set the swap partition size to more than 10 percent of the system's storage space, or more than 1 TiB, you must edit the partitioning layout manually.
Table 10.1. Recommended system swap space
Amount of RAM in the system | Recommended swap space | Recommended swap space if allowing for hibernation
Less than 2 GiB | 2 times the amount of RAM | 3 times the amount of RAM
2 GiB - 8 GiB | Equal to the amount of RAM | 2 times the amount of RAM
8 GiB - 64 GiB | 4 GiB to 0.5 times the amount of RAM | 1.5 times the amount of RAM
More than 64 GiB | Workload dependent (at least 4 GiB) | Hibernation not recommended
/boot/efi partition - recommended size of 200 MiB UEFI-based AMD64, Intel 64, and 64-bit ARM require a 200 MiB EFI system partition. The recommended minimum size is 200 MiB, the default size is 600 MiB, and the maximum size is 600 MiB. BIOS systems do not require an EFI system partition. At the border between each range, for example, a system with 2 GiB, 8 GiB, or 64 GiB of system RAM, discretion can be exercised with regard to chosen swap space and hibernation support.
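As a worked example of these guidelines, a system with 16 GiB of RAM falls into the 8 GiB - 64 GiB row: without hibernation, between 4 GiB and 8 GiB (0.5 times the RAM) of swap is within the recommendation, while allowing for hibernation calls for approximately 1.5 x 16 GiB = 24 GiB of swap.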
If your system resources allow for it, increasing the swap space can lead to better performance. Distributing swap space over multiple storage devices - particularly on systems with fast drives, controllers and interfaces - also improves swap space performance. Many systems have more partitions and volumes than the minimum required. Choose partitions based on your particular system needs. If you are unsure about configuring partitions, accept the automatic default partition layout provided by the installation program. Note Only assign storage capacity to those partitions you require immediately. You can allocate free space at any time, to meet needs as they occur. PReP boot partition - recommended size of 4 to 8 MiB When installing Red Hat Enterprise Linux on IBM Power System servers, the first partition of the disk should include a PReP boot partition. This contains the GRUB boot loader, which allows other IBM Power Systems servers to boot Red Hat Enterprise Linux. 10.4.2. Supported hardware storage It is important to understand how storage technologies are configured and how support for them may have changed between major versions of Red Hat Enterprise Linux. Hardware RAID Any RAID functions provided by the mainboard of your computer, or attached controller cards, need to be configured before you begin the installation process. Each active RAID array appears as one drive within Red Hat Enterprise Linux. Software RAID On systems with more than one disk, you can use the Red Hat Enterprise Linux installation program to operate several of the drives as a Linux software RAID array. With a software RAID array, RAID functions are controlled by the operating system rather than the dedicated hardware. Note When a pre-existing RAID array's member devices are all unpartitioned disks/drives, the installation program treats the array as a disk and there is no method to remove the array. USB Disks You can connect and configure external USB storage after installation. Most devices are recognized by the kernel, but some devices may not be recognized. If it is not a requirement to configure these disks during installation, disconnect them to avoid potential problems. NVDIMM devices To use a Non-Volatile Dual In-line Memory Module (NVDIMM) device as storage, the following conditions must be satisfied: The architecture of the system is Intel 64 or AMD64. The device is configured to sector mode. Anaconda can reconfigure NVDIMM devices to this mode. The device must be supported by the nd_pmem driver. Booting from an NVDIMM device is possible under the following additional conditions: The system uses UEFI. The device must be supported by firmware available on the system, or by a UEFI driver. The UEFI driver may be loaded from an option ROM of the device itself. The device must be made available under a namespace. To take advantage of the high performance of NVDIMM devices during booting, place the /boot and /boot/efi directories on the device. Note The Execute-in-place (XIP) feature of NVDIMM devices is not supported during booting and the kernel is loaded into conventional memory. Considerations for Intel BIOS RAID Sets Red Hat Enterprise Linux uses mdraid for installing on Intel BIOS RAID sets. These sets are automatically detected during the boot process and their device node paths can change across several booting processes. Replace device node paths (such as /dev/sda ) with file system labels or device UUIDs. You can find the file system labels and device UUIDs using the blkid command. 10.4.3. 
Starting manual partitioning You can partition the disks based on your requirements by using manual partitioning. Prerequisites The Installation Summary screen is open. All disks are available to the installation program. Procedure Select disks for installation: Click Installation Destination to open the Installation Destination window. Select the disks that you require for installation by clicking the corresponding icon. A selected disk has a check-mark displayed on it. Under Storage Configuration , select the Custom radio-button. Optional: To enable storage encryption with LUKS, select the Encrypt my data check box. Click Done . If you selected to encrypt the storage, a dialog box for entering a disk encryption passphrase opens. Type in the LUKS passphrase: Enter the passphrase in the two text fields. To switch keyboard layout, use the keyboard icon. Warning In the dialog box for entering the passphrase, you cannot change the keyboard layout. Select the English keyboard layout to enter the passphrase in the installation program. Click Save Passphrase . The Manual Partitioning window opens. Detected mount points are listed in the left-hand pane. The mount points are organized by detected operating system installations. As a result, some file systems may be displayed multiple times if a partition is shared among several installations. Select the mount points in the left pane; the options that can be customized are displayed in the right pane. Optional: If your system contains existing file systems, ensure that enough space is available for the installation. To remove any partitions, select them in the list and click the - button. The dialog has a check box that you can use to remove all other partitions used by the system to which the deleted partition belongs. Optional: If there are no existing partitions and you want to create a set of partitions as a starting point, select your preferred partitioning scheme from the left pane (default for Red Hat Enterprise Linux is LVM) and click the Click here to create them automatically link. Note A /boot partition, a / (root) volume, and a swap volume proportional to the size of the available storage are created and listed in the left pane. These are the file systems for a typical installation, but you can add additional file systems and mount points. Click Done to confirm any changes and return to the Installation Summary window. 10.4.4. Supported file systems When configuring manual partitioning, you can optimize performance, ensure compatibility, and effectively manage disk space by utilizing the various file systems and partition types available in Red Hat Enterprise Linux. xfs XFS is a highly scalable, high-performance file system that supports file systems up to 16 exabytes (approximately 16 million terabytes), files up to 8 exabytes (approximately 8 million terabytes), and directory structures containing tens of millions of entries. XFS also supports metadata journaling, which facilitates quicker crash recovery. The maximum supported size of a single XFS file system is 500 TB. XFS is the default file system on Red Hat Enterprise Linux. The XFS filesystem cannot be shrunk to get free space. ext4 The ext4 file system is based on the ext3 file system and features a number of improvements. These include support for larger file systems and larger files, faster and more efficient allocation of disk space, no limit on the number of subdirectories within a directory, faster file system checking, and more robust journaling. 
The maximum supported size of a single ext4 file system is 50 TB. ext3 The ext3 file system is based on the ext2 file system and has one main advantage - journaling. Using a journaling file system reduces the time spent recovering a file system after it terminates unexpectedly, as there is no need to check the file system for metadata consistency by running the fsck utility every time. ext2 An ext2 file system supports standard Unix file types, including regular files, directories, or symbolic links. It provides the ability to assign long file names, up to 255 characters. swap Swap partitions are used to support virtual memory. In other words, data is written to a swap partition when there is not enough RAM to store the data your system is processing. vfat The VFAT file system is a Linux file system that is compatible with Microsoft Windows long file names on the FAT file system. Note Support for the VFAT file system is not available for Linux system partitions. For example, / , /var , /usr , and so on. BIOS Boot A very small partition required for booting from a device with a GUID partition table (GPT) on BIOS systems and UEFI systems in BIOS compatibility mode. EFI System Partition A small partition required for booting a device with a GUID partition table (GPT) on a UEFI system. PReP This small boot partition is located on the first partition of the disk. The PReP boot partition contains the GRUB2 boot loader, which allows other IBM Power Systems servers to boot Red Hat Enterprise Linux. 10.4.5. Adding a mount point file system You can add multiple mount point file systems. You can use any of the file systems and partition types available, such as XFS, ext4, ext3, ext2, swap, VFAT, and specific partitions like BIOS Boot, EFI System Partition, and PReP to effectively configure your system's storage. Prerequisites You have planned your partitions. Ensure that you have not specified mount points at paths with symbolic links, such as /var/mail , /usr/tmp , /lib , /sbin , /lib64 , and /bin . The payload, including RPM packages, depends on creating symbolic links to specific directories. Procedure Click + to create a new mount point file system. The Add a New Mount Point dialog opens. Select one of the preset paths from the Mount Point drop-down menu or type your own; for example, select / for the root partition or /boot for the boot partition. Enter the size of the file system into the Desired Capacity field; for example, 2 GiB . If you do not specify a value in Desired Capacity , or if you specify a size bigger than available space, then all remaining free space is used. Click Add mount point to create the partition and return to the Manual Partitioning window. 10.4.6. Configuring storage for a mount point file system You can set the partitioning scheme for each mount point that was created manually. The available options are Standard Partition , LVM , and LVM Thin Provisioning . Btrfs support has been removed in Red Hat Enterprise Linux 9. Note The /boot partition is always located on a standard partition, regardless of the value selected. Procedure To change the devices that a single non-LVM mount point should be located on, select the required mount point from the left-hand pane. Under the Device(s) heading, click Modify . The Configure Mount Point dialog opens. Select one or more devices and click Select to confirm your selection and return to the Manual Partitioning window. Click Update Settings to apply the changes.
In the lower left-hand side of the Manual Partitioning window, click the storage device selected link to open the Selected Disks dialog and review disk information. Optional: Click the Rescan button (circular arrow button) to refresh all local disks and partitions; this is only required after performing advanced partition configuration outside the installation program. Clicking the Rescan Disks button resets all configuration changes made in the installation program. 10.4.7. Customizing a mount point file system You can customize a partition or volume if you want to set specific settings. If /usr or /var is partitioned separately from the rest of the root volume, the boot process becomes much more complex as these directories contain critical components. In some situations, such as when these directories are placed on an iSCSI drive or an FCoE location, the system is unable to boot, or hangs with a Device is busy error when powering off or rebooting. This limitation only applies to /usr or /var , not to directories below them. For example, a separate partition for /var/www works successfully. Procedure From the left pane, select the mount point. Figure 10.1. Customizing Partitions From the right-hand pane, you can customize the following options: Enter the file system mount point into the Mount Point field. For example, if a file system is the root file system, enter / ; enter /boot for the /boot file system, and so on. For a swap file system, do not set the mount point as setting the file system type to swap is sufficient. Enter the size of the file system in the Desired Capacity field. You can use common size units such as KiB or GiB. The default is MiB if you do not set any other unit. Select the device type that you require from the drop-down Device Type menu: Standard Partition , LVM , or LVM Thin Provisioning . Note RAID is available only if two or more disks are selected for partitioning. If you choose RAID , you can also set the RAID Level . Similarly, if you select LVM , you can specify the Volume Group . Select the Encrypt check box to encrypt the partition or volume. You must set a password later in the installation program. The LUKS Version drop-down menu is displayed. Select the LUKS version that you require from the drop-down menu. Select the appropriate file system type for this partition or volume from the File system drop-down menu. Note Support for the VFAT file system is not available for Linux system partitions. For example, / , /var , /usr , and so on. Select the Reformat check box to format an existing partition, or clear the Reformat check box to retain your data. The newly-created partitions and volumes must be reformatted, and the check box cannot be cleared. Type a label for the partition in the Label field. Use labels to easily recognize and address individual partitions. Type a name in the Name field. The standard partitions are named automatically when they are created and you cannot edit the names of standard partitions. For example, you cannot edit the /boot name sda1 . Click Update Settings to apply your changes and if required, select another partition to customize. Changes are not applied until you click Begin Installation from the Installation Summary window. Optional: Click Reset All to discard your partition changes. Click Done when you have created and customized all file systems and mount points. If you choose to encrypt a file system, you are prompted to create a passphrase. 
A Summary of Changes dialog box opens, displaying a summary of all storage actions for the installation program. Click Accept Changes to apply the changes and return to the Installation Summary window. 10.4.8. Preserving the /home directory In a Red Hat Enterprise Linux 9 graphical installation, you can preserve the /home directory that was used on your RHEL 8 system. Preserving /home is only possible if the /home directory is located on a separate /home partition on your RHEL 8 system. Preserving the /home directory, which includes various configuration settings, makes it possible for the GNOME Shell environment on the new Red Hat Enterprise Linux 9 system to be set up in the same way as it was on your RHEL 8 system. Note that this applies only to users on Red Hat Enterprise Linux 9 with the same user name and ID as on the RHEL 8 system. Prerequisites You have RHEL 8 installed on your computer. The /home directory is located on a separate /home partition on your RHEL 8 system. The Red Hat Enterprise Linux 9 Installation Summary window is open. Procedure Click Installation Destination to open the Installation Destination window. Under Storage Configuration , select the Custom radio button. Click Done ; the Manual Partitioning window opens. Choose the /home partition, fill in /home under Mount Point: and clear the Reformat check box. Figure 10.2. Ensuring that /home is not formatted Optional: You can also customize various aspects of the /home partition required for your Red Hat Enterprise Linux 9 system as described in Customizing a mount point file system . However, to preserve /home from your RHEL 8 system, it is necessary to clear the Reformat check box. After you have customized all partitions according to your requirements, click Done . The Summary of changes dialog box opens. Verify that the Summary of changes dialog box does not show any change for /home . This means that the /home partition is preserved. Click Accept Changes to apply the changes, and return to the Installation Summary window. 10.4.9. Creating a software RAID during the installation Redundant Arrays of Independent Disks (RAID) devices are constructed from multiple storage devices that are arranged to provide increased performance and, in some configurations, greater fault tolerance. A RAID device is created in one step and disks are added or removed as necessary. You can configure one RAID partition for each physical disk in your system, so that the number of disks available to the installation program determines the levels of RAID device available. For example, if your system has two disks, you cannot create a RAID 10 device, as it requires a minimum of three separate disks. To optimize your system's storage performance and reliability, RHEL supports software RAID 0 , RAID 1 , RAID 4 , RAID 5 , RAID 6 , and RAID 10 types with LVM and LVM Thin Provisioning to set up storage on the installed system. Note On 64-bit IBM Z, the storage subsystem uses RAID transparently. You do not have to configure software RAID manually. Prerequisites You have selected two or more disks for installation before RAID configuration options are visible. Depending on the RAID type you want to create, at least two disks are required. You have created a mount point. By configuring a mount point, you can configure the RAID device. You have selected the Custom radio button on the Installation Destination window. Procedure From the left pane of the Manual Partitioning window, select the required partition.
Under the Device(s) section, click Modify . The Configure Mount Point dialog box opens. Select the disks that you want to include in the RAID device and click Select . Click the Device Type drop-down menu and select RAID . Click the File System drop-down menu and select your preferred file system type. Click the RAID Level drop-down menu and select your preferred level of RAID. Click Update Settings to save your changes. Click Done to apply the settings and return to the Installation Summary window. Additional resources Creating a RAID LV with DM integrity Managing RAID 10.4.10. Creating an LVM logical volume Logical Volume Manager (LVM) presents a simple logical view of underlying physical storage space, such as disks or LUNs. Partitions on physical storage are represented as physical volumes that you can group together into volume groups. You can divide each volume group into multiple logical volumes, each of which is analogous to a standard disk partition. Therefore, LVM logical volumes function as partitions that can span multiple physical disks. Important LVM configuration is available only in the graphical installation program. During text-mode installation, LVM configuration is not available. To create an LVM configuration, press Ctrl + Alt + F2 to use a shell prompt in a different virtual console. You can run vgcreate and lvm commands in this shell. To return to the text-mode installation, press Ctrl + Alt + F1 . Procedure From the Manual Partitioning window, create a new mount point by using any of the following options: Use the Click here to create them automatically option or click the + button. Select Mount Point from the drop-down list or enter it manually. Enter the size of the file system into the Desired Capacity field; for example, 70 GiB for / , 1 GiB for /boot . Note: Skip this step to use the existing mount point. Select the mount point. Select LVM in the drop-down menu. The Volume Group drop-down menu is displayed with the newly-created volume group name. Note You cannot specify the size of the volume group's physical extents in the configuration dialog. The size is always set to the default value of 4 MiB. If you want to create a volume group with different physical extents, you must create it manually by switching to an interactive shell and using the vgcreate command, or use a Kickstart file with the volgroup --pesize= size command. For more information about Kickstart, see the Automatically installing RHEL . Click Done to return to the Installation Summary window. Additional resources Configuring and managing logical volumes 10.4.11. Configuring an LVM logical volume You can configure a newly-created LVM logical volume based on your requirements. Warning Placing the /boot partition on an LVM volume is not supported. Procedure From the Manual Partitioning window, create a mount point by using any of the following options: Use the Click here to create them automatically option or click the + button. Select Mount Point from the drop-down list or enter it manually. Enter the size of the file system into the Desired Capacity field; for example, 70 GiB for / , 1 GiB for /boot . Note: Skip this step to use the existing mount point. Select the mount point. Click the Device Type drop-down menu and select LVM . The Volume Group drop-down menu is displayed with the newly-created volume group name. Click Modify to configure the newly-created volume group. The Configure Volume Group dialog box opens.
Note You cannot specify the size of the volume group's physical extents in the configuration dialog. The size is always set to the default value of 4 MiB. If you want to create a volume group with different physical extents, you must create it manually by switching to an interactive shell and using the vgcreate command, or use a Kickstart file with the volgroup --pesize= size command. For more information, see the Automatically installing RHEL document. Optional: From the RAID Level drop-down menu, select the RAID level that you require. The available RAID levels are the same as with actual RAID devices. Select the Encrypt check box to mark the volume group for encryption. From the Size policy drop-down menu, select one of the following size policies for the volume group: Automatic The size of the volume group is set automatically so that it is large enough to contain the configured logical volumes. This is optimal if you do not need free space within the volume group. As large as possible The volume group is created with maximum size, regardless of the size of the configured logical volumes it contains. This is optimal if you plan to keep most of your data on LVM and later need to increase the size of some existing logical volumes, or if you need to create additional logical volumes within this group. Fixed You can set an exact size of the volume group. Any configured logical volumes must then fit within this fixed size. This is useful if you know exactly how large you need the volume group to be. Click Save to apply the settings and return to the Manual Partitioning window. Click Update Settings to save your changes. Click Done to return to the Installation Summary window. 10.4.12. Advice on partitions There is no best way to partition every system; the optimal setup depends on how you plan to use the system being installed. However, the following tips may help you find the optimal layout for your needs: Create partitions that have specific requirements first, for example, if a particular partition must be on a specific disk. Consider encrypting any partitions and volumes which might contain sensitive data. Encryption prevents unauthorized people from accessing the data on the partitions, even if they have access to the physical storage device. In most cases, you should at least encrypt the /home partition, which contains user data. In some cases, creating separate mount points for directories other than / , /boot , and /home may be useful; for example, on a server running a MySQL database, having a separate mount point for /var/lib/mysql allows you to preserve the database during a re-installation without having to restore it from backup afterward. However, having unnecessary separate mount points will make storage administration more difficult. Some special restrictions apply to certain directories with regard to which partitioning layouts they can be placed on. Notably, the /boot directory must always be on a physical partition (not on an LVM volume). If you are new to Linux, consider reviewing the Linux Filesystem Hierarchy Standard for information about various system directories and their contents. Each kernel requires approximately: 60MiB (initrd 34MiB, 11MiB vmlinuz, and 5MiB System.map) For rescue mode: 100MiB (initrd 76MiB, 11MiB vmlinuz, and 5MiB System.map) When kdump is enabled on the system, it takes approximately another 40MiB (another initrd with 33MiB) The default partition size of 1 GiB for /boot should suffice for most common use cases.
However, increase the size of this partition if you are planning on retaining multiple kernel releases or errata kernels. The /var directory holds content for a number of applications, including the Apache web server, and is used by the DNF package manager to temporarily store downloaded package updates. Make sure that the partition or volume containing /var has at least 5 GiB. The /usr directory holds the majority of software on a typical Red Hat Enterprise Linux installation. The partition or volume containing this directory should therefore be at least 5 GiB for minimal installations, and at least 10 GiB for installations with a graphical environment. If /usr or /var is partitioned separately from the rest of the root volume, the boot process becomes much more complex because these directories contain boot-critical components. In some situations, such as when these directories are placed on an iSCSI drive or an FCoE location, the system may either be unable to boot, or it may hang with a Device is busy error when powering off or rebooting. This limitation only applies to /usr or /var , not to directories under them. For example, a separate partition for /var/www works without issues. Important Some security policies require the separation of /usr and /var , even though it makes administration more complex. Consider leaving a portion of the space in an LVM volume group unallocated. This unallocated space gives you flexibility if your space requirements change but you do not wish to remove data from other volumes. You can also select the LVM Thin Provisioning device type for the partition to have the unused space handled automatically by the volume. The size of an XFS file system cannot be reduced - if you need to make a partition or volume with this file system smaller, you must back up your data, destroy the file system, and create a new, smaller one in its place. Therefore, if you plan to alter your partitioning layout later, you should use the ext4 file system instead. Use Logical Volume Manager (LVM) if you anticipate expanding your storage by adding more disks or expanding virtual machine disks after the installation. With LVM, you can create physical volumes on the new drives, and then assign them to any volume group and logical volume as you see fit - for example, you can easily expand your system's /home (or any other directory residing on a logical volume). Creating a BIOS Boot partition or an EFI System Partition may be necessary, depending on your system's firmware, boot drive size, and boot drive disk label. Note that you cannot create a BIOS Boot or EFI System Partition in graphical installation if your system does not require one - in that case, they are hidden from the menu. Additional resources How to use dm-crypt on IBM Z, LinuxONE and with the PAES cipher 10.5. Selecting the base environment and additional software Use the Software Selection window to select the software packages that you require. The packages are organized by Base Environment and Additional Software. Base Environment contains predefined packages. You can select only one base environment, for example, Server with GUI (default), Server, Minimal Install, Workstation, Custom operating system, Virtualization Host. The availability is dependent on the installation ISO image that is used as the installation source. Additional Software for Selected Environment contains additional software packages for the base environment. You can select multiple software packages. 
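If you need package-level control beyond these environments and add-ons, a Kickstart installation can list them explicitly in its %packages section, as noted later in this section. A minimal sketch, assuming that the minimal-environment environment ID and the standard group are present on your installation media:
%packages
@^minimal-environment
@standard
%end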
Use a predefined environment and additional software to customize your system. However, in a standard installation, you cannot select individual packages to install. To view the packages contained in a specific environment, see the repository /repodata/*-comps- repository . architecture .xml file on your installation source media (DVD, CD, USB). The XML file contains details of the packages installed as part of a base environment. Available environments are marked by the <environment> tag, and additional software packages are marked by the <group> tag. If you are unsure about which packages to install, select the Minimal Install base environment. Minimal install installs a basic version of Red Hat Enterprise Linux with only a minimal amount of additional software. After the system finishes installing and you log in for the first time, you can use the DNF package manager to install additional software. For more information about the DNF package manager, see the Configuring basic system settings document. Note Use the dnf group list command from any RHEL 9 system to view the list of packages being installed on the system as a part of software selection. For more information, see Configuring basic system settings . If you need to control which packages are installed, you can use a Kickstart file and define the packages in the %packages section. By default, RHEL 9 does not install the TuneD package. You can manually install the TuneD package using the dnf install tuned command. For more information, see the Automatically installing RHEL document. Prerequisites You have configured the installation source. The installation program has downloaded package metadata. The Installation Summary window is open. Procedure From the Installation Summary window, click Software Selection . The Software Selection window opens. From the Base Environment pane, select a base environment. You can select only one base environment, for example, Server with GUI (default), Server, Minimal Install, Workstation, Custom Operating System, Virtualization Host. By default, the Server with GUI base environment is selected. Figure 10.3. Red Hat Enterprise Linux Software Selection Optional: For installations on ARM-based systems, select the desired Page size from Kernel Options . By default, it selects Kernel with a 4k page size. Warning If you want to use the Kernel with 64k page size, ensure you select Minimal Install under Base Environment to use this option. You can install additional software after you log in to the system for the first time after installation by using the DNF package manager. From the Additional Software for Selected Environment pane, select one or more options. Click Done to apply the settings and return to graphical installations. Additional resources The 4k and 64k page size Kernel Options 10.6. Optional: Configuring the network and host name Use the Network and Host name window to configure network interfaces. Options that you select here are available both during the installation for tasks such as downloading packages from a remote location, and on the installed system. Follow the steps in this procedure to configure your network and host name. Procedure From the Installation Summary window, click Network and Host Name . From the list in the left-hand pane, select an interface. The details are displayed in the right-hand pane. Toggle the ON/OFF switch to enable or disable the selected interface. You cannot add or remove interfaces manually.
Click + to add a virtual network interface, which can be either: Team (deprecated), Bond, Bridge, or VLAN. Click - to remove a virtual interface. Click Configure to change settings such as IP addresses, DNS servers, or routing configuration for an existing interface (both virtual and physical). Type a host name for your system in the Host Name field. The host name can either be a fully qualified domain name (FQDN) in the format hostname.domainname , or a short host name without the domain. Many networks have a Dynamic Host Configuration Protocol (DHCP) service that automatically supplies connected systems with a domain name. To allow the DHCP service to assign the domain name to this system, specify only the short host name. Host names can only contain alphanumeric characters and - or . . Host name should be equal to or less than 64 characters. Host names cannot start or end with - and . . To be compliant with DNS, each part of a FQDN should be equal to or less than 63 characters and the FQDN total length, including dots, should not exceed 255 characters. The value localhost means that no specific static host name for the target system is configured, and the actual host name of the installed system is configured during the processing of the network configuration, for example, by NetworkManager using DHCP or DNS. When using static IP and host name configuration, it depends on the planned system use case whether to use a short name or FQDN. Red Hat Identity Management configures FQDN during provisioning but some 3rd party software products may require a short name. In either case, to ensure availability of both forms in all situations, add an entry for the host in /etc/hosts in the format IP FQDN short-alias . Click Apply to apply the host name to the installer environment. Alternatively, in the Network and Hostname window, you can choose the Wireless option. Click Select network in the right-hand pane to select your wifi connection, enter the password if required, and click Done . Additional resources For more information about network device naming standards, see Configuring and managing networking . 10.6.1. Adding a virtual network interface You can add a virtual network interface. Procedure From the Network & Host name window, click the + button to add a virtual network interface. The Add a device dialog opens. Select one of the four available types of virtual interfaces: Bond : NIC ( Network Interface Controller ) Bonding, a method to bind multiple physical network interfaces together into a single bonded channel. Bridge : Represents NIC Bridging, a method to connect multiple separate networks into one aggregate network. Team : NIC Teaming, a new implementation to aggregate links, designed to provide a small kernel driver to implement the fast handling of packet flows, and various applications to do everything else in user space. NIC teaming is deprecated in Red Hat Enterprise Linux 9. Consider using the network bonding driver as an alternative. For details, see Configuring a network bond . Vlan ( Virtual LAN ): A method to create multiple distinct broadcast domains which are mutually isolated. Select the interface type and click Add . An editing interface dialog box opens, allowing you to edit any available settings for your chosen interface type. For more information, see Editing network interface . Click Save to confirm the virtual interface settings and return to the Network & Host name window. 
Optional: To change the settings of a virtual interface, select the interface and click Configure . 10.6.2. Editing network interface configuration You can edit the configuration of a typical wired connection used during installation. Configuration of other types of networks is broadly similar, although the specific configuration parameters might be different. Note On 64-bit IBM Z, you cannot add a new connection as the network subchannels need to be grouped and set online beforehand, and this is currently done only in the booting phase. Procedure To configure a network connection manually, select the interface from the Network and Host name window and click Configure . An editing dialog specific to the selected interface opens. The options present depend on the connection type - the available options are slightly different depending on whether the connection type is a physical interface (wired or wireless network interface controller) or a virtual interface (Bond, Bridge, Team (deprecated), or Vlan) that was previously configured in Adding a virtual interface . 10.6.3. Enabling or Disabling the Interface Connection You can enable or disable specific interface connections. Procedure Click the General tab. Select the Connect automatically with priority check box to enable connection by default. Keep the default priority setting at 0 . Optional: Allow or prevent all users on the system from connecting to this network by using the All users may connect to this network option. If you disable this option, only root will be able to connect to this network. Important When enabled on a wired connection, the system automatically connects during startup or reboot. On a wireless connection, the interface attempts to connect to any known wireless networks in range. For further information about NetworkManager, including the nm-connection-editor tool, see the Configuring and managing networking document. Click Save to apply the changes and return to the Network and Host name window. It is not possible to allow only a specific user other than root to use this interface, as no other users are created at this point during the installation. If you need a connection for a different user, you must configure it after the installation. 10.6.4. Setting up Static IPv4 or IPv6 Settings By default, both IPv4 and IPv6 are set to automatic configuration depending on current network settings. This means that addresses such as the local IP address, DNS address, and other settings are detected automatically when the interface connects to a network. In many cases, this is sufficient, but you can also provide static configuration in the IPv4 Settings and IPv6 Settings tabs. Complete the following steps to configure IPv4 or IPv6 settings: Procedure To set static network configuration, navigate to one of the IPv Settings tabs and from the Method drop-down menu, select a method other than Automatic , for example, Manual . The Addresses pane is enabled. Optional: In the IPv6 Settings tab, you can also set the method to Ignore to disable IPv6 on this interface. Click Add and enter your address settings. Type the IP addresses in the Additional DNS servers field; it accepts one or more IP addresses of DNS servers, for example, 10.0.0.1,10.0.0.8 . Select the Require IPv X addressing for this connection to complete check box. Selecting this option in the IPv4 Settings or IPv6 Settings tabs allows this connection only if the IPv4 or IPv6 configuration was successful.
If this option remains disabled for both IPv4 and IPv6, the interface is able to connect if configuration succeeds on either IP protocol. Click Save to apply the changes and return to the Network & Host name window. 10.6.5. Configuring Routes You can control the access of specific connections by configuring routes. Procedure In the IPv4 Settings and IPv6 Settings tabs, click Routes to configure routing settings for a specific IP protocol on an interface. An editing routes dialog specific to the interface opens. Click Add to add a route. Select the Ignore automatically obtained routes check box to configure at least one static route and to disable all routes not specifically configured. Select the Use this connection only for resources on its network check box to prevent the connection from becoming the default route. This option can be selected even if you did not configure any static routes. This route is used only to access certain resources, such as intranet pages that require a local or VPN connection. Another (default) route is used for publicly available resources. Unlike the additional routes configured, this setting is transferred to the installed system. This option is useful only when you configure more than one interface. Click OK to save your settings and return to the editing routes dialog that is specific to the interface. Click Save to apply the settings and return to the Network and Host Name window. 10.7. Optional: Configuring the keyboard layout You can configure the keyboard layout from the Installation Summary screen. Important If you use a layout that cannot accept Latin characters, such as Russian , add the English (United States) layout and configure a keyboard combination to switch between the two layouts. If you select a layout that does not have Latin characters, you might be unable to enter a valid root password and user credentials later in the installation process. This might prevent you from completing the installation. Procedure From the Installation Summary window, click Keyboard . Click + to open the Add a Keyboard Layout window to change to a different layout. Select a layout by browsing the list or use the Search field. Select the required layout and click Add . The new layout appears under the default layout. Click Options to optionally configure a keyboard switch that you can use to cycle between available layouts. The Layout Switching Options window opens. To configure key combinations for switching, select one or more key combinations and click OK to confirm your selection. Optional: When you select a layout, click the Keyboard button to open a new dialog box displaying a visual representation of the selected layout. Click Done to apply the settings and return to graphical installations. 10.8. Optional: Configuring the language support You can change the language settings from the Installation Summary screen. Procedure From the Installation Summary window, click Language Support . The Language Support window opens. The left pane lists the available language groups. If at least one language from a group is configured, a check mark is displayed and the supported language is highlighted. From the left pane, click a group to select additional languages, and from the right pane, select regional options. Repeat this process for all the languages that you want to configure. Optional: Search the language group by typing in the text box, if required. Click Done to apply the settings and return to graphical installations. 10.9. 
Optional: Configuring the date and time-related settings You can configure the date and time-related settings from the Installation Summary screen. Procedure From the Installation Summary window, click Time & Date . The Time & Date window opens. The list of cities and regions comes from the public domain Time Zone Database ( tzdata ) that is maintained by the Internet Assigned Numbers Authority (IANA). Red Hat cannot add cities or regions to this database. You can find more information at the IANA official website . From the Region drop-down menu, select a region. Select Etc as your region to configure a time zone relative to Greenwich Mean Time (GMT) without setting your location to a specific region. From the City drop-down menu, select the city, or the city closest to your location in the same time zone. Toggle the Network Time switch to enable or disable network time synchronization using the Network Time Protocol (NTP). Enabling the Network Time switch keeps your system time correct as long as the system can access the internet. By default, one NTP pool is configured. Optional: Use the gear wheel button next to the Network Time switch to add a new NTP server, or to disable or remove the default options. Click Done to apply the settings and return to graphical installations. Optional: Disable the network time synchronization to activate controls at the bottom of the page to set time and date manually. 10.10. Optional: Subscribing the system and activating Red Hat Insights Red Hat Insights is a Software-as-a-Service (SaaS) offering that provides continuous, in-depth analysis of registered Red Hat-based systems to proactively identify threats to security, performance, and stability across physical, virtual and cloud environments, and container deployments. By registering your RHEL system in Red Hat Insights, you gain access to predictive analytics, security alerts, and performance optimization tools, enabling you to maintain a secure, efficient, and stable IT environment. You can register with Red Hat by using either your Red Hat account or your activation key details. You can connect your system to Red Hat Insights by using the Connect to Red Hat option. Procedure From the Installation Summary screen, under Software , click Connect to Red Hat . Select Account or Activation Key . If you select Account , enter your Red Hat Customer Portal username and password details. If you select Activation Key , enter your organization ID and activation key. You can enter more than one activation key, separated by a comma, as long as the activation keys are registered to your subscription. Select the Set System Purpose check box. If the account has Simple content access mode enabled, setting the system purpose values is still important for accurate reporting of consumption in the subscription services. If your account is in the entitlement mode, system purpose enables the entitlement server to determine and automatically attach the most appropriate subscription to satisfy the intended use of the Red Hat Enterprise Linux 9 system. Select the required Role , SLA , and Usage from the corresponding drop-down lists. The Connect to Red Hat Insights check box is enabled by default. Clear the check box if you do not want to connect to Red Hat Insights. Optional: Expand Options . Select the Use HTTP proxy check box if your network environment only allows external Internet access or access to content servers through an HTTP proxy. Clear the Use HTTP proxy check box if an HTTP proxy is not used.
If you are running Satellite Server or performing internal testing, select the Satellite URL and Custom base URL check boxes and enter the required details. Important RHEL 9 is supported only with Satellite 6.11 or later. Check the version prior to registering the system. The Satellite URL field does not require the HTTP protocol, for example nameofhost.com . However, the Custom base URL field requires the HTTP protocol. To change the Custom base URL after registration, you must unregister, provide the new details, and then re-register. Click Register to register the system. When the system is successfully registered and subscriptions are attached, the Connect to Red Hat window displays the attached subscription details. Depending on the amount of subscriptions, the registration and attachment process might take up to a minute to complete. Click Done to return to the Installation Summary window. A Registered message is displayed under Connect to Red Hat . Additional resources About Red Hat Insights 10.11. Optional: Using network-based repositories for the installation You can configure an installation source from either auto-detected installation media, Red Hat CDN, or the network. When the Installation Summary window first opens, the installation program attempts to configure an installation source based on the type of media that was used to boot the system. The full Red Hat Enterprise Linux Server DVD configures the source as local media. Prerequisites You have downloaded the full installation DVD ISO or minimal installation Boot ISO image from the Product Downloads page. You have created bootable installation media. The Installation Summary window is open. Procedure From the Installation Summary window, click Installation Source . The Installation Source window opens. Review the Auto-detected installation media section to verify the details. This option is selected by default if you started the installation program from media containing an installation source, for example, a DVD. Click Verify to check the media integrity. Review the Additional repositories section and note that the AppStream check box is selected by default. The BaseOS and AppStream repositories are installed as part of the full installation image. Do not disable the AppStream repository check box if you want a full Red Hat Enterprise Linux 9 installation. Optional: Select the Red Hat CDN option to register your system, attach RHEL subscriptions, and install RHEL from the Red Hat Content Delivery Network (CDN). Optional: Select the On the network option to download and install packages from a network location instead of local media. This option is available only when a network connection is active. See Configuring network and host name options for information about how to configure network connections in the GUI. Note If you do not want to download and install additional repositories from a network location, proceed to Configuring software selection . Select the On the network drop-down menu to specify the protocol for downloading packages. This setting depends on the server that you want to use. Type the server address (without the protocol) into the address field. If you choose NFS, a second input field opens where you can specify custom NFS mount options . This field accepts options listed in the nfs(5) man page on your system. When selecting an NFS installation source, specify the address with a colon ( : ) character separating the host name from the path. For example, server.example.com:/path/to/directory . 
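The same network source can also be supplied as a boot option with inst.repo= instead of being entered in this window; a hedged sketch reusing the example server above:
inst.repo=nfs:server.example.com:/path/to/directory
inst.repo=https://server.example.com/path/to/repository/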
The following steps are optional and are only required if you use a proxy for network access. Click Proxy setup to configure a proxy for an HTTP or HTTPS source. Select the Enable HTTP proxy check box and type the URL into the Proxy Host field. Select the Use Authentication check box if the proxy server requires authentication. Type in your user name and password. Click OK to finish the configuration and exit the Proxy Setup... dialog box. Note If your HTTP or HTTPS URL refers to a repository mirror, select the required option from the URL type drop-down list. All environments and additional software packages are available for selection when you finish configuring the sources. Click + to add a repository. Click - to delete a repository. Click the arrow icon to revert the current entries to the settings that were in place when you opened the Installation Source window. To activate or deactivate a repository, click the check box in the Enabled column for each entry in the list. You can name and configure your additional repository in the same way as the primary repository on the network. Click Done to apply the settings and return to the Installation Summary window. 10.12. Optional: Configuring Kdump kernel crash-dumping mechanism Kdump is a kernel crash-dumping mechanism. In the event of a system crash, Kdump captures the contents of the system memory at the moment of failure. This captured memory can be analyzed to find the cause of the crash. If Kdump is enabled, it must have a small portion of the system's memory (RAM) reserved for itself. This reserved memory is not accessible to the main kernel. Procedure From the Installation Summary window, click Kdump . The Kdump window opens. Select the Enable kdump check box. Select either the Automatic or Manual memory reservation setting. If you select Manual , enter the amount of memory (in megabytes) that you want to reserve in the Memory to be reserved field using the + and - buttons. The Usable System Memory readout below the reservation input field shows how much memory is accessible to your main system after reserving the amount of RAM that you select. Click Done to apply the settings and return to the graphical installation. The amount of memory that you reserve is determined by your system architecture (AMD64 and Intel 64 have different requirements than IBM Power) as well as the total amount of system memory. In most cases, automatic reservation is satisfactory. Additional settings, such as the location where kernel crash dumps will be saved, can only be configured after the installation using either the system-config-kdump graphical interface, or manually in the /etc/kdump.conf configuration file. 10.13. Optional: Selecting a security profile You can apply a security policy during your Red Hat Enterprise Linux 9 installation and configure it for use on your system before the first boot. 10.13.1. About security policy Red Hat Enterprise Linux includes the OpenSCAP suite to enable automated configuration of the system in alignment with a particular security policy. The policy is implemented using the Security Content Automation Protocol (SCAP) standard. The packages are available in the AppStream repository. However, by default, the installation and post-installation process does not enforce any policies and therefore does not involve any checks unless specifically configured. Applying a security policy is not a mandatory feature of the installation program.
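If you skip the policy at installation time, the same tooling can be added later from the AppStream repository. The following is a minimal post-install sketch, assuming a registered RHEL 9 system and using the package names referenced later in this section:
# Install the scanner and the SCAP Security Guide content on a running system.
dnf install -y openscap-scanner scap-security-guide
# Confirm that both packages are present.
rpm -q openscap-scanner scap-security-guide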
If you apply a security policy to the system, it is installed using restrictions defined in the profile that you selected. The openscap-scanner and scap-security-guide packages are added to your package selection, providing a preinstalled tool for compliance and vulnerability scanning. When you select a security policy, the Anaconda GUI installer requires the configuration to adhere to the policy's requirements. There might be conflicting package selections, as well as separate partitions defined. Only after all the requirements are met can you start the installation. At the end of the installation process, the selected OpenSCAP security policy automatically hardens the system and scans it to verify compliance, saving the scan results to the /root/openscap_data directory on the installed system. By default, the installer uses the content of the scap-security-guide package bundled in the installation image. You can also load external content from an HTTP, HTTPS, or FTP server. 10.13.2. Configuring a security profile You can configure a security policy from the Installation Summary window. Prerequisite The Installation Summary window is open. Procedure From the Installation Summary window, click Security Profile . The Security Profile window opens. To enable security policies on the system, toggle the Apply security policy switch to ON . Select one of the profiles listed in the top pane. Click Select profile . Profile changes that you must apply before installation appear in the bottom pane. Click Change content to use a custom profile. A separate window opens, allowing you to enter a URL for valid security content. Click Fetch to retrieve the URL. You can load custom profiles from an HTTP , HTTPS , or FTP server. Use the full address of the content including the protocol, such as http:// . A network connection must be active before you can load a custom profile. The installation program detects the content type automatically. Click Use SCAP Security Guide to return to the Security Profile window. Click Done to apply the settings and return to the Installation Summary window. 10.13.3. Profiles not compatible with Server with GUI Certain security profiles provided as part of the SCAP Security Guide are not compatible with the extended package set included in the Server with GUI base environment. Therefore, do not select Server with GUI when installing systems compliant with one of the following profiles: Table 10.2. Profiles not compatible with Server with GUI Profile name Profile ID Justification Notes [DRAFT] CIS Red Hat Enterprise Linux 9 Benchmark for Level 2 - Server xccdf_org.ssgproject.content_profile_cis Packages xorg-x11-server-Xorg , xorg-x11-server-common , xorg-x11-server-utils , and xorg-x11-server-Xwayland are part of the Server with GUI package set, but the policy requires their removal. [DRAFT] CIS Red Hat Enterprise Linux 9 Benchmark for Level 1 - Server xccdf_org.ssgproject.content_profile_cis_server_l1 Packages xorg-x11-server-Xorg , xorg-x11-server-common , xorg-x11-server-utils , and xorg-x11-server-Xwayland are part of the Server with GUI package set, but the policy requires their removal. DISA STIG for Red Hat Enterprise Linux 9 xccdf_org.ssgproject.content_profile_stig Packages xorg-x11-server-Xorg , xorg-x11-server-common , xorg-x11-server-utils , and xorg-x11-server-Xwayland are part of the Server with GUI package set, but the policy requires their removal.
To install a RHEL system as a Server with GUI aligned with DISA STIG, you can use the DISA STIG with GUI profile ( BZ#1648162 ). 10.13.4. Deploying baseline-compliant RHEL systems using Kickstart You can deploy RHEL systems that are aligned with a specific baseline. This example uses Protection Profile for General Purpose Operating System (OSPP). Prerequisites The scap-security-guide package is installed on your RHEL 9 system. Procedure Open the /usr/share/scap-security-guide/kickstart/ssg-rhel9-ospp-ks.cfg Kickstart file in an editor of your choice. Update the partitioning scheme to fit your configuration requirements. For OSPP compliance, the separate partitions for /boot , /home , /var , /tmp , /var/log , /var/tmp , and /var/log/audit must be preserved, and you can only change the size of the partitions. Start a Kickstart installation as described in Performing an automated installation using Kickstart . Important Passwords in Kickstart files are not checked for OSPP requirements. Verification To check the current status of the system after installation is complete, reboot the system and start a new scan: Additional resources OSCAP Anaconda Add-on Kickstart commands and options reference: %addon org_fedora_oscap 10.13.5. Additional resources scap-security-guide(8) - The manual page for the scap-security-guide project contains information about SCAP security profiles, including examples of how to use the provided benchmarks with the OpenSCAP utility. Red Hat Enterprise Linux security compliance information is available in the Security hardening document.
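Before you select a profile in the installer or in a Kickstart file, you can list the profiles that the bundled data stream provides. The data stream path below is the same one used by the verification scan in this section:
# List the available profiles in the RHEL 9 SCAP source data stream.
oscap info /usr/share/xml/scap/ssg/content/ssg-rhel9-ds.xml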
"oscap xccdf eval --profile ospp --report eval_postinstall_report.html /usr/share/xml/scap/ssg/content/ssg-rhel9-ds.xml"
]
| https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/9/html/interactively_installing_rhel_from_installation_media/customizing-the-system-in-the-installer_rhel-installer |
Chapter 8. Replacing DistributedComputeHCI nodes | Chapter 8. Replacing DistributedComputeHCI nodes During hardware maintenance, you might need to scale down, scale up, or replace a DistributedComputeHCI node at an edge site. To replace a DistributedComputeHCI node, remove services from the node you are replacing, scale the number of nodes down, and then follow the procedures for scaling those nodes back up. 8.1. Removing the Compute (nova) service Disable the nova-compute service and delete the relevant network agent. Procedure Delete the node that you are removing from the stack: openstack overcloud node delete --stack <dcn2> <computehci2-1> Delete the network agent on the node you are removing: (central) [stack@site-undercloud-0 ~]$ openstack network agent list | grep dcn2 ... | 17726d1a-e9d1-4e57-b40d-e742be5d073c | Open vSwitch agent | dcn2-computehci2-1.redhat.local | None | XXX | UP | neutron-openvswitch-agent | ... (central) [stack@site-undercloud-0 ~]$ openstack network agent delete 17726d1a-e9d1-4e57-b40d-e742be5d073c 8.2. Removing Red Hat Ceph Storage services To remove the Red Hat Ceph Storage services mon , mgr , and osd , you must disable and remove ceph-osd from the cluster services on the node you are removing, and then stop and disable the mon , mgr , and osd services. Procedure Use SSH to connect to the DistributedComputeHCI node you want to remove and log in as the root user. $ ssh heat-admin@<dcn-computehci-node> $ sudo su - # Identify the OSDs associated with the DistributedComputeHCI node you are removing: [root@dcn2-computehci2-1 ~]# podman exec ceph-mon-dcn2-computehci2-1 ceph osd tree -c /etc/ceph/dcn2.conf ... -3 0.24399 host dcn2-computehci2-1 1 hdd 0.04880 osd.1 up 1.00000 1.00000 7 hdd 0.04880 osd.7 up 1.00000 1.00000 11 hdd 0.04880 osd.11 up 1.00000 1.00000 15 hdd 0.04880 osd.15 up 1.00000 1.00000 18 hdd 0.04880 osd.18 up 1.00000 1.00000 ... Disable the OSDs on the relevant Ceph node: [root@dcn2-computehci2-1 ~]# podman exec ceph-mon-dcn2-computehci2-1 ceph osd out 1 7 11 15 18 -c /etc/ceph/dcn2.conf marked out osd.1. marked out osd.7. marked out osd.11. marked out osd.15. marked out osd.18. Wait for Ceph OSD rebalancing to finish. Monitor progress with the following command: [root@dcn2-computehci2-1 ~]# podman exec ceph-mon-dcn2-computehci2-1 ceph -w -c /etc/ceph/dcn2.conf ... mon.dcn2-computehci2-2 has auth_allow_insecure_global_id_reclaim set to true The rebalancing is complete when you see that auth_allow_insecure_global_id_reclaim is set to true . Stop and disable the OSDs: [root@dcn2-computehci2-1 ~]# systemctl stop ceph-osd@1 [root@dcn2-computehci2-1 ~]# systemctl stop ceph-osd@7 [root@dcn2-computehci2-1 ~]# systemctl stop ceph-osd@11 [root@dcn2-computehci2-1 ~]# systemctl stop ceph-osd@15 [root@dcn2-computehci2-1 ~]# systemctl stop ceph-osd@18 [root@dcn2-computehci2-1 ~]# systemctl disable ceph-osd@1 Removed /etc/systemd/system/multi-user.target.wants/[email protected]. [root@dcn2-computehci2-1 ~]# systemctl disable ceph-osd@7 Removed /etc/systemd/system/multi-user.target.wants/[email protected]. [root@dcn2-computehci2-1 ~]# systemctl disable ceph-osd@11 Removed /etc/systemd/system/multi-user.target.wants/[email protected]. [root@dcn2-computehci2-1 ~]# systemctl disable ceph-osd@15 Removed /etc/systemd/system/multi-user.target.wants/[email protected]. [root@dcn2-computehci2-1 ~]# systemctl disable ceph-osd@18 Removed /etc/systemd/system/multi-user.target.wants/[email protected].
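When a node carries many OSDs, the per-OSD commands above can be driven by a small loop instead of being typed one by one. This is an illustrative sketch only: the OSD IDs, monitor container name, and configuration file are the ones from the example output above and must match your own node. The same pattern also works for the removal commands in the next steps:
# OSD IDs owned by the node that is being removed (from the 'ceph osd tree' output).
OSDS="1 7 11 15 18"
# Mark the OSDs out of the cluster.
for id in $OSDS; do
    podman exec ceph-mon-dcn2-computehci2-1 ceph osd out "$id" -c /etc/ceph/dcn2.conf
done
# Stop and disable the corresponding systemd units.
for id in $OSDS; do
    systemctl stop "ceph-osd@$id"
    systemctl disable "ceph-osd@$id"
done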
Remove the OSDs from the CRUSH map: [root@dcn2-computehci2-1 ~]# podman exec ceph-mon-dcn2-computehci2-1 ceph osd crush remove osd.1 -c /etc/ceph/dcn2.conf removed item id 1 name 'osd.1' from crush map [root@dcn2-computehci2-1 ~]# podman exec ceph-mon-dcn2-computehci2-1 ceph osd crush remove osd.7 -c /etc/ceph/dcn2.conf removed item id 7 name 'osd.7' from crush map [root@dcn2-computehci2-1 ~]# podman exec ceph-mon-dcn2-computehci2-1 ceph osd crush remove osd.11 -c /etc/ceph/dcn2.conf removed item id 11 name 'osd.11' from crush map [root@dcn2-computehci2-1 ~]# podman exec ceph-mon-dcn2-computehci2-1 ceph osd crush remove osd.15 -c /etc/ceph/dcn2.conf removed item id 15 name 'osd.15' from crush map [root@dcn2-computehci2-1 ~]# podman exec ceph-mon-dcn2-computehci2-1 ceph osd crush remove osd.18 -c /etc/ceph/dcn2.conf removed item id 18 name 'osd.18' from crush map Remove the OSD auth keys: [root@dcn2-computehci2-1 ~]# podman exec ceph-mon-dcn2-computehci2-1 ceph auth del osd.1 -c /etc/ceph/dcn2.conf updated [root@dcn2-computehci2-1 ~]# podman exec ceph-mon-dcn2-computehci2-1 ceph auth del osd.7 -c /etc/ceph/dcn2.conf updated [root@dcn2-computehci2-1 ~]# podman exec ceph-mon-dcn2-computehci2-1 ceph auth del osd.11 -c /etc/ceph/dcn2.conf updated [root@dcn2-computehci2-1 ~]# podman exec ceph-mon-dcn2-computehci2-1 ceph auth del osd.15 -c /etc/ceph/dcn2.conf updated [root@dcn2-computehci2-1 ~]# podman exec ceph-mon-dcn2-computehci2-1 ceph auth del osd.18 -c /etc/ceph/dcn2.conf updated Remove the OSDs from the cluster: [root@dcn2-computehci2-1 ~]# podman exec ceph-mon-dcn2-computehci2-1 ceph osd rm 1 -c /etc/ceph/dcn2.conf removed osd.1 [root@dcn2-computehci2-1 ~]# podman exec ceph-mon-dcn2-computehci2-1 ceph osd rm 7 -c /etc/ceph/dcn2.conf removed osd.7 [root@dcn2-computehci2-1 ~]# podman exec ceph-mon-dcn2-computehci2-1 ceph osd rm 11 -c /etc/ceph/dcn2.conf removed osd.11 [root@dcn2-computehci2-1 ~]# podman exec ceph-mon-dcn2-computehci2-1 ceph osd rm 15 -c /etc/ceph/dcn2.conf removed osd.15 [root@dcn2-computehci2-1 ~]# podman exec ceph-mon-dcn2-computehci2-1 ceph osd rm 18 -c /etc/ceph/dcn2.conf removed osd.18 Remove the DistributedComputeHCI node from the CRUSH map: [root@dcn2-computehci2-1 ~]# podman exec ceph-mon-dcn2-computehci2-1 ceph osd crush rm dcn2-computehci2-1 -c /etc/ceph/dcn2.conf removed item id -3 name 'dcn2-computehci2-1' from crush map Stop and disable the mon service: [root@dcn2-computehci2-1 ~]# systemctl --type=service | grep ceph [email protected] loaded active running Ceph crash dump collector [email protected] loaded active running Ceph Manager [email protected] loaded active running Ceph Monitor [root@dcn2-computehci2-1 ~]# systemctl stop ceph-mon@dcn2-computehci2-1 [root@dcn2-computehci2-1 ~]# systemctl disable ceph-mon@dcn2-computehci2-1 Removed /etc/systemd/system/multi-user.target.wants/[email protected]. Use SSH to connect to another node in the same cluster and remove the monitor from the cluster. 
Note the v1 and v2 entries in the output: [root@dcn2-computehci2-0 ~]# podman exec ceph-mon-dcn2-computehci2-0 ceph mon remove dcn2-computehci2-1 -c /etc/ceph/dcn2.conf removing mon.dcn2-computehci2-1 at [v2:172.23.3.153:3300/0,v1:172.23.3.153:6789/0], there will be 2 monitors On all dcn2 nodes, remove the v1 and v2 monitor entries in /etc/ceph/dcn2.conf that were output in the previous step, and remove the node name from the 'mon initial members': Before: mon host = [v2:172.23.3.150:3300,v1:172.23.3.150:6789],[v2:172.23.3.153:3300,v1:172.23.3.153:6789],[v2:172.23.3.124:3300,v1:172.23.3.124:6789] mon initial members = dcn2-computehci2-0,dcn2-computehci2-1,dcn2-computehci2-2 After: mon host = [v2:172.23.3.150:3300,v1:172.23.3.150:6789],[v2:172.23.3.124:3300,v1:172.23.3.124:6789] mon initial members = dcn2-computehci2-0,dcn2-computehci2-2 Stop and disable the mgr service: [root@dcn2-computehci2-1 ~]# systemctl --type=service | grep ceph [email protected] loaded active running Ceph crash dump collector [email protected] loaded active running Ceph Manager [root@dcn2-computehci2-1 ~]# systemctl stop ceph-mgr@dcn2-computehci2-1 [root@dcn2-computehci2-1 ~]# systemctl --type=service | grep ceph [email protected] loaded active running Ceph crash dump collector [root@dcn2-computehci2-1 ~]# systemctl disable ceph-mgr@dcn2-computehci2-1 Removed /etc/systemd/system/multi-user.target.wants/[email protected]. Verify that the mgr service for the node is removed from the cluster: [root@dcn2-computehci2-0 ~]# podman exec ceph-mon-dcn2-computehci2-0 ceph -s -c /etc/ceph/dcn2.conf cluster: id: b9b53581-d590-41ac-8463-2f50aa985001 health: HEALTH_WARN 3 pools have too many placement groups mons are allowing insecure global_id reclaim services: mon: 2 daemons, quorum dcn2-computehci2-2,dcn2-computehci2-0 (age 2h) mgr: dcn2-computehci2-2(active, since 20h), standbys: dcn2-computehci2-0 osd: 15 osds: 15 up (since 3h), 15 in (since 3h) data: pools: 3 pools, 384 pgs objects: 32 objects, 88 MiB usage: 16 GiB used, 734 GiB / 750 GiB avail pgs: 384 active+clean The node that the mgr service is removed from is no longer listed when the mgr service is successfully removed. 8.3. Removing the Image service (glance) services Remove image services from a node when you remove it from service. Procedure To disable the Image service services, disable them by using systemctl on the node you are removing: [root@dcn2-computehci2-1 ~]# systemctl stop tripleo_glance_api.service [root@dcn2-computehci2-1 ~]# systemctl stop tripleo_glance_api_tls_proxy.service [root@dcn2-computehci2-1 ~]# systemctl disable tripleo_glance_api.service Removed /etc/systemd/system/multi-user.target.wants/tripleo_glance_api.service. [root@dcn2-computehci2-1 ~]# systemctl disable tripleo_glance_api_tls_proxy.service Removed /etc/systemd/system/multi-user.target.wants/tripleo_glance_api_tls_proxy.service. 8.4. Removing the Block Storage (cinder) services You must remove the cinder-volume and etcd services from the DistributedComputeHCI node when you remove it from service.
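Before you start, it can help to confirm which of these containers are actually running on the node. The container names below are the usual TripleO defaults and might differ in your deployment:
# On the DistributedComputeHCI node that you are removing: list cinder and etcd containers.
podman ps --format '{{.Names}} {{.Status}}' | grep -E 'cinder|etcd'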
Procedure Identify and disable the cinder-volume service on the node you are removing: (central) [stack@site-undercloud-0 ~]$ openstack volume service list --service cinder-volume | cinder-volume | dcn2-computehci2-1@tripleo_ceph | az-dcn2 | enabled | up | 2022-03-23T17:41:43.000000 | (central) [stack@site-undercloud-0 ~]$ openstack volume service set --disable dcn2-computehci2-1@tripleo_ceph cinder-volume Log on to a different DistributedComputeHCI node in the stack: $ ssh heat-admin@dcn2-computehci2-0 Remove the cinder-volume service associated with the node that you are removing: [root@dcn2-computehci2-0 ~]# podman exec -it cinder_volume cinder-manage service remove cinder-volume dcn2-computehci2-1@tripleo_ceph Service cinder-volume on host dcn2-computehci2-1@tripleo_ceph removed. Stop and disable the tripleo_cinder_volume service on the node that you are removing: 8.5. Delete the DistributedComputeHCI node To preserve the environment, the openstack overcloud node delete command must include all relevant templates and environment files: Procedure Delete the DistributedComputeHCI node: $ openstack overcloud node delete --stack <stack-name> <node_UUID> If you are going to reuse the node, use ironic to clean the disk. This is required if the node will host Ceph OSDs: openstack baremetal node manage $UUID openstack baremetal node clean $UUID --clean-steps '[{"interface":"deploy", "step": "erase_devices_metadata"}]' openstack baremetal provide $UUID 8.6. Replacing a removed DistributedComputeHCI node 8.6.1. Replacing a removed DistributedComputeHCI node To add new HCI nodes to your DCN deployment, you must redeploy the edge stack with the additional node, perform a ceph export of that stack, and then perform a stack update for the central location. A stack update of the central location adds configurations specific to edge sites. Prerequisites The node counts are correct in the nodes_data.yaml file of the stack that you want to replace the node in or add a new node to. Procedure You must set the EtcdInitialClusterState parameter to existing in one of the templates called by your deploy script: Redeploy using the deployment script specific to the stack: Export the Red Hat Ceph Storage data from the stack: Replace dcn_ceph_external.yaml with the newly generated dcn2_scale_up_ceph_external.yaml in the deploy script for the central location. Perform a stack update at central: 8.7. Verify the functionality of a replaced DistributedComputeHCI node Ensure that the value of the Status field is enabled and that the value of the State field is up : (central) [stack@site-undercloud-0 ~]$ openstack compute service list -c Binary -c Host -c Zone -c Status -c State +----------------+-----------------------------------------+------------+---------+-------+ | Binary | Host | Zone | Status | State | +----------------+-----------------------------------------+------------+---------+-------+ ... | nova-compute | dcn1-compute1-0.redhat.local | az-dcn1 | enabled | up | | nova-compute | dcn1-compute1-1.redhat.local | az-dcn1 | enabled | up | | nova-compute | dcn2-computehciscaleout2-0.redhat.local | az-dcn2 | enabled | up | | nova-compute | dcn2-computehci2-0.redhat.local | az-dcn2 | enabled | up | | nova-compute | dcn2-computescaleout2-0.redhat.local | az-dcn2 | enabled | up | | nova-compute | dcn2-computehci2-2.redhat.local | az-dcn2 | enabled | up | ...
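If the listing is long, a quick filter makes any compute service that is not up stand out. This uses standard openstack client output options and is only a convenience on top of the check above:
# Show only compute services that are not reporting 'up'.
openstack compute service list -f value -c Binary -c Host -c State | grep -v ' up$'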
Ensure that all network agents are in the up state: (central) [stack@site-undercloud-0 ~]USD openstack network agent list -c "Agent Type" -c Host -c Alive -c State +--------------------+-----------------------------------------+-------+-------+ | Agent Type | Host | Alive | State | +--------------------+-----------------------------------------+-------+-------+ | DHCP agent | dcn3-compute3-1.redhat.local | :-) | UP | | Open vSwitch agent | central-computehci0-1.redhat.local | :-) | UP | | DHCP agent | dcn3-compute3-0.redhat.local | :-) | UP | | DHCP agent | central-controller0-2.redhat.local | :-) | UP | | Open vSwitch agent | dcn3-compute3-1.redhat.local | :-) | UP | | Open vSwitch agent | dcn1-compute1-1.redhat.local | :-) | UP | | Open vSwitch agent | central-computehci0-0.redhat.local | :-) | UP | | DHCP agent | central-controller0-1.redhat.local | :-) | UP | | L3 agent | central-controller0-2.redhat.local | :-) | UP | | Metadata agent | central-controller0-1.redhat.local | :-) | UP | | Open vSwitch agent | dcn2-computescaleout2-0.redhat.local | :-) | UP | | Open vSwitch agent | dcn2-computehci2-5.redhat.local | :-) | UP | | Open vSwitch agent | central-computehci0-2.redhat.local | :-) | UP | | DHCP agent | central-controller0-0.redhat.local | :-) | UP | | Open vSwitch agent | central-controller0-1.redhat.local | :-) | UP | | Open vSwitch agent | dcn2-computehci2-0.redhat.local | :-) | UP | | Open vSwitch agent | dcn1-compute1-0.redhat.local | :-) | UP | ... Verify the status of the Ceph Cluster: Use SSH to connect to the new DistributedComputeHCI node and check the status of the Ceph cluster: [root@dcn2-computehci2-5 ~]# podman exec -it ceph-mon-dcn2-computehci2-5 \ ceph -s -c /etc/ceph/dcn2.conf Verify that both the ceph mon and ceph mgr services exist for the new node: services: mon: 3 daemons, quorum dcn2-computehci2-2,dcn2-computehci2-0,dcn2-computehci2-5 (age 3d) mgr: dcn2-computehci2-2(active, since 3d), standbys: dcn2-computehci2-0, dcn2-computehci2-5 osd: 20 osds: 20 up (since 3d), 20 in (since 3d) Verify the status of the ceph osds with 'ceph osd tree'. 
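Before drilling into individual OSDs, you can wait for the cluster as a whole to settle. A small polling loop such as the following, using the monitor container and configuration path from the examples above, is one way to do that; the per-OSD check with ceph osd tree follows below:
# Poll every 30 seconds until the cluster reports HEALTH_OK.
until podman exec ceph-mon-dcn2-computehci2-5 ceph health -c /etc/ceph/dcn2.conf | grep -q HEALTH_OK; do
    sleep 30
done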
Ensure all osds for our new node are in STATUS up: [root@dcn2-computehci2-5 ~]# podman exec -it ceph-mon-dcn2-computehci2-5 ceph osd tree -c /etc/ceph/dcn2.conf ID CLASS WEIGHT TYPE NAME STATUS REWEIGHT PRI-AFF -1 0.97595 root default -5 0.24399 host dcn2-computehci2-0 0 hdd 0.04880 osd.0 up 1.00000 1.00000 4 hdd 0.04880 osd.4 up 1.00000 1.00000 8 hdd 0.04880 osd.8 up 1.00000 1.00000 13 hdd 0.04880 osd.13 up 1.00000 1.00000 17 hdd 0.04880 osd.17 up 1.00000 1.00000 -9 0.24399 host dcn2-computehci2-2 3 hdd 0.04880 osd.3 up 1.00000 1.00000 5 hdd 0.04880 osd.5 up 1.00000 1.00000 10 hdd 0.04880 osd.10 up 1.00000 1.00000 14 hdd 0.04880 osd.14 up 1.00000 1.00000 19 hdd 0.04880 osd.19 up 1.00000 1.00000 -3 0.24399 host dcn2-computehci2-5 1 hdd 0.04880 osd.1 up 1.00000 1.00000 7 hdd 0.04880 osd.7 up 1.00000 1.00000 11 hdd 0.04880 osd.11 up 1.00000 1.00000 15 hdd 0.04880 osd.15 up 1.00000 1.00000 18 hdd 0.04880 osd.18 up 1.00000 1.00000 -7 0.24399 host dcn2-computehciscaleout2-0 2 hdd 0.04880 osd.2 up 1.00000 1.00000 6 hdd 0.04880 osd.6 up 1.00000 1.00000 9 hdd 0.04880 osd.9 up 1.00000 1.00000 12 hdd 0.04880 osd.12 up 1.00000 1.00000 16 hdd 0.04880 osd.16 up 1.00000 1.00000 Verify the cinder-volume service for the new DistributedComputeHCI node is in Status 'enabled' and in State 'up': (central) [stack@site-undercloud-0 ~]USD openstack volume service list --service cinder-volume -c Binary -c Host -c Zone -c Status -c State +---------------+---------------------------------+------------+---------+-------+ | Binary | Host | Zone | Status | State | +---------------+---------------------------------+------------+---------+-------+ | cinder-volume | hostgroup@tripleo_ceph | az-central | enabled | up | | cinder-volume | dcn1-compute1-1@tripleo_ceph | az-dcn1 | enabled | up | | cinder-volume | dcn1-compute1-0@tripleo_ceph | az-dcn1 | enabled | up | | cinder-volume | dcn2-computehci2-0@tripleo_ceph | az-dcn2 | enabled | up | | cinder-volume | dcn2-computehci2-2@tripleo_ceph | az-dcn2 | enabled | up | | cinder-volume | dcn2-computehci2-5@tripleo_ceph | az-dcn2 | enabled | up | +---------------+---------------------------------+------------+---------+-------+ Note If the State of the cinder-volume service is down , then the service has not been started on the node. Use ssh to connect to the new DistributedComputeHCI node and check the status of the Glance services with 'systemctl': [root@dcn2-computehci2-5 ~]# systemctl --type service | grep glance tripleo_glance_api.service loaded active running glance_api container tripleo_glance_api_healthcheck.service loaded activating start start glance_api healthcheck tripleo_glance_api_tls_proxy.service loaded active running glance_api_tls_proxy container 8.8. Troubleshooting DistributedComputeHCI state down If the replacement node was deployed without the EtcdInitialClusterState parameter value set to existing , then the cinder-volume service of the replaced node shows down when you run openstack volume service list . Procedure Log onto the replacement node and check logs for the etcd service. Check that the logs show the etcd service is reporting a cluster ID mismatch in the /var/log/containers/stdouts/etcd.log log file: Set the EtcdInitialClusterState parameter to the value of existing in your deployment templates and rerun the deployment script. 
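To confirm the mismatch quickly before redeploying, you can search the log file mentioned above directly; this is only a convenience on top of the log check in the previous steps:
# On the replacement node: look for the cluster ID mismatch reported by etcd.
grep -i 'cluster id mismatch' /var/log/containers/stdouts/etcd.log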
Use SSH to connect to the replacement node and run the following commands as root: Recheck the /var/log/containers/stdouts/etcd.log log file to verify that the node successfully joined the cluster: Check the state of the cinder-volume service, and confirm it reads up on the replacement node when you run openstack volume service list . | [
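If the service takes a few minutes to report up after the etcd reset, a short polling loop avoids re-running the listing by hand. Replace the placeholder with the host shown in the listing, for example dcn2-computehci2-5; the loop itself is only a sketch:
# Poll every 30 seconds until the replacement node's cinder-volume backend reports 'up'.
until openstack volume service list --service cinder-volume -f value -c Host -c State | grep '<replacement_node>' | grep -q ' up$'; do
    sleep 30
done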
"`openstack overcloud node delete --stack <dcn2> <computehci2-1>",
"(central) [stack@site-undercloud-0 ~]USD openstack network agent list | grep dcn2 ... | 17726d1a-e9d1-4e57-b40d-e742be5d073c | Open vSwitch agent | dcn2-computehci2-1.redhat.local | None | XXX | UP | neutron-openvswitch-agent | ... (central) [stack@site-undercloud-0 ~]USD openstack network agent delete 17726d1a-e9d1-4e57-b40d-e742be5d073c",
"ssh heat-admin@<dcn-computehci-node> sudo su - #",
"podman exec ceph-mon-dnc2-computehci2-1 ceph osd tree -c /etc/ceph/dcn2.conf ... -3 0.24399 host dcn2-computehci2-1 1 hdd 0.04880 osd.1 up 1.00000 1.00000 7 hdd 0.04880 osd.7 up 1.00000 1.00000 11 hdd 0.04880 osd.11 up 1.00000 1.00000 15 hdd 0.04880 osd.15 up 1.00000 1.00000 18 hdd 0.04880 osd.18 up 1.00000 1.00000 ...",
"podman exec ceph-mon-dcn2-computehci2-1 ceph osd out 1 7 11 15 18 -c /etc/ceph/dcn2.conf marked out osd.1. marked out osd.7. marked out osd.11. marked out osd.15. marked out osd.18.",
"podman exec ceph-mon-dcn2-computehci2-1 ceph -w -c /etc/ceph/dcn2.conf ... mon.dcn2-computehci2-2 has auth_allow_insecure_global_id_reclaim set to true",
"systemctl stop ceph-osd@1 systemctl stop ceph-osd@7 systemctl stop ceph-osd@11 systemctl stop ceph-osd@15 systemctl stop ceph-osd@18 systemctl disable ceph-osd@1 Removed /etc/systemd/system/multi-user.target.wants/[email protected]. systemctl disable ceph-osd@7 Removed /etc/systemd/system/multi-user.target.wants/[email protected]. systemctl disable ceph-osd@11 Removed /etc/systemd/system/multi-user.target.wants/[email protected]. systemctl disable ceph-osd@15 Removed /etc/systemd/system/multi-user.target.wants/[email protected]. systemctl disable ceph-osd@18 Removed /etc/systemd/system/multi-user.target.wants/[email protected].",
"podman exec ceph-mon-dcn2-computehci2-1 ceph osd crush remove osd.1 -c /etc/ceph/dcn2.conf removed item id 1 name 'osd.1' from crush map podman exec ceph-mon-dcn2-computehci2-1 ceph osd crush remove osd.7 -c /etc/ceph/dcn2.conf removed item id 7 name 'osd.7' from crush map podman exec ceph-mon-dcn2-computehci2-1 ceph osd crush remove osd.11 -c /etc/ceph/dcn2.conf removed item id 11 name 'osd.11' from crush map podman exec ceph-mon-dcn2-computehci2-1 ceph osd crush remove osd.15 -c /etc/ceph/dcn2.conf removed item id 15 name 'osd.15' from crush map podman exec ceph-mon-dcn2-computehci2-1 ceph osd crush remove osd.18 -c /etc/ceph/dcn2.conf removed item id 18 name 'osd.18' from crush map",
"podman exec ceph-mon-dcn2-computehci2-1 ceph auth del osd.1 -c /etc/ceph/dcn2.conf updated podman exec ceph-mon-dcn2-computehci2-1 ceph auth del osd.7 -c /etc/ceph/dcn2.conf updated podman exec ceph-mon-dcn2-computehci2-1 ceph auth del osd.11 -c /etc/ceph/dcn2.conf updated podman exec ceph-mon-dcn2-computehci2-1 ceph auth del osd.15 -c /etc/ceph/dcn2.conf updated podman exec ceph-mon-dcn2-computehci2-1 ceph auth del osd.18 -c /etc/ceph/dcn2.conf updated",
"podman exec ceph-mon-dcn2-computehci2-1 ceph osd rm 1 -c /etc/ceph/dcn2.conf removed osd.1 podman exec ceph-mon-dcn2-computehci2-1 ceph osd rm 7 -c /etc/ceph/dcn2.conf removed osd.7 podman exec ceph-mon-dcn2-computehci2-1 ceph osd rm 11 -c /etc/ceph/dcn2.conf removed osd.11 podman exec ceph-mon-dcn2-computehci2-1 ceph osd rm 15 -c /etc/ceph/dcn2.conf removed osd.15 podman exec ceph-mon-dcn2-computehci2-1 ceph osd rm 18 -c /etc/ceph/dcn2.conf removed osd.18",
"podman exec ceph-mon-dcn2-computehci2-1 ceph osd crush rm dcn2-computehci2-1 -c /etc/ceph/dcn2.conf removed item id -3 name 'dcn2-computehci2-1' from crush map",
"systemctl --type=service | grep ceph [email protected] loaded active running Ceph crash dump collector [email protected] loaded active running Ceph Manager [email protected] loaded active running Ceph Monitor systemctl stop ceph-mon@dcn2-computehci2-1 systemctl disable ceph-mon@dcn2-computehci2-1 Removed /etc/systemd/system/multi-user.target.wants/[email protected].",
"podman exec ceph-mon-dcn2-computehci2-0 ceph mon remove dcn2-computehci2-1 -c /etc/ceph/dcn2.conf removing mon.dcn2-computehci2-1 at [v2:172.23.3.153:3300/0,v1:172.23.3.153:6789/0], there will be 2 monitors",
"systemctl --type=service | grep ceph [email protected] loaded active running Ceph crash dump collector [email protected] loaded active running Ceph Manager systemctl stop ceph-mgr@dcn2-computehci2-1 systemctl --type=service | grep ceph [email protected] loaded active running Ceph crash dump collector systemctl disable ceph-mgr@dcn2-computehci2-1 Removed /etc/systemd/system/multi-user.target.wants/[email protected].",
"podman exec ceph-mon-dcn2-computehci2-0 ceph -s -c /etc/ceph/dcn2.conf cluster: id: b9b53581-d590-41ac-8463-2f50aa985001 health: HEALTH_WARN 3 pools have too many placement groups mons are allowing insecure global_id reclaim services: mon: 2 daemons, quorum dcn2-computehci2-2,dcn2-computehci2-0 (age 2h) mgr: dcn2-computehci2-2(active, since 20h), standbys: dcn2-computehci2-0 1 osd: 15 osds: 15 up (since 3h), 15 in (since 3h) data: pools: 3 pools, 384 pgs objects: 32 objects, 88 MiB usage: 16 GiB used, 734 GiB / 750 GiB avail pgs: 384 active+clean",
"systemctl stop tripleo_glance_api.service systemctl stop tripleo_glance_api_tls_proxy.service systemctl disable tripleo_glance_api.service Removed /etc/systemd/system/multi-user.target.wants/tripleo_glance_api.service. systemctl disable tripleo_glance_api_tls_proxy.service Removed /etc/systemd/system/multi-user.target.wants/tripleo_glance_api_tls_proxy.service.",
"(central) [stack@site-undercloud-0 ~]USD openstack volume service list --service cinder-volume | cinder-volume | dcn2-computehci2-1@tripleo_ceph | az-dcn2 | enabled | up | 2022-03-23T17:41:43.000000 | (central) [stack@site-undercloud-0 ~]USD openstack volume service set --disable dcn2-computehci2-1@tripleo_ceph cinder-volume",
"ssh heat-admin@dcn2-computehci2-0",
"podman exec -it cinder_volume cinder-manage service remove cinder-volume dcn2-computehci2-1@tripleo_ceph Service cinder-volume on host dcn2-computehci2-1@tripleo_ceph removed.",
"systemctl stop tripleo_cinder_volume.service systemctl disable tripleo_cinder_volume.service Removed /etc/systemd/system/multi-user.target.wants/tripleo_cinder_volume.service",
"openstack overcloud node delete / --stack <stack-name> / <node_UUID>",
"openstack baremetal node manage USDUUID openstack baremetal node clean USDUUID --clean-steps '[{\"interface\":\"deploy\", \"step\": \"erase_devices_metadata\"}]' openstack baremetal provide USDUUID",
"parameter_defaults: EtcdInitialClusterState: existing",
"(undercloud) [stack@site-undercloud-0 ~]USD ./overcloud_deploy_dcn2.sh ... Overcloud Deployed without error",
"(undercloud) [stack@site-undercloud-0 ~]USD sudo -E openstack overcloud export ceph --stack dcn1,dcn2 --config-download-dir /var/lib/mistral --output-file ~/central/dcn2_scale_up_ceph_external.yaml",
"(undercloud) [stack@site-undercloud-0 ~]USD ./overcloud_deploy.sh Overcloud Deployed without error",
"(central) [stack@site-undercloud-0 ~]USD openstack compute service list -c Binary -c Host -c Zone -c Status -c State +----------------+-----------------------------------------+------------+---------+-------+ | Binary | Host | Zone | Status | State | +----------------+-----------------------------------------+------------+---------+-------+ | nova-compute | dcn1-compute1-0.redhat.local | az-dcn1 | enabled | up | | nova-compute | dcn1-compute1-1.redhat.local | az-dcn1 | enabled | up | | nova-compute | dcn2-computehciscaleout2-0.redhat.local | az-dcn2 | enabled | up | | nova-compute | dcn2-computehci2-0.redhat.local | az-dcn2 | enabled | up | | nova-compute | dcn2-computescaleout2-0.redhat.local | az-dcn2 | enabled | up | | nova-compute | dcn2-computehci2-2.redhat.local | az-dcn2 | enabled | up |",
"(central) [stack@site-undercloud-0 ~]USD openstack network agent list -c \"Agent Type\" -c Host -c Alive -c State +--------------------+-----------------------------------------+-------+-------+ | Agent Type | Host | Alive | State | +--------------------+-----------------------------------------+-------+-------+ | DHCP agent | dcn3-compute3-1.redhat.local | :-) | UP | | Open vSwitch agent | central-computehci0-1.redhat.local | :-) | UP | | DHCP agent | dcn3-compute3-0.redhat.local | :-) | UP | | DHCP agent | central-controller0-2.redhat.local | :-) | UP | | Open vSwitch agent | dcn3-compute3-1.redhat.local | :-) | UP | | Open vSwitch agent | dcn1-compute1-1.redhat.local | :-) | UP | | Open vSwitch agent | central-computehci0-0.redhat.local | :-) | UP | | DHCP agent | central-controller0-1.redhat.local | :-) | UP | | L3 agent | central-controller0-2.redhat.local | :-) | UP | | Metadata agent | central-controller0-1.redhat.local | :-) | UP | | Open vSwitch agent | dcn2-computescaleout2-0.redhat.local | :-) | UP | | Open vSwitch agent | dcn2-computehci2-5.redhat.local | :-) | UP | | Open vSwitch agent | central-computehci0-2.redhat.local | :-) | UP | | DHCP agent | central-controller0-0.redhat.local | :-) | UP | | Open vSwitch agent | central-controller0-1.redhat.local | :-) | UP | | Open vSwitch agent | dcn2-computehci2-0.redhat.local | :-) | UP | | Open vSwitch agent | dcn1-compute1-0.redhat.local | :-) | UP |",
"podman exec -it ceph-mon-dcn2-computehci2-5 ceph -s -c /etc/ceph/dcn2.conf",
"services: mon: 3 daemons, quorum dcn2-computehci2-2,dcn2-computehci2-0,dcn2-computehci2-5 (age 3d) mgr: dcn2-computehci2-2(active, since 3d), standbys: dcn2-computehci2-0, dcn2-computehci2-5 osd: 20 osds: 20 up (since 3d), 20 in (since 3d)",
"podman exec -it ceph-mon-dcn2-computehci2-5 ceph osd tree -c /etc/ceph/dcn2.conf ID CLASS WEIGHT TYPE NAME STATUS REWEIGHT PRI-AFF -1 0.97595 root default -5 0.24399 host dcn2-computehci2-0 0 hdd 0.04880 osd.0 up 1.00000 1.00000 4 hdd 0.04880 osd.4 up 1.00000 1.00000 8 hdd 0.04880 osd.8 up 1.00000 1.00000 13 hdd 0.04880 osd.13 up 1.00000 1.00000 17 hdd 0.04880 osd.17 up 1.00000 1.00000 -9 0.24399 host dcn2-computehci2-2 3 hdd 0.04880 osd.3 up 1.00000 1.00000 5 hdd 0.04880 osd.5 up 1.00000 1.00000 10 hdd 0.04880 osd.10 up 1.00000 1.00000 14 hdd 0.04880 osd.14 up 1.00000 1.00000 19 hdd 0.04880 osd.19 up 1.00000 1.00000 -3 0.24399 host dcn2-computehci2-5 1 hdd 0.04880 osd.1 up 1.00000 1.00000 7 hdd 0.04880 osd.7 up 1.00000 1.00000 11 hdd 0.04880 osd.11 up 1.00000 1.00000 15 hdd 0.04880 osd.15 up 1.00000 1.00000 18 hdd 0.04880 osd.18 up 1.00000 1.00000 -7 0.24399 host dcn2-computehciscaleout2-0 2 hdd 0.04880 osd.2 up 1.00000 1.00000 6 hdd 0.04880 osd.6 up 1.00000 1.00000 9 hdd 0.04880 osd.9 up 1.00000 1.00000 12 hdd 0.04880 osd.12 up 1.00000 1.00000 16 hdd 0.04880 osd.16 up 1.00000 1.00000",
"(central) [stack@site-undercloud-0 ~]USD openstack volume service list --service cinder-volume -c Binary -c Host -c Zone -c Status -c State +---------------+---------------------------------+------------+---------+-------+ | Binary | Host | Zone | Status | State | +---------------+---------------------------------+------------+---------+-------+ | cinder-volume | hostgroup@tripleo_ceph | az-central | enabled | up | | cinder-volume | dcn1-compute1-1@tripleo_ceph | az-dcn1 | enabled | up | | cinder-volume | dcn1-compute1-0@tripleo_ceph | az-dcn1 | enabled | up | | cinder-volume | dcn2-computehci2-0@tripleo_ceph | az-dcn2 | enabled | up | | cinder-volume | dcn2-computehci2-2@tripleo_ceph | az-dcn2 | enabled | up | | cinder-volume | dcn2-computehci2-5@tripleo_ceph | az-dcn2 | enabled | up | +---------------+---------------------------------+------------+---------+-------+",
"systemctl --type service | grep glance tripleo_glance_api.service loaded active running glance_api container tripleo_glance_api_healthcheck.service loaded activating start start glance_api healthcheck tripleo_glance_api_tls_proxy.service loaded active running glance_api_tls_proxy container",
"2022-04-06T18:00:11.834104130+00:00 stderr F 2022-04-06 18:00:11.834045 E | rafthttp: request cluster ID mismatch (got 654f4cf0e2cfb9fd want 918b459b36fe2c0c)",
"systemctl stop tripleo_etcd rm -rf /var/lib/etcd/* systemctl start tripleo_etcd",
"2022-04-06T18:24:22.130059875+00:00 stderr F 2022-04-06 18:24:22.129395 I | etcdserver/membership: added member 96f61470cd1839e5 [https://dcn2-computehci2-4.internalapi.redhat.local:2380] to cluster 654f4cf0e2cfb9fd"
]
| https://docs.redhat.com/en/documentation/red_hat_openstack_platform/16.2/html/distributed_compute_node_and_storage_deployment/assembly_replacing-dcnhci-nodes |
Chapter 5. Technology preview | Chapter 5. Technology preview This section lists features that are in Technology Preview in Red Hat Decision Manager 7.13. Business Central includes an experimental features administration page that is disabled by default. To enable this page, set the value of the appformer.experimental.features property to true . Important These features are for Technology Preview only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs), might not be functionally complete, and Red Hat does not recommend using them for production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. For more information on Red Hat Technology Preview features, see Technology Preview Features Scope . 5.1. Deploying a high-availability authoring environment on Red Hat OpenShift Container Platform 4.x You can deploy a high-availability Red Hat Decision Manager authoring environment on Red Hat OpenShift Container Platform 4.x using the operator. 5.2. OpenShift operator installer wizard An installer wizard is provided in the Red Hat OpenShift Container Platform operator for Red Hat Decision Manager. You can use the wizard to deploy a Red Hat Decision Manager environment on Red Hat OpenShift Container Platform with the operator. 5.3. Authoring perspective customization You can perform the following tasks to customize the Business Central authoring perspective: Open a Business Central project directly using an URL path parameter, without going through a list of spaces and projects. Hide or show the project toolbar, Metrics tab, and Change Request tab according to your requirements. Enhance the pagination. Customize the number of assets present on the project screen. 5.4. Red Hat build of OptaPlanner new constraint collectors In order to provide a full implementation of some pre-existing OptaPlanner examples using the Constraint Streams API, the standard library of constraint collectors has been extended to include the following constraint collectors: One constraint collector takes point values such as dates, orders them on a number line, and makes groups of consecutive values with breaks between the groups available downstream. Another constraint collector takes interval values such as shifts, creates clusters of consecutive and possibly overlapping values with breaks between clusters, and makes the clusters available downstream. These new collectors are in technology preview. Their interfaces, names, and functionality are subject to change. They have been placed in an experimental package outside of the public API to encourage public feedback before they become an officially supported part of the OptaPlanner public API. 5.5. Red Hat build of Kogito and Kafka integration Red Hat build of Kogito decision microservices integration with managed Kafka by using the org.kie.kogito:kogito-addons-{quarkus|springboot}-events-decisions event-driven add-on is now available as technology preview. On Red Hat build of Quarkus, you can add the io.quarkus:quarkus-kubernetes-service-binding dependency to the application to handle the service binding created by the managed Kafka. On Spring boot, you must add the mappings field to the created service binding which must contain the required environment variables needed by the application. Another solution is to use the custom configuration maps available in the Red Hat build of Kogito operator. 
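For reference, the experimental features page mentioned above is toggled through a system property. One common way to pass it when Business Central runs on Red Hat JBoss EAP is shown below; the script path and server profile are assumptions about a typical setup rather than values taken from this document:
# Start Business Central with the experimental features administration page enabled.
EAP_HOME/bin/standalone.sh -c standalone-full.xml -Dappformer.experimental.features=true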
| null | https://docs.redhat.com/en/documentation/red_hat_decision_manager/7.13/html/release_notes_for_red_hat_decision_manager_7.13/rn-tech-preview-con |