title | content | commands | url
---|---|---|---|
Chapter 3. Getting Red Hat Quay release notifications | Chapter 3. Getting Red Hat Quay release notifications To keep up with the latest Red Hat Quay releases and other changes related to Red Hat Quay, you can sign up for update notifications on the Red Hat Customer Portal . After signing up for notifications, you will receive notifications letting you know when there is a new Red Hat Quay version, updated documentation, or other Red Hat Quay news. Log in to the Red Hat Customer Portal with your Red Hat customer account credentials. Select your user name (upper-right corner) to see Red Hat Account and Customer Portal selections: Select Notifications. Your profile activity page appears. Select the Notifications tab. Select Manage Notifications. Select Follow, then choose Products from the drop-down box. From the drop-down box next to Products, search for and select Red Hat Quay: Select the SAVE NOTIFICATION button. Going forward, you will receive notifications when there are changes to the Red Hat Quay product, such as a new release. | null | https://docs.redhat.com/en/documentation/red_hat_quay/3/html/manage_red_hat_quay/release-notifications |
Chapter 2. ClusterRoleBinding [authorization.openshift.io/v1] | Chapter 2. ClusterRoleBinding [authorization.openshift.io/v1] Description ClusterRoleBinding references a ClusterRole, but does not contain it. It can reference any ClusterRole in the same namespace or in the global namespace. It adds who information via (Users and Groups) OR Subjects and namespace information by which namespace it exists in. ClusterRoleBindings in a given namespace only have effect in that namespace (excepting the master namespace which has power in all namespaces). Compatibility level 1: Stable within a major release for a minimum of 12 months or 3 minor releases (whichever is longer). Type object Required subjects roleRef 2.1. Specification Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources groupNames array (string) GroupNames holds all the groups directly bound to the role. This field should only be specified when supporting legacy clients and servers. See Subjects for further details. kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ObjectMeta metadata is the standard object's metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata roleRef ObjectReference RoleRef can only reference the current namespace and the global namespace. If the ClusterRoleRef cannot be resolved, the Authorizer must return an error. Since Policy is a singleton, this is sufficient knowledge to locate a role. subjects array (ObjectReference) Subjects hold object references to authorize with this rule. This field is ignored if UserNames or GroupNames are specified to support legacy clients and servers. Thus newer clients that do not need to support backwards compatibility should send only fully qualified Subjects and should omit the UserNames and GroupNames fields. Clients that need to support backwards compatibility can use this field to build the UserNames and GroupNames. userNames array (string) UserNames holds all the usernames directly bound to the role. This field should only be specified when supporting legacy clients and servers. See Subjects for further details. 2.2. API endpoints The following API endpoints are available: /apis/authorization.openshift.io/v1/clusterrolebindings GET : list objects of kind ClusterRoleBinding POST : create a ClusterRoleBinding /apis/authorization.openshift.io/v1/clusterrolebindings/{name} DELETE : delete a ClusterRoleBinding GET : read the specified ClusterRoleBinding PATCH : partially update the specified ClusterRoleBinding PUT : replace the specified ClusterRoleBinding 2.2.1. /apis/authorization.openshift.io/v1/clusterrolebindings HTTP method GET Description list objects of kind ClusterRoleBinding Table 2.1. HTTP responses HTTP code Response body 200 - OK ClusterRoleBindingList schema 401 - Unauthorized Empty HTTP method POST Description create a ClusterRoleBinding Table 2.2. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. 
An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 2.3. Body parameters Parameter Type Description body ClusterRoleBinding schema Table 2.4. HTTP responses HTTP code Response body 200 - OK ClusterRoleBinding schema 201 - Created ClusterRoleBinding schema 202 - Accepted ClusterRoleBinding schema 401 - Unauthorized Empty 2.2.2. /apis/authorization.openshift.io/v1/clusterrolebindings/{name} Table 2.5. Global path parameters Parameter Type Description name string name of the ClusterRoleBinding HTTP method DELETE Description delete a ClusterRoleBinding Table 2.6. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed Table 2.7. HTTP responses HTTP code Response body 200 - OK Status schema 202 - Accepted Status schema 401 - Unauthorized Empty HTTP method GET Description read the specified ClusterRoleBinding Table 2.8. HTTP responses HTTP code Response body 200 - OK ClusterRoleBinding schema 401 - Unauthorized Empty HTTP method PATCH Description partially update the specified ClusterRoleBinding Table 2.9. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. 
The error returned from the server will contain all unknown and duplicate fields encountered. Table 2.10. HTTP responses HTTP code Response body 200 - OK ClusterRoleBinding schema 201 - Created ClusterRoleBinding schema 401 - Unauthorized Empty HTTP method PUT Description replace the specified ClusterRoleBinding Table 2.11. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 2.12. Body parameters Parameter Type Description body ClusterRoleBinding schema Table 2.13. HTTP responses HTTP code Response body 200 - OK ClusterRoleBinding schema 201 - Created ClusterRoleBinding schema 401 - Unauthorized Empty | null | https://docs.redhat.com/en/documentation/openshift_container_platform/4.15/html/role_apis/clusterrolebinding-authorization-openshift-io-v1 |
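To make the preceding schema concrete, here is a minimal sketch of a ClusterRoleBinding manifest for the authorization.openshift.io/v1 API group. It uses only the properties listed in the specification table; the binding name, role name, user, and group are hypothetical placeholders, not values taken from the documentation.

```yaml
apiVersion: authorization.openshift.io/v1
kind: ClusterRoleBinding
metadata:
  name: example-admins-binding        # hypothetical binding name
roleRef:
  name: cluster-admin                 # the ClusterRole this binding references
subjects:                             # fully qualified subjects, preferred for newer clients
- kind: User
  name: alice
- kind: Group
  name: example-admins
userNames:                            # legacy field; normally derived from subjects
- alice
groupNames:                           # legacy field; normally derived from subjects
- example-admins
```

A manifest like this could serve as the request body for the POST and PUT endpoints under /apis/authorization.openshift.io/v1/clusterrolebindings described above.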
Chapter 6. Recommended minimum hardware requirements for the Red Hat Ceph Storage Dashboard | Chapter 6. Recommended minimum hardware requirements for the Red Hat Ceph Storage Dashboard The Red Hat Ceph Storage Dashboard has minimum hardware requirements. Minimum requirements 4 core processor at 2.5 GHz or higher 8 GB RAM 50 GB hard disk drive 1 Gigabit Ethernet network interface Additional Resources For more information, see High-level monitoring of a Ceph storage cluster in the Administration Guide . | null | https://docs.redhat.com/en/documentation/red_hat_ceph_storage/8/html/hardware_guide/recommended-minimum-hardware-requirements-for-the-red-hat-ceph-storage-dashboard_hw |
Making open source more inclusive | Making open source more inclusive Red Hat is committed to replacing problematic language in our code, documentation, and web properties. We are beginning with these four terms: master, slave, blacklist, and whitelist. Because of the enormity of this endeavor, these changes will be implemented gradually over several upcoming releases. For more details, see our CTO Chris Wright's message . | null | https://docs.redhat.com/en/documentation/migration_toolkit_for_applications/7.1/html/release_notes/making-open-source-more-inclusive |
Chapter 4. Configuring and deploying the OpenTelemetry instrumentation injection | Chapter 4. Configuring and deploying the OpenTelemetry instrumentation injection Important OpenTelemetry instrumentation injection is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope . The Red Hat build of OpenTelemetry Operator uses a custom resource definition (CRD) file that defines the configuration of the instrumentation. 4.1. OpenTelemetry instrumentation configuration options The Red Hat build of OpenTelemetry can inject and configure the OpenTelemetry auto-instrumentation libraries into your workloads. Currently, the project supports injection of the instrumentation libraries from Go, Java, Node.js, Python, .NET, and the Apache HTTP Server ( httpd ). Auto-instrumentation in OpenTelemetry refers to the capability where the framework automatically instruments an application without manual code changes. This enables developers and administrators to get observability into their applications with minimal effort and changes to the existing codebase. Important The Red Hat build of OpenTelemetry Operator only supports the injection mechanism of the instrumentation libraries but does not support instrumentation libraries or upstream images. Customers can build their own instrumentation images or use community images. 4.1.1. Instrumentation options Instrumentation options are specified in an Instrumentation custom resource. Sample Instrumentation custom resource file apiVersion: opentelemetry.io/v1alpha1 kind: Instrumentation metadata: name: java-instrumentation spec: env: - name: OTEL_EXPORTER_OTLP_TIMEOUT value: "20" exporter: endpoint: http://production-collector.observability.svc.cluster.local:4317 propagators: - w3c sampler: type: parentbased_traceidratio argument: "0.25" java: env: - name: OTEL_JAVAAGENT_DEBUG value: "true" Table 4.1. Parameters used by the Operator to define the Instrumentation Parameter Description Values Common environment variables to define across all the instrumentations. Exporter configuration. Propagators defines inter-process context propagation configuration. tracecontext , baggage , b3 , b3multi , jaeger , ottrace , none Resource attributes configuration. Sampling configuration. Configuration for the Apache HTTP Server instrumentation. Configuration for the .NET instrumentation. Configuration for the Go instrumentation. Configuration for the Java instrumentation. Configuration for the Node.js instrumentation. Configuration for the Python instrumentation. 4.1.2. Using the instrumentation CR with Service Mesh When using the instrumentation custom resource (CR) with Red Hat OpenShift Service Mesh, you must use the b3multi propagator. 4.1.2.1. Configuration of the Apache HTTP Server auto-instrumentation Table 4.2. Parameters for the .spec.apacheHttpd field Name Description Default Attributes specific to the Apache HTTP Server. Location of the Apache HTTP Server configuration. /usr/local/apache2/conf Environment variables specific to the Apache HTTP Server. 
Container image with the Apache SDK and auto-instrumentation. The compute resource requirements. Apache HTTP Server version. 2.4 The PodSpec annotation to enable injection instrumentation.opentelemetry.io/inject-apache-httpd: "true" 4.1.2.2. Configuration of the .NET auto-instrumentation Name Description Environment variables specific to .NET. Container image with the .NET SDK and auto-instrumentation. The compute resource requirements. For the .NET auto-instrumentation, the required OTEL_EXPORTER_OTLP_ENDPOINT environment variable must be set if the endpoint of the exporters is set to 4317 . The .NET autoinstrumentation uses http/proto by default, and the telemetry data must be set to the 4318 port. The PodSpec annotation to enable injection instrumentation.opentelemetry.io/inject-dotnet: "true" 4.1.2.3. Configuration of the Go auto-instrumentation Name Description Environment variables specific to Go. Container image with the Go SDK and auto-instrumentation. The compute resource requirements. The PodSpec annotation to enable injection instrumentation.opentelemetry.io/inject-go: "true" Additional permissions required for the Go auto-instrumentation in the OpenShift cluster apiVersion: security.openshift.io/v1 kind: SecurityContextConstraints metadata: name: otel-go-instrumentation-scc allowHostDirVolumePlugin: true allowPrivilegeEscalation: true allowPrivilegedContainer: true allowedCapabilities: - "SYS_PTRACE" fsGroup: type: RunAsAny runAsUser: type: RunAsAny seLinuxContext: type: RunAsAny seccompProfiles: - '*' supplementalGroups: type: RunAsAny Tip The CLI command for applying the permissions for the Go auto-instrumentation in the OpenShift cluster is as follows: USD oc adm policy add-scc-to-user otel-go-instrumentation-scc -z <service_account> 4.1.2.4. Configuration of the Java auto-instrumentation Name Description Environment variables specific to Java. Container image with the Java SDK and auto-instrumentation. The compute resource requirements. The PodSpec annotation to enable injection instrumentation.opentelemetry.io/inject-java: "true" 4.1.2.5. Configuration of the Node.js auto-instrumentation Name Description Environment variables specific to Node.js. Container image with the Node.js SDK and auto-instrumentation. The compute resource requirements. The PodSpec annotations to enable injection instrumentation.opentelemetry.io/inject-nodejs: "true" instrumentation.opentelemetry.io/otel-go-auto-target-exe: "/path/to/container/executable" The instrumentation.opentelemetry.io/otel-go-auto-target-exe annotation sets the value for the required OTEL_GO_AUTO_TARGET_EXE environment variable. 4.1.2.6. Configuration of the Python auto-instrumentation Name Description Environment variables specific to Python. Container image with the Python SDK and auto-instrumentation. The compute resource requirements. For Python auto-instrumentation, the OTEL_EXPORTER_OTLP_ENDPOINT environment variable must be set if the endpoint of the exporters is set to 4317 . Python auto-instrumentation uses http/proto by default, and the telemetry data must be set to the 4318 port. The PodSpec annotation to enable injection instrumentation.opentelemetry.io/inject-python: "true" 4.1.2.7. Configuration of the OpenTelemetry SDK variables The OpenTelemetry SDK variables in your pod are configurable by using the following annotation: instrumentation.opentelemetry.io/inject-sdk: "true" Note that all the annotations accept the following values: true Injects the Instrumentation resource from the namespace. 
false Does not inject any instrumentation. instrumentation-name The name of the instrumentation resource to inject from the current namespace. other-namespace/instrumentation-name The name of the instrumentation resource to inject from another namespace. 4.1.2.8. Multi-container pods The instrumentation is run on the first container that is available by default according to the pod specification. In some cases, you can also specify target containers for injection. Pod annotation instrumentation.opentelemetry.io/container-names: "<container_1>,<container_2>" Note The Go auto-instrumentation does not support multi-container auto-instrumentation injection. | [
"apiVersion: opentelemetry.io/v1alpha1 kind: Instrumentation metadata: name: java-instrumentation spec: env: - name: OTEL_EXPORTER_OTLP_TIMEOUT value: \"20\" exporter: endpoint: http://production-collector.observability.svc.cluster.local:4317 propagators: - w3c sampler: type: parentbased_traceidratio argument: \"0.25\" java: env: - name: OTEL_JAVAAGENT_DEBUG value: \"true\"",
"env",
"exporter",
"propagators",
"resource",
"sampler",
"apacheHttpd",
"dotnet",
"go",
"java",
"nodejs",
"python",
"attrs",
"configPath",
"env",
"image",
"resourceRequirements",
"version",
"instrumentation.opentelemetry.io/inject-apache-httpd: \"true\"",
"env",
"image",
"resourceRequirements",
"instrumentation.opentelemetry.io/inject-dotnet: \"true\"",
"env",
"image",
"resourceRequirements",
"instrumentation.opentelemetry.io/inject-go: \"true\"",
"apiVersion: security.openshift.io/v1 kind: SecurityContextConstraints metadata: name: otel-go-instrumentation-scc allowHostDirVolumePlugin: true allowPrivilegeEscalation: true allowPrivilegedContainer: true allowedCapabilities: - \"SYS_PTRACE\" fsGroup: type: RunAsAny runAsUser: type: RunAsAny seLinuxContext: type: RunAsAny seccompProfiles: - '*' supplementalGroups: type: RunAsAny",
"oc adm policy add-scc-to-user otel-go-instrumentation-scc -z <service_account>",
"env",
"image",
"resourceRequirements",
"instrumentation.opentelemetry.io/inject-java: \"true\"",
"env",
"image",
"resourceRequirements",
"instrumentation.opentelemetry.io/inject-nodejs: \"true\" instrumentation.opentelemetry.io/otel-go-auto-target-exe: \"/path/to/container/executable\"",
"env",
"image",
"resourceRequirements",
"instrumentation.opentelemetry.io/inject-python: \"true\"",
"instrumentation.opentelemetry.io/inject-sdk: \"true\"",
"instrumentation.opentelemetry.io/container-names: \"<container_1>,<container_2>\""
]
| https://docs.redhat.com/en/documentation/openshift_container_platform/4.11/html/red_hat_build_of_opentelemetry/otel-instrumentation |
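As an illustration of the injection annotations described in the preceding section, the following is a minimal sketch of a Deployment whose pod template requests Java auto-instrumentation. The workload name and container image are hypothetical placeholders; the annotation key is the one documented for Java injection, and the value "true" tells the Operator to inject the Instrumentation resource from the same namespace, such as the java-instrumentation sample shown earlier.

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: example-java-app                         # hypothetical workload name
spec:
  replicas: 1
  selector:
    matchLabels:
      app: example-java-app
  template:
    metadata:
      labels:
        app: example-java-app
      annotations:
        # Request injection of the Java auto-instrumentation libraries.
        # "true" selects the Instrumentation resource defined in this namespace.
        instrumentation.opentelemetry.io/inject-java: "true"
    spec:
      containers:
      - name: app
        image: quay.io/example/java-app:latest   # hypothetical image
```

Per the annotation values listed above, replacing "true" with an instrumentation name, or with other-namespace/instrumentation-name, selects a specific Instrumentation resource, and the instrumentation.opentelemetry.io/container-names annotation can narrow injection to particular containers in a multi-container pod.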
2.3. Configuring Cluster Components | 2.3. Configuring Cluster Components To configure the components and attributes of a cluster, click on the name of the cluster displayed on the Manage Clusters screen. This brings up the Nodes page, as described in Section 2.3.1, "Cluster Nodes" . This page displays a menu along the top of the page, as shown in Figure 2.3, "Cluster Components Menu" , with the following entries: Nodes , as described in Section 2.3.1, "Cluster Nodes" Resources , as described in Section 2.3.2, "Cluster Resources" Fence Devices , as described in Section 2.3.3, "Fence Devices" ACLs , as described in Section 2.3.4, "Configuring ACLs" Cluster Properties , as described in Section 2.3.5, "Cluster Properties" Figure 2.3. Cluster Components Menu 2.3.1. Cluster Nodes Selecting the Nodes option from the menu along the top of the cluster management page displays the currently configured nodes and the status of the currently selected node, including which resources are running on the node and the resource location preferences. This is the default page that displays when you select a cluster from the Manage Clusters screen. You can add or remove nodes from this page, and you can start, stop, restart, or put a node in standby mode. For information on standby mode, see Section 4.4.5, "Standby Mode" . You can also configure fence devices directly from this page, as described in Section 2.3.3, "Fence Devices" , by selecting Configure Fencing . 2.3.2. Cluster Resources Selecting the Resources option from the menu along the top of the cluster management page displays the currently configured resources for the cluster, organized according to resource groups. Selecting a group or a resource displays the attributes of that group or resource. From this screen, you can add or remove resources, you can edit the configuration of existing resources, and you can create a resource group. To add a new resource to the cluster, click Add . This brings up the Add Resource screen. When you select a resource type from the drop-down Type menu, the arguments you must specify for that resource appear in the menu. You can click Optional Arguments to display additional arguments you can specify for the resource you are defining. After entering the parameters for the resource you are creating, click Create Resource . When configuring the arguments for a resource, a brief description of the argument appears in the menu. If you move the cursor to the field, a longer help description of that argument is displayed. You can define a resource as a cloned resource, or as a master/slave resource. For information on these resource types, see Chapter 9, Advanced Configuration . Once you have created at least one resource, you can create a resource group. For information on resource groups, see Section 6.5, "Resource Groups" . To create a resource group, select a resource that will be part of the group from the Resources screen, then click Create Group . This displays the Create Group screen. Enter a group name and click Create Group . This returns you to the Resources screen, which now displays the group name for the resource. After you have created a resource group, you can indicate that group name as a resource parameter when you create or modify additional resources. 2.3.3. Fence Devices Selecting the Fence Devices option from the menu along the top of the cluster management page displays the Fence Devices screen, showing the currently configured fence devices. To add a new fence device to the cluster, click Add . 
This brings up the Add Fence Device screen. When you select a fence device type from the drop-down Type menu, the arguments you must specify for that fence device appear in the menu. You can click on Optional Arguments to display additional arguments you can specify for the fence device you are defining. After entering the parameters for the new fence device, click Create Fence Instance . For information on configuring fence devices with Pacemaker, see Chapter 5, Fencing: Configuring STONITH . 2.3.4. Configuring ACLs Selecting the ACLs option from the menu along the top of the cluster management page displays a screen from which you can set permissions for local users, allowing read-only or read-write access to the cluster configuration by using access control lists (ACLs). To assign ACL permissions, you create a role and define the access permissions for that role. Each role can have an unlimited number of permissions (read/write/deny) applied to either an XPath query or the ID of a specific element. After defining the role, you can assign it to an existing user or group. 2.3.5. Cluster Properties Selecting the Cluster Properties option from the menu along the top of the cluster management page displays the cluster properties and allows you to modify these properties from their default values. For information on the Pacemaker cluster properties, see Chapter 12, Pacemaker Cluster Properties . | null | https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/high_availability_add-on_reference/s1-guiclustcomponents-haar |
Providing feedback on Red Hat documentation | Providing feedback on Red Hat documentation We appreciate your feedback on our documentation. Let us know how we can improve it. Submitting feedback through Jira (account required) Log in to the Jira website. Click Create in the top navigation bar. Enter a descriptive title in the Summary field. Enter your suggestion for improvement in the Description field. Include links to the relevant parts of the documentation. Click Create at the bottom of the dialogue. | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/8/html/8.7_release_notes/proc_providing-feedback-on-red-hat-documentation_release-notes |
Appendix G. Ceph Monitor and OSD configuration options | Appendix G. Ceph Monitor and OSD configuration options When modifying heartbeat settings, include them in the [global] section of the Ceph configuration file. mon_osd_min_up_ratio Description The minimum ratio of up Ceph OSD Daemons before Ceph will mark Ceph OSD Daemons down . Type Double Default .3 mon_osd_min_in_ratio Description The minimum ratio of in Ceph OSD Daemons before Ceph will mark Ceph OSD Daemons out . Type Double Default 0.750000 mon_osd_laggy_halflife Description The number of seconds laggy estimates will decay. Type Integer Default 60*60 mon_osd_laggy_weight Description The weight for new samples in laggy estimation decay. Type Double Default 0.3 mon_osd_laggy_max_interval Description Maximum value of laggy_interval in laggy estimations (in seconds). The monitor uses an adaptive approach to evaluate the laggy_interval of a certain OSD. This value will be used to calculate the grace time for that OSD. Type Integer Default 300 mon_osd_adjust_heartbeat_grace Description If set to true , Ceph will scale based on laggy estimations. Type Boolean Default true mon_osd_adjust_down_out_interval Description If set to true , Ceph will scale based on laggy estimations. Type Boolean Default true mon_osd_auto_mark_in Description Ceph will mark any booting Ceph OSD Daemons as in the Ceph Storage Cluster. Type Boolean Default false mon_osd_auto_mark_auto_out_in Description Ceph will mark booting Ceph OSD Daemons auto marked out of the Ceph Storage Cluster as in the cluster. Type Boolean Default true mon_osd_auto_mark_new_in Description Ceph will mark booting new Ceph OSD Daemons as in the Ceph Storage Cluster. Type Boolean Default true mon_osd_down_out_interval Description The number of seconds Ceph waits before marking a Ceph OSD Daemon down and out if it does not respond. Type 32-bit Integer Default 600 mon_osd_downout_subtree_limit Description The largest CRUSH unit type that Ceph will automatically mark out . Type String Default rack mon_osd_reporter_subtree_level Description This setting defines the parent CRUSH unit type for the reporting OSDs. The OSDs send failure reports to the monitor if they find an unresponsive peer. The monitor may mark the reported OSD down and then out after a grace period. Type String Default host mon_osd_report_timeout Description The grace period in seconds before declaring unresponsive Ceph OSD Daemons down . Type 32-bit Integer Default 900 mon_osd_min_down_reporters Description The minimum number of Ceph OSD Daemons required to report a down Ceph OSD Daemon. Type 32-bit Integer Default 2 osd_heartbeat_address Description A Ceph OSD Daemon's network address for heartbeats. Type Address Default The host address. osd_heartbeat_interval Description How often a Ceph OSD Daemon pings its peers (in seconds). Type 32-bit Integer Default 6 osd_heartbeat_grace Description The elapsed time when a Ceph OSD Daemon has not shown a heartbeat that the Ceph Storage Cluster considers it down . Type 32-bit Integer Default 20 osd_mon_heartbeat_interval Description How often the Ceph OSD Daemon pings a Ceph Monitor if it has no Ceph OSD Daemon peers. Type 32-bit Integer Default 30 osd_mon_report_interval_max Description The maximum time in seconds that a Ceph OSD Daemon can wait before it must report to a Ceph Monitor. 
Type 32-bit Integer Default 120 osd_mon_report_interval_min Description The minimum number of seconds a Ceph OSD Daemon may wait from startup or another reportable event before reporting to a Ceph Monitor. Type 32-bit Integer Default 5 Valid Range Should be less than osd mon report interval max osd_mon_ack_timeout Description The number of seconds to wait for a Ceph Monitor to acknowledge a request for statistics. Type 32-bit Integer Default 30 | null | https://docs.redhat.com/en/documentation/red_hat_ceph_storage/8/html/configuration_guide/ceph-monitor-and-osd-configuration-options_conf |
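As the appendix above notes, heartbeat-related settings belong in the [global] section of the Ceph configuration file. The following is a minimal, illustrative sketch that restates a few of the documented options with their default values; it is not a tuning recommendation.

```ini
[global]
# Seconds Ceph waits before marking an unresponsive OSD "down" and "out" (default 600)
mon_osd_down_out_interval = 600
# Minimum number of OSDs that must report a peer down before the monitor acts (default 2)
mon_osd_min_down_reporters = 2
# How often an OSD pings its peers, in seconds (default 6)
osd_heartbeat_interval = 6
# Seconds without a heartbeat before an OSD is considered down (default 20)
osd_heartbeat_grace = 20
```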
Chapter 7. Installing a cluster on Azure with network customizations | Chapter 7. Installing a cluster on Azure with network customizations In OpenShift Container Platform version 4.12, you can install a cluster with a customized network configuration on infrastructure that the installation program provisions on Microsoft Azure. By customizing your network configuration, your cluster can coexist with existing IP address allocations in your environment and integrate with existing MTU and VXLAN configurations. You must set most of the network configuration parameters during installation, and you can modify only kubeProxy configuration parameters in a running cluster. 7.1. Prerequisites You reviewed details about the OpenShift Container Platform installation and update processes. You read the documentation on selecting a cluster installation method and preparing it for users . You configured an Azure account to host the cluster and determined the tested and validated region to deploy the cluster to. If you use a firewall, you configured it to allow the sites that your cluster requires access to. If the cloud identity and access management (IAM) APIs are not accessible in your environment, or if you do not want to store an administrator-level credential secret in the kube-system namespace, you can manually create and maintain IAM credentials . Manual mode can also be used in environments where the cloud IAM APIs are not reachable. If you use customer-managed encryption keys, you prepared your Azure environment for encryption . 7.2. Internet access for OpenShift Container Platform In OpenShift Container Platform 4.12, you require access to the internet to install your cluster. You must have internet access to: Access OpenShift Cluster Manager Hybrid Cloud Console to download the installation program and perform subscription management. If the cluster has internet access and you do not disable Telemetry, that service automatically entitles your cluster. Access Quay.io to obtain the packages that are required to install your cluster. Obtain the packages that are required to perform cluster updates. Important If your cluster cannot have direct internet access, you can perform a restricted network installation on some types of infrastructure that you provision. During that process, you download the required content and use it to populate a mirror registry with the installation packages. With some installation types, the environment that you install your cluster in will not require internet access. Before you update the cluster, you update the content of the mirror registry. 7.3. Generating a key pair for cluster node SSH access During an OpenShift Container Platform installation, you can provide an SSH public key to the installation program. The key is passed to the Red Hat Enterprise Linux CoreOS (RHCOS) nodes through their Ignition config files and is used to authenticate SSH access to the nodes. The key is added to the ~/.ssh/authorized_keys list for the core user on each node, which enables password-less authentication. After the key is passed to the nodes, you can use the key pair to SSH in to the RHCOS nodes as the user core . To access the nodes through SSH, the private key identity must be managed by SSH for your local user. If you want to SSH in to your cluster nodes to perform installation debugging or disaster recovery, you must provide the SSH public key during the installation process. 
The ./openshift-install gather command also requires the SSH public key to be in place on the cluster nodes. Important Do not skip this procedure in production environments, where disaster recovery and debugging are required. Note You must use a local key, not one that you configured with platform-specific approaches such as AWS key pairs . Procedure If you do not have an existing SSH key pair on your local machine to use for authentication onto your cluster nodes, create one. For example, on a computer that uses a Linux operating system, run the following command: USD ssh-keygen -t ed25519 -N '' -f <path>/<file_name> 1 1 Specify the path and file name, such as ~/.ssh/id_ed25519 , of the new SSH key. If you have an existing key pair, ensure your public key is in your ~/.ssh directory. Note If you plan to install an OpenShift Container Platform cluster that uses FIPS validated or Modules In Process cryptographic libraries on the x86_64 , ppc64le , and s390x architectures, do not create a key that uses the ed25519 algorithm. Instead, create a key that uses the rsa or ecdsa algorithm. View the public SSH key: USD cat <path>/<file_name>.pub For example, run the following to view the ~/.ssh/id_ed25519.pub public key: USD cat ~/.ssh/id_ed25519.pub Add the SSH private key identity to the SSH agent for your local user, if it has not already been added. SSH agent management of the key is required for password-less SSH authentication onto your cluster nodes, or if you want to use the ./openshift-install gather command. Note On some distributions, default SSH private key identities such as ~/.ssh/id_rsa and ~/.ssh/id_dsa are managed automatically. If the ssh-agent process is not already running for your local user, start it as a background task: USD eval "USD(ssh-agent -s)" Example output Agent pid 31874 Note If your cluster is in FIPS mode, only use FIPS-compliant algorithms to generate the SSH key. The key must be either RSA or ECDSA. Add your SSH private key to the ssh-agent : USD ssh-add <path>/<file_name> 1 1 Specify the path and file name for your SSH private key, such as ~/.ssh/id_ed25519 Example output Identity added: /home/<you>/<path>/<file_name> (<computer_name>) Next steps When you install OpenShift Container Platform, provide the SSH public key to the installation program. 7.4. Obtaining the installation program Before you install OpenShift Container Platform, download the installation file on the host you are using for installation. Prerequisites You have a computer that runs Linux or macOS, with 500 MB of local disk space. Procedure Access the Infrastructure Provider page on the OpenShift Cluster Manager site. If you have a Red Hat account, log in with your credentials. If you do not, create an account. Select your infrastructure provider. Navigate to the page for your installation type, download the installation program that corresponds with your host operating system and architecture, and place the file in the directory where you will store the installation configuration files. Important The installation program creates several files on the computer that you use to install your cluster. You must keep the installation program and the files that the installation program creates after you finish installing the cluster. Both files are required to delete the cluster. Important Deleting the files created by the installation program does not remove your cluster, even if the cluster failed during installation. 
To remove your cluster, complete the OpenShift Container Platform uninstallation procedures for your specific cloud provider. Extract the installation program. For example, on a computer that uses a Linux operating system, run the following command: USD tar -xvf openshift-install-linux.tar.gz Download your installation pull secret from the Red Hat OpenShift Cluster Manager . This pull secret allows you to authenticate with the services that are provided by the included authorities, including Quay.io, which serves the container images for OpenShift Container Platform components. 7.5. Creating the installation configuration file You can customize the OpenShift Container Platform cluster you install on Microsoft Azure. Prerequisites Obtain the OpenShift Container Platform installation program and the pull secret for your cluster. Obtain service principal permissions at the subscription level. Procedure Create the install-config.yaml file. Change to the directory that contains the installation program and run the following command: USD ./openshift-install create install-config --dir <installation_directory> 1 1 For <installation_directory> , specify the directory name to store the files that the installation program creates. When specifying the directory: Verify that the directory has the execute permission. This permission is required to run Terraform binaries under the installation directory. Use an empty directory. Some installation assets, such as bootstrap X.509 certificates, have short expiration intervals, therefore you must not reuse an installation directory. If you want to reuse individual files from another cluster installation, you can copy them into your directory. However, the file names for the installation assets might change between releases. Use caution when copying installation files from an earlier OpenShift Container Platform version. At the prompts, provide the configuration details for your cloud: Optional: Select an SSH key to use to access your cluster machines. Note For production OpenShift Container Platform clusters on which you want to perform installation debugging or disaster recovery, specify an SSH key that your ssh-agent process uses. Select azure as the platform to target. If you do not have a Microsoft Azure profile stored on your computer, specify the following Azure parameter values for your subscription and service principal: azure subscription id : The subscription ID to use for the cluster. Specify the id value in your account output. azure tenant id : The tenant ID. Specify the tenantId value in your account output. azure service principal client id : The value of the appId parameter for the service principal. azure service principal client secret : The value of the password parameter for the service principal. Select the region to deploy the cluster to. Select the base domain to deploy the cluster to. The base domain corresponds to the Azure DNS Zone that you created for your cluster. Enter a descriptive name for your cluster. Important All Azure resources that are available through public endpoints are subject to resource name restrictions, and you cannot create resources that use certain terms. For a list of terms that Azure restricts, see Resolve reserved resource name errors in the Azure documentation. Paste the pull secret from the Red Hat OpenShift Cluster Manager . Modify the install-config.yaml file. You can find more information about the available parameters in the "Installation configuration parameters" section. 
Back up the install-config.yaml file so that you can use it to install multiple clusters. Important The install-config.yaml file is consumed during the installation process. If you want to reuse the file, you must back it up now. 7.5.1. Installation configuration parameters Before you deploy an OpenShift Container Platform cluster, you provide parameter values to describe your account on the cloud platform that hosts your cluster and optionally customize your cluster's platform. When you create the install-config.yaml installation configuration file, you provide values for the required parameters through the command line. If you customize your cluster, you can modify the install-config.yaml file to provide more details about the platform. Note After installation, you cannot modify these parameters in the install-config.yaml file. 7.5.1.1. Required configuration parameters Required installation configuration parameters are described in the following table: Table 7.1. Required parameters Parameter Description Values apiVersion The API version for the install-config.yaml content. The current version is v1 . The installation program may also support older API versions. String baseDomain The base domain of your cloud provider. The base domain is used to create routes to your OpenShift Container Platform cluster components. The full DNS name for your cluster is a combination of the baseDomain and metadata.name parameter values that uses the <metadata.name>.<baseDomain> format. A fully-qualified domain or subdomain name, such as example.com . metadata Kubernetes resource ObjectMeta , from which only the name parameter is consumed. Object metadata.name The name of the cluster. DNS records for the cluster are all subdomains of {{.metadata.name}}.{{.baseDomain}} . String of lowercase letters, hyphens ( - ), and periods ( . ), such as dev . platform The configuration for the specific platform upon which to perform the installation: alibabacloud , aws , baremetal , azure , gcp , ibmcloud , nutanix , openstack , ovirt , vsphere , or {} . For additional information about platform.<platform> parameters, consult the table for your specific platform that follows. Object pullSecret Get a pull secret from the Red Hat OpenShift Cluster Manager to authenticate downloading container images for OpenShift Container Platform components from services such as Quay.io. { "auths":{ "cloud.openshift.com":{ "auth":"b3Blb=", "email":"[email protected]" }, "quay.io":{ "auth":"b3Blb=", "email":"[email protected]" } } } 7.5.1.2. Network configuration parameters You can customize your installation configuration based on the requirements of your existing network infrastructure. For example, you can expand the IP address block for the cluster network or provide different IP address blocks than the defaults. Only IPv4 addresses are supported. Note Globalnet is not supported with Red Hat OpenShift Data Foundation disaster recovery solutions. For regional disaster recovery scenarios, ensure that you use a nonoverlapping range of private IP addresses for the cluster and service networks in each cluster. Table 7.2. Network parameters Parameter Description Values networking The configuration for the cluster network. Object Note You cannot modify parameters specified by the networking object after installation. networking.networkType The Red Hat OpenShift Networking network plugin to install. Either OpenShiftSDN or OVNKubernetes . OpenShiftSDN is a CNI plugin for all-Linux networks. 
OVNKubernetes is a CNI plugin for Linux networks and hybrid networks that contain both Linux and Windows servers. The default value is OVNKubernetes . networking.clusterNetwork The IP address blocks for pods. The default value is 10.128.0.0/14 with a host prefix of /23 . If you specify multiple IP address blocks, the blocks must not overlap. An array of objects. For example: networking: clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23 networking.clusterNetwork.cidr Required if you use networking.clusterNetwork . An IP address block. An IPv4 network. An IP address block in Classless Inter-Domain Routing (CIDR) notation. The prefix length for an IPv4 block is between 0 and 32 . networking.clusterNetwork.hostPrefix The subnet prefix length to assign to each individual node. For example, if hostPrefix is set to 23 then each node is assigned a /23 subnet out of the given cidr . A hostPrefix value of 23 provides 510 (2^(32 - 23) - 2) pod IP addresses. A subnet prefix. The default value is 23 . networking.serviceNetwork The IP address block for services. The default value is 172.30.0.0/16 . The OpenShift SDN and OVN-Kubernetes network plugins support only a single IP address block for the service network. An array with an IP address block in CIDR format. For example: networking: serviceNetwork: - 172.30.0.0/16 networking.machineNetwork The IP address blocks for machines. If you specify multiple IP address blocks, the blocks must not overlap. An array of objects. For example: networking: machineNetwork: - cidr: 10.0.0.0/16 networking.machineNetwork.cidr Required if you use networking.machineNetwork . An IP address block. The default value is 10.0.0.0/16 for all platforms other than libvirt. For libvirt, the default value is 192.168.126.0/24 . An IP network block in CIDR notation. For example, 10.0.0.0/16 . Note Set the networking.machineNetwork to match the CIDR that the preferred NIC resides in. 7.5.1.3. Optional configuration parameters Optional installation configuration parameters are described in the following table: Table 7.3. Optional parameters Parameter Description Values additionalTrustBundle A PEM-encoded X.509 certificate bundle that is added to the nodes' trusted certificate store. This trust bundle may also be used when a proxy has been configured. String capabilities Controls the installation of optional core cluster components. You can reduce the footprint of your OpenShift Container Platform cluster by disabling optional components. For more information, see the "Cluster capabilities" page in Installing . String array capabilities.baselineCapabilitySet Selects an initial set of optional capabilities to enable. Valid values are None , v4.11 , v4.12 and vCurrent . The default value is vCurrent . String capabilities.additionalEnabledCapabilities Extends the set of optional capabilities beyond what you specify in baselineCapabilitySet . You may specify multiple capabilities in this parameter. String array compute The configuration for the machines that comprise the compute nodes. Array of MachinePool objects. compute.architecture Determines the instruction set architecture of the machines in the pool. Currently, clusters with varied architectures are not supported. All pools must specify the same architecture. Valid values are amd64 and arm64 . Not all installation options support the 64-bit ARM architecture. 
To verify if your installation option is supported on your platform, see Supported installation methods for different platforms in Selecting a cluster installation method and preparing it for users . String compute.hyperthreading Whether to enable or disable simultaneous multithreading, or hyperthreading , on compute machines. By default, simultaneous multithreading is enabled to increase the performance of your machines' cores. Important If you disable simultaneous multithreading, ensure that your capacity planning accounts for the dramatically decreased machine performance. Enabled or Disabled compute.name Required if you use compute . The name of the machine pool. worker compute.platform Required if you use compute . Use this parameter to specify the cloud provider to host the worker machines. This parameter value must match the controlPlane.platform parameter value. alibabacloud , aws , azure , gcp , ibmcloud , nutanix , openstack , ovirt , vsphere , or {} compute.replicas The number of compute machines, which are also known as worker machines, to provision. A positive integer greater than or equal to 2 . The default value is 3 . featureSet Enables the cluster for a feature set. A feature set is a collection of OpenShift Container Platform features that are not enabled by default. For more information about enabling a feature set during installation, see "Enabling features using feature gates". String. The name of the feature set to enable, such as TechPreviewNoUpgrade . controlPlane The configuration for the machines that comprise the control plane. Array of MachinePool objects. controlPlane.architecture Determines the instruction set architecture of the machines in the pool. Currently, clusters with varied architectures are not supported. All pools must specify the same architecture. Valid values are amd64 and arm64 . Not all installation options support the 64-bit ARM architecture. To verify if your installation option is supported on your platform, see Supported installation methods for different platforms in Selecting a cluster installation method and preparing it for users . String controlPlane.hyperthreading Whether to enable or disable simultaneous multithreading, or hyperthreading , on control plane machines. By default, simultaneous multithreading is enabled to increase the performance of your machines' cores. Important If you disable simultaneous multithreading, ensure that your capacity planning accounts for the dramatically decreased machine performance. Enabled or Disabled controlPlane.name Required if you use controlPlane . The name of the machine pool. master controlPlane.platform Required if you use controlPlane . Use this parameter to specify the cloud provider that hosts the control plane machines. This parameter value must match the compute.platform parameter value. alibabacloud , aws , azure , gcp , ibmcloud , nutanix , openstack , ovirt , vsphere , or {} controlPlane.replicas The number of control plane machines to provision. The only supported value is 3 , which is the default value. credentialsMode The Cloud Credential Operator (CCO) mode. If no mode is specified, the CCO dynamically tries to determine the capabilities of the provided credentials, with a preference for mint mode on the platforms where multiple modes are supported. Note Not all CCO modes are supported for all cloud providers. For more information about CCO modes, see the Cloud Credential Operator entry in the Cluster Operators reference content. 
Note If your AWS account has service control policies (SCP) enabled, you must configure the credentialsMode parameter to Mint , Passthrough or Manual . Mint , Passthrough , Manual or an empty string ( "" ). fips Enable or disable FIPS mode. The default is false (disabled). If FIPS mode is enabled, the Red Hat Enterprise Linux CoreOS (RHCOS) machines that OpenShift Container Platform runs on bypass the default Kubernetes cryptography suite and use the cryptography modules that are provided with RHCOS instead. Important To enable FIPS mode for your cluster, you must run the installation program from a Red Hat Enterprise Linux (RHEL) computer configured to operate in FIPS mode. For more information about configuring FIPS mode on RHEL, see Installing the system in FIPS mode . The use of FIPS validated or Modules In Process cryptographic libraries is only supported on OpenShift Container Platform deployments on the x86_64 , ppc64le , and s390x architectures. Note If you are using Azure File storage, you cannot enable FIPS mode. false or true imageContentSources Sources and repositories for the release-image content. Array of objects. Includes a source and, optionally, mirrors , as described in the following rows of this table. imageContentSources.source Required if you use imageContentSources . Specify the repository that users refer to, for example, in image pull specifications. String imageContentSources.mirrors Specify one or more repositories that may also contain the same images. Array of strings publish How to publish or expose the user-facing endpoints of your cluster, such as the Kubernetes API, OpenShift routes. Internal or External . To deploy a private cluster, which cannot be accessed from the internet, set publish to Internal . The default value is External . sshKey The SSH key to authenticate access to your cluster machines. Note For production OpenShift Container Platform clusters on which you want to perform installation debugging or disaster recovery, specify an SSH key that your ssh-agent process uses. For example, sshKey: ssh-ed25519 AAAA.. . 7.5.1.4. Additional Azure configuration parameters Additional Azure configuration parameters are described in the following table. Note By default, if you specify availability zones in the install-config.yaml file, the installation program distributes the control plane machines and the compute machines across these availability zones within a region . To ensure high availability for your cluster, select a region with at least three availability zones. If your region contains fewer than three availability zones, the installation program places more than one control plane machine in the available zones. Table 7.4. Additional Azure parameters Parameter Description Values compute.platform.azure.encryptionAtHost Enables host-level encryption for compute machines. You can enable this encryption alongside user-managed server-side encryption. This feature encrypts temporary, ephemeral, cached and un-managed disks on the VM host. This is not a prerequisite for user-managed server-side encryption. true or false . The default is false . compute.platform.azure.osDisk.diskSizeGB The Azure disk size for the VM. Integer that represents the size of the disk in GB. The default is 128 . compute.platform.azure.osDisk.diskType Defines the type of disk. standard_LRS , premium_LRS , or standardSSD_LRS . The default is premium_LRS . compute.platform.azure.ultraSSDCapability Enables the use of Azure ultra disks for persistent storage on compute nodes. 
This requires that your Azure region and zone have ultra disks available. Enabled , Disabled . The default is Disabled . compute.platform.azure.osDisk.diskEncryptionSet.resourceGroup The name of the Azure resource group that contains the disk encryption set from the installation prerequisites. This resource group should be different from the resource group where you install the cluster to avoid deleting your Azure encryption key when the cluster is destroyed. This value is only necessary if you intend to install the cluster with user-managed disk encryption. String, for example production_encryption_resource_group . compute.platform.azure.osDisk.diskEncryptionSet.name The name of the disk encryption set that contains the encryption key from the installation prerequisites. String, for example production_disk_encryption_set . compute.platform.azure.osDisk.diskEncryptionSet.subscriptionId Defines the Azure subscription of the disk encryption set where the disk encryption set resides. This secondary disk encryption set is used to encrypt compute machines. String, in the format 00000000-0000-0000-0000-000000000000 . compute.platform.azure.vmNetworkingType Enables accelerated networking. Accelerated networking enables single root I/O virtualization (SR-IOV) to a VM, improving its networking performance. If instance type of compute machines support Accelerated networking, by default, the installer enables Accelerated networking, otherwise the default networking type is Basic . Accelerated or Basic . compute.platform.azure.type Defines the Azure instance type for compute machines. String compute.platform.azure.zones The availability zones where the installation program creates compute machines. String list controlPlane.platform.azure.type Defines the Azure instance type for control plane machines. String controlPlane.platform.azure.zones The availability zones where the installation program creates control plane machines. String list platform.azure.defaultMachinePlatform.encryptionAtHost Enables host-level encryption for compute machines. You can enable this encryption alongside user-managed server-side encryption. This feature encrypts temporary, ephemeral, cached, and un-managed disks on the VM host. This parameter is not a prerequisite for user-managed server-side encryption. true or false . The default is false . platform.azure.defaultMachinePlatform.osDisk.diskEncryptionSet.name The name of the disk encryption set that contains the encryption key from the installation prerequisites. String, for example, production_disk_encryption_set . platform.azure.defaultMachinePlatform.osDisk.diskEncryptionSet.resourceGroup The name of the Azure resource group that contains the disk encryption set from the installation prerequisites. To avoid deleting your Azure encryption key when the cluster is destroyed, this resource group must be different from the resource group where you install the cluster. This value is necessary only if you intend to install the cluster with user-managed disk encryption. String, for example, production_encryption_resource_group . platform.azure.defaultMachinePlatform.osDisk.diskEncryptionSet.subscriptionId Defines the Azure subscription of the disk encryption set where the disk encryption set resides. This secondary disk encryption set is used to encrypt compute machines. String, in the format 00000000-0000-0000-0000-000000000000 . platform.azure.defaultMachinePlatform.osDisk.diskSizeGB The Azure disk size for the VM. Integer that represents the size of the disk in GB. 
The default is 128 . platform.azure.defaultMachinePlatform.osDisk.diskType Defines the type of disk. premium_LRS or standardSSD_LRS . The default is premium_LRS . platform.azure.defaultMachinePlatform.type The Azure instance type for control plane and compute machines. The Azure instance type. platform.azure.defaultMachinePlatform.zones The availability zones where the installation program creates compute and control plane machines. String list. controlPlane.platform.azure.encryptionAtHost Enables host-level encryption for control plane machines. You can enable this encryption alongside user-managed server-side encryption. This feature encrypts temporary, ephemeral, cached and un-managed disks on the VM host. This is not a prerequisite for user-managed server-side encryption. true or false . The default is false . controlPlane.platform.azure.osDisk.diskEncryptionSet.resourceGroup The name of the Azure resource group that contains the disk encryption set from the installation prerequisites. This resource group should be different from the resource group where you install the cluster to avoid deleting your Azure encryption key when the cluster is destroyed. This value is only necessary if you intend to install the cluster with user-managed disk encryption. String, for example production_encryption_resource_group . controlPlane.platform.azure.osDisk.diskEncryptionSet.name The name of the disk encryption set that contains the encryption key from the installation prerequisites. String, for example production_disk_encryption_set . controlPlane.platform.azure.osDisk.diskEncryptionSet.subscriptionId Defines the Azure subscription of the disk encryption set where the disk encryption set resides. This secondary disk encryption set is used to encrypt control plane machines. String, in the format 00000000-0000-0000-0000-000000000000 . controlPlane.platform.azure.osDisk.diskSizeGB The Azure disk size for the VM. Integer that represents the size of the disk in GB. The default is 1024 . controlPlane.platform.azure.osDisk.diskType Defines the type of disk. premium_LRS or standardSSD_LRS . The default is premium_LRS . controlPlane.platform.azure.ultraSSDCapability Enables the use of Azure ultra disks for persistent storage on control plane machines. This requires that your Azure region and zone have ultra disks available. Enabled , Disabled . The default is Disabled . controlPlane.platform.azure.vmNetworkingType Enables accelerated networking. Accelerated networking enables single root I/O virtualization (SR-IOV) to a VM, improving its networking performance. If instance type of control plane machines support Accelerated networking, by default, the installer enables Accelerated networking, otherwise the default networking type is Basic . Accelerated or Basic . platform.azure.baseDomainResourceGroupName The name of the resource group that contains the DNS zone for your base domain. String, for example production_cluster . platform.azure.resourceGroupName The name of an already existing resource group to install your cluster to. This resource group must be empty and only used for this specific cluster; the cluster components assume ownership of all resources in the resource group. If you limit the service principal scope of the installation program to this resource group, you must ensure all other resources used by the installation program in your environment have the necessary permissions, such as the public DNS zone and virtual network. 
Destroying the cluster by using the installation program deletes this resource group. String, for example existing_resource_group . platform.azure.outboundType The outbound routing strategy used to connect your cluster to the internet. If you are using user-defined routing, you must have pre-existing networking available where the outbound routing has already been configured prior to installing a cluster. The installation program is not responsible for configuring user-defined routing. LoadBalancer or UserDefinedRouting . The default is LoadBalancer . platform.azure.region The name of the Azure region that hosts your cluster. Any valid region name, such as centralus . platform.azure.zone List of availability zones to place machines in. For high availability, specify at least two zones. List of zones, for example ["1", "2", "3"] . platform.azure.defaultMachinePlatform.ultraSSDCapability Enables the use of Azure ultra disks for persistent storage on control plane and compute machines. This requires that your Azure region and zone have ultra disks available. Enabled , Disabled . The default is Disabled . platform.azure.networkResourceGroupName The name of the resource group that contains the existing VNet that you want to deploy your cluster to. This name cannot be the same as the platform.azure.baseDomainResourceGroupName . String. platform.azure.virtualNetwork The name of the existing VNet that you want to deploy your cluster to. String. platform.azure.controlPlaneSubnet The name of the existing subnet in your VNet that you want to deploy your control plane machines to. Valid CIDR, for example 10.0.0.0/16 . platform.azure.computeSubnet The name of the existing subnet in your VNet that you want to deploy your compute machines to. Valid CIDR, for example 10.0.0.0/16 . platform.azure.cloudName The name of the Azure cloud environment that is used to configure the Azure SDK with the appropriate Azure API endpoints. If empty, the default value AzurePublicCloud is used. Any valid cloud environment, such as AzurePublicCloud or AzureUSGovernmentCloud . platform.azure.defaultMachinePlatform.vmNetworkingType Enables accelerated networking. Accelerated networking enables single root I/O virtualization (SR-IOV) to a VM, improving its networking performance. Accelerated or Basic . If instance type of control plane and compute machines support Accelerated networking, by default, the installer enables Accelerated networking, otherwise the default networking type is Basic . Note You cannot customize Azure Availability Zones or Use tags to organize your Azure resources with an Azure cluster. 7.5.2. Minimum resource requirements for cluster installation Each cluster machine must meet the following minimum requirements: Table 7.5. Minimum resource requirements Machine Operating System vCPU [1] Virtual RAM Storage Input/Output Per Second (IOPS) [2] Bootstrap RHCOS 4 16 GB 100 GB 300 Control plane RHCOS 4 16 GB 100 GB 300 Compute RHCOS, RHEL 8.6 and later [3] 2 8 GB 100 GB 300 One vCPU is equivalent to one physical core when simultaneous multithreading (SMT), or hyperthreading, is not enabled. When enabled, use the following formula to calculate the corresponding ratio: (threads per core x cores) x sockets = vCPUs. OpenShift Container Platform and Kubernetes are sensitive to disk performance, and faster storage is recommended, particularly for etcd on the control plane nodes which require a 10 ms p99 fsync duration. 
Note that on many cloud platforms, storage size and IOPS scale together, so you might need to over-allocate storage volume to obtain sufficient performance. As with all user-provisioned installations, if you choose to use RHEL compute machines in your cluster, you take responsibility for all operating system life cycle management and maintenance, including performing system updates, applying patches, and completing all other required tasks. Use of RHEL 7 compute machines is deprecated and has been removed in OpenShift Container Platform 4.10 and later. Important You are required to use Azure virtual machines that have the premiumIO parameter set to true . If an instance type for your platform meets the minimum requirements for cluster machines, it is supported to use in OpenShift Container Platform. Additional resources Optimizing storage 7.5.3. Tested instance types for Azure The following Microsoft Azure instance types have been tested with OpenShift Container Platform. Example 7.1. Machine types based on 64-bit x86 architecture standardBasv2Family standardBSFamily standardBsv2Family standardDADSv5Family standardDASv4Family standardDASv5Family standardDCACCV5Family standardDCADCCV5Family standardDCADSv5Family standardDCASv5Family standardDCSv3Family standardDCSv2Family standardDDCSv3Family standardDDSv4Family standardDDSv5Family standardDLDSv5Family standardDLSv5Family standardDSFamily standardDSv2Family standardDSv2PromoFamily standardDSv3Family standardDSv4Family standardDSv5Family standardEADSv5Family standardEASv4Family standardEASv5Family standardEBDSv5Family standardEBSv5Family standardECACCV5Family standardECADCCV5Family standardECADSv5Family standardECASv5Family standardEDSv4Family standardEDSv5Family standardEIADSv5Family standardEIASv4Family standardEIASv5Family standardEIBDSv5Family standardEIBSv5Family standardEIDSv5Family standardEISv3Family standardEISv5Family standardESv3Family standardESv4Family standardESv5Family standardFXMDVSFamily standardFSFamily standardFSv2Family standardGSFamily standardHBrsv2Family standardHBSFamily standardHBv4Family standardHCSFamily standardHXFamily standardLASv3Family standardLSFamily standardLSv2Family standardLSv3Family standardMDSHighMemoryv3Family standardMDSMediumMemoryv2Family standardMDSMediumMemoryv3Family standardMIDSHighMemoryv3Family standardMIDSMediumMemoryv2Family standardMISHighMemoryv3Family standardMISMediumMemoryv2Family standardMSFamily standardMSHighMemoryv3Family standardMSMediumMemoryv2Family standardMSMediumMemoryv3Family StandardNCADSA100v4Family Standard NCASv3_T4 Family standardNCSv3Family standardNDSv2Family StandardNGADSV620v1Family standardNPSFamily StandardNVADSA10v5Family standardNVSv3Family standardXEISv4Family 7.5.4. Tested instance types for Azure on 64-bit ARM infrastructures The following Microsoft Azure ARM64 instance types have been tested with OpenShift Container Platform. Example 7.2. Machine types based on 64-bit ARM architecture standardBpsv2Family standardDPSv5Family standardDPDSv5Family standardDPLDSv5Family standardDPLSv5Family standardEPSv5Family standardEPDSv5Family StandardDpdsv6Family StandardDpldsv6Famil StandardDplsv6Family StandardDpsv6Family StandardEpdsv6Family StandardEpsv6Family 7.5.5. Sample customized install-config.yaml file for Azure You can customize the install-config.yaml file to specify more details about your OpenShift Container Platform cluster's platform or modify the values of the required parameters. Important This sample YAML file is provided for reference only. 
You must obtain your install-config.yaml file by using the installation program and modify it. apiVersion: v1 baseDomain: example.com 1 controlPlane: 2 hyperthreading: Enabled 3 4 name: master platform: azure: encryptionAtHost: true ultraSSDCapability: Enabled osDisk: diskSizeGB: 1024 5 diskType: Premium_LRS diskEncryptionSet: resourceGroup: disk_encryption_set_resource_group name: disk_encryption_set_name subscriptionId: secondary_subscription_id type: Standard_D8s_v3 replicas: 3 compute: 6 - hyperthreading: Enabled 7 name: worker platform: azure: ultraSSDCapability: Enabled type: Standard_D2s_v3 encryptionAtHost: true osDisk: diskSizeGB: 512 8 diskType: Standard_LRS diskEncryptionSet: resourceGroup: disk_encryption_set_resource_group name: disk_encryption_set_name subscriptionId: secondary_subscription_id zones: 9 - "1" - "2" - "3" replicas: 5 metadata: name: test-cluster 10 networking: 11 clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23 machineNetwork: - cidr: 10.0.0.0/16 networkType: OVNKubernetes 12 serviceNetwork: - 172.30.0.0/16 platform: azure: defaultMachinePlatform: ultraSSDCapability: Enabled baseDomainResourceGroupName: resource_group 13 region: centralus 14 resourceGroupName: existing_resource_group 15 outboundType: Loadbalancer cloudName: AzurePublicCloud pullSecret: '{"auths": ...}' 16 fips: false 17 sshKey: ssh-ed25519 AAAA... 18 1 10 14 16 Required. The installation program prompts you for this value. 2 6 11 If you do not provide these parameters and values, the installation program provides the default value. 3 7 The controlPlane section is a single mapping, but the compute section is a sequence of mappings. To meet the requirements of the different data structures, the first line of the compute section must begin with a hyphen, - , and the first line of the controlPlane section must not. Only one control plane pool is used. 4 Whether to enable or disable simultaneous multithreading, or hyperthreading . By default, simultaneous multithreading is enabled to increase the performance of your machines' cores. You can disable it by setting the parameter value to Disabled . If you disable simultaneous multithreading in some cluster machines, you must disable it in all cluster machines. Important If you disable simultaneous multithreading, ensure that your capacity planning accounts for the dramatically decreased machine performance. Use larger virtual machine types, such as Standard_D8s_v3 , for your machines if you disable simultaneous multithreading. 5 8 You can specify the size of the disk to use in GB. Minimum recommendation for control plane nodes is 1024 GB. 9 Specify a list of zones to deploy your machines to. For high availability, specify at least two zones. 12 The cluster network plugin to install. The supported values are OVNKubernetes and OpenShiftSDN . The default value is OVNKubernetes . 13 Specify the name of the resource group that contains the DNS zone for your base domain. 15 Specify the name of an already existing resource group to install your cluster to. If undefined, a new resource group is created for the cluster. 17 Whether to enable or disable FIPS mode. By default, FIPS mode is not enabled. If FIPS mode is enabled, the Red Hat Enterprise Linux CoreOS (RHCOS) machines that OpenShift Container Platform runs on bypass the default Kubernetes cryptography suite and use the cryptography modules that are provided with RHCOS instead. 
Important To enable FIPS mode for your cluster, you must run the installation program from a Red Hat Enterprise Linux (RHEL) computer configured to operate in FIPS mode. For more information about configuring FIPS mode on RHEL, see Installing the system in FIPS mode . The use of FIPS validated or Modules In Process cryptographic libraries is only supported on OpenShift Container Platform deployments on the x86_64 , ppc64le , and s390x architectures. 18 You can optionally provide the sshKey value that you use to access the machines in your cluster. Note For production OpenShift Container Platform clusters on which you want to perform installation debugging or disaster recovery, specify an SSH key that your ssh-agent process uses. 7.5.6. Configuring the cluster-wide proxy during installation Production environments can deny direct access to the internet and instead have an HTTP or HTTPS proxy available. You can configure a new OpenShift Container Platform cluster to use a proxy by configuring the proxy settings in the install-config.yaml file. Prerequisites You have an existing install-config.yaml file. You reviewed the sites that your cluster requires access to and determined whether any of them need to bypass the proxy. By default, all cluster egress traffic is proxied, including calls to hosting cloud provider APIs. You added sites to the Proxy object's spec.noProxy field to bypass the proxy if necessary. Note The Proxy object status.noProxy field is populated with the values of the networking.machineNetwork[].cidr , networking.clusterNetwork[].cidr , and networking.serviceNetwork[] fields from your installation configuration. For installations on Amazon Web Services (AWS), Google Cloud Platform (GCP), Microsoft Azure, and Red Hat OpenStack Platform (RHOSP), the Proxy object status.noProxy field is also populated with the instance metadata endpoint ( 169.254.169.254 ). Procedure Edit your install-config.yaml file and add the proxy settings. For example: apiVersion: v1 baseDomain: my.domain.com proxy: httpProxy: http://<username>:<pswd>@<ip>:<port> 1 httpsProxy: https://<username>:<pswd>@<ip>:<port> 2 noProxy: example.com 3 additionalTrustBundle: | 4 -----BEGIN CERTIFICATE----- <MY_TRUSTED_CA_CERT> -----END CERTIFICATE----- additionalTrustBundlePolicy: <policy_to_add_additionalTrustBundle> 5 1 A proxy URL to use for creating HTTP connections outside the cluster. The URL scheme must be http . 2 A proxy URL to use for creating HTTPS connections outside the cluster. 3 A comma-separated list of destination domain names, IP addresses, or other network CIDRs to exclude from proxying. Preface a domain with . to match subdomains only. For example, .y.com matches x.y.com , but not y.com . Use * to bypass the proxy for all destinations. 4 If provided, the installation program generates a config map that is named user-ca-bundle in the openshift-config namespace that contains one or more additional CA certificates that are required for proxying HTTPS connections. The Cluster Network Operator then creates a trusted-ca-bundle config map that merges these contents with the Red Hat Enterprise Linux CoreOS (RHCOS) trust bundle, and this config map is referenced in the trustedCA field of the Proxy object. The additionalTrustBundle field is required unless the proxy's identity certificate is signed by an authority from the RHCOS trust bundle. 5 Optional: The policy to determine the configuration of the Proxy object to reference the user-ca-bundle config map in the trustedCA field. 
The allowed values are Proxyonly and Always . Use Proxyonly to reference the user-ca-bundle config map only when http/https proxy is configured. Use Always to always reference the user-ca-bundle config map. The default value is Proxyonly . Note The installation program does not support the proxy readinessEndpoints field. Note If the installer times out, restart and then complete the deployment by using the wait-for command of the installer. For example: USD ./openshift-install wait-for install-complete --log-level debug Save the file and reference it when installing OpenShift Container Platform. The installation program creates a cluster-wide proxy that is named cluster that uses the proxy settings in the provided install-config.yaml file. If no proxy settings are provided, a cluster Proxy object is still created, but it will have a nil spec . Note Only the Proxy object named cluster is supported, and no additional proxies can be created. 7.6. Network configuration phases There are two phases prior to OpenShift Container Platform installation where you can customize the network configuration. Phase 1 You can customize the following network-related fields in the install-config.yaml file before you create the manifest files: networking.networkType networking.clusterNetwork networking.serviceNetwork networking.machineNetwork For more information on these fields, refer to Installation configuration parameters . Note Set the networking.machineNetwork to match the CIDR that the preferred NIC resides in. Important The CIDR range 172.17.0.0/16 is reserved by libVirt. You cannot use this range or any range that overlaps with this range for any networks in your cluster. Phase 2 After creating the manifest files by running openshift-install create manifests , you can define a customized Cluster Network Operator manifest with only the fields you want to modify. You can use the manifest to specify an advanced network configuration. You cannot override the values specified in phase 1 in the install-config.yaml file during phase 2. However, you can further customize the network plugin during phase 2. 7.7. Specifying advanced network configuration You can use advanced network configuration for your network plugin to integrate your cluster into your existing network environment. You can specify advanced network configuration only before you install the cluster. Important Customizing your network configuration by modifying the OpenShift Container Platform manifest files created by the installation program is not supported. Applying a manifest file that you create, as in the following procedure, is supported. Prerequisites You have created the install-config.yaml file and completed any modifications to it. Procedure Change to the directory that contains the installation program and create the manifests: USD ./openshift-install create manifests --dir <installation_directory> 1 1 <installation_directory> specifies the name of the directory that contains the install-config.yaml file for your cluster. 
Create a stub manifest file for the advanced network configuration that is named cluster-network-03-config.yml in the <installation_directory>/manifests/ directory: apiVersion: operator.openshift.io/v1 kind: Network metadata: name: cluster spec: Specify the advanced network configuration for your cluster in the cluster-network-03-config.yml file, such as in the following examples: Specify a different VXLAN port for the OpenShift SDN network provider apiVersion: operator.openshift.io/v1 kind: Network metadata: name: cluster spec: defaultNetwork: openshiftSDNConfig: vxlanPort: 4800 Enable IPsec for the OVN-Kubernetes network provider apiVersion: operator.openshift.io/v1 kind: Network metadata: name: cluster spec: defaultNetwork: ovnKubernetesConfig: ipsecConfig: {} Optional: Back up the manifests/cluster-network-03-config.yml file. The installation program consumes the manifests/ directory when you create the Ignition config files. 7.8. Cluster Network Operator configuration The configuration for the cluster network is specified as part of the Cluster Network Operator (CNO) configuration and stored in a custom resource (CR) object that is named cluster . The CR specifies the fields for the Network API in the operator.openshift.io API group. The CNO configuration inherits the following fields during cluster installation from the Network API in the Network.config.openshift.io API group and these fields cannot be changed: clusterNetwork IP address pools from which pod IP addresses are allocated. serviceNetwork IP address pool for services. defaultNetwork.type Cluster network plugin, such as OpenShift SDN or OVN-Kubernetes. You can specify the cluster network plugin configuration for your cluster by setting the fields for the defaultNetwork object in the CNO object named cluster . 7.8.1. Cluster Network Operator configuration object The fields for the Cluster Network Operator (CNO) are described in the following table: Table 7.6. Cluster Network Operator configuration object Field Type Description metadata.name string The name of the CNO object. This name is always cluster . spec.clusterNetwork array A list specifying the blocks of IP addresses from which pod IP addresses are allocated and the subnet prefix length assigned to each individual node in the cluster. For example: spec: clusterNetwork: - cidr: 10.128.0.0/19 hostPrefix: 23 - cidr: 10.128.32.0/19 hostPrefix: 23 You can customize this field only in the install-config.yaml file before you create the manifests. The value is read-only in the manifest file. spec.serviceNetwork array A block of IP addresses for services. The OpenShift SDN and OVN-Kubernetes network plugins support only a single IP address block for the service network. For example: spec: serviceNetwork: - 172.30.0.0/14 You can customize this field only in the install-config.yaml file before you create the manifests. The value is read-only in the manifest file. spec.defaultNetwork object Configures the network plugin for the cluster network. spec.kubeProxyConfig object The fields for this object specify the kube-proxy configuration. If you are using the OVN-Kubernetes cluster network plugin, the kube-proxy configuration has no effect. Important For a cluster that needs to deploy objects across multiple networks, ensure that you specify the same value for the clusterNetwork.hostPrefix parameter for each network type that is defined in the install-config.yaml file. 
Setting a different value for each clusterNetwork.hostPrefix parameter can impact the OVN-Kubernetes network plugin, where the plugin cannot effectively route object traffic among different nodes. defaultNetwork object configuration The values for the defaultNetwork object are defined in the following table: Table 7.7. defaultNetwork object Field Type Description type string Either OpenShiftSDN or OVNKubernetes . The Red Hat OpenShift Networking network plugin is selected during installation. This value cannot be changed after cluster installation. Note OpenShift Container Platform uses the OVN-Kubernetes network plugin by default. openshiftSDNConfig object This object is only valid for the OpenShift SDN network plugin. ovnKubernetesConfig object This object is only valid for the OVN-Kubernetes network plugin. Configuration for the OpenShift SDN network plugin The following table describes the configuration fields for the OpenShift SDN network plugin: Table 7.8. openshiftSDNConfig object Field Type Description mode string Configures the network isolation mode for OpenShift SDN. The default value is NetworkPolicy . The values Multitenant and Subnet are available for backwards compatibility with OpenShift Container Platform 3.x but are not recommended. This value cannot be changed after cluster installation. mtu integer The maximum transmission unit (MTU) for the VXLAN overlay network. This is detected automatically based on the MTU of the primary network interface. You do not normally need to override the detected MTU. If the auto-detected value is not what you expect it to be, confirm that the MTU on the primary network interface on your nodes is correct. You cannot use this option to change the MTU value of the primary network interface on the nodes. If your cluster requires different MTU values for different nodes, you must set this value to 50 less than the lowest MTU value in your cluster. For example, if some nodes in your cluster have an MTU of 9001 , and some have an MTU of 1500 , you must set this value to 1450 . This value cannot be changed after cluster installation. vxlanPort integer The port to use for all VXLAN packets. The default value is 4789 . This value cannot be changed after cluster installation. If you are running in a virtualized environment with existing nodes that are part of another VXLAN network, then you might be required to change this. For example, when running an OpenShift SDN overlay on top of VMware NSX-T, you must select an alternate port for the VXLAN, because both SDNs use the same default VXLAN port number. On Amazon Web Services (AWS), you can select an alternate port for the VXLAN between port 9000 and port 9999 . Example OpenShift SDN configuration defaultNetwork: type: OpenShiftSDN openshiftSDNConfig: mode: NetworkPolicy mtu: 1450 vxlanPort: 4789 Configuration for the OVN-Kubernetes network plugin The following table describes the configuration fields for the OVN-Kubernetes network plugin: Table 7.9. ovnKubernetesConfig object Field Type Description mtu integer The maximum transmission unit (MTU) for the Geneve (Generic Network Virtualization Encapsulation) overlay network. This is detected automatically based on the MTU of the primary network interface. You do not normally need to override the detected MTU. If the auto-detected value is not what you expect it to be, confirm that the MTU on the primary network interface on your nodes is correct. You cannot use this option to change the MTU value of the primary network interface on the nodes. 
If your cluster requires different MTU values for different nodes, you must set this value to 100 less than the lowest MTU value in your cluster. For example, if some nodes in your cluster have an MTU of 9001 , and some have an MTU of 1500 , you must set this value to 1400 . genevePort integer The port to use for all Geneve packets. The default value is 6081 . This value cannot be changed after cluster installation. ipsecConfig object Specify an empty object to enable IPsec encryption. policyAuditConfig object Specify a configuration object for customizing network policy audit logging. If unset, the defaults audit log settings are used. gatewayConfig object Optional: Specify a configuration object for customizing how egress traffic is sent to the node gateway. Note While migrating egress traffic, you can expect some disruption to workloads and service traffic until the Cluster Network Operator (CNO) successfully rolls out the changes. v4InternalSubnet If your existing network infrastructure overlaps with the 100.64.0.0/16 IPv4 subnet, you can specify a different IP address range for internal use by OVN-Kubernetes. You must ensure that the IP address range does not overlap with any other subnet used by your OpenShift Container Platform installation. The IP address range must be larger than the maximum number of nodes that can be added to the cluster. For example, if the clusterNetwork.cidr value is 10.128.0.0/14 and the clusterNetwork.hostPrefix value is /23 , then the maximum number of nodes is 2^(23-14)=512 . This field cannot be changed after installation. The default value is 100.64.0.0/16 . v6InternalSubnet If your existing network infrastructure overlaps with the fd98::/48 IPv6 subnet, you can specify a different IP address range for internal use by OVN-Kubernetes. You must ensure that the IP address range does not overlap with any other subnet used by your OpenShift Container Platform installation. The IP address range must be larger than the maximum number of nodes that can be added to the cluster. This field cannot be changed after installation. The default value is fd98::/48 . Table 7.10. policyAuditConfig object Field Type Description rateLimit integer The maximum number of messages to generate every second per node. The default value is 20 messages per second. maxFileSize integer The maximum size for the audit log in bytes. The default value is 50000000 or 50 MB. destination string One of the following additional audit log targets: libc The libc syslog() function of the journald process on the host. udp:<host>:<port> A syslog server. Replace <host>:<port> with the host and port of the syslog server. unix:<file> A Unix Domain Socket file specified by <file> . null Do not send the audit logs to any additional target. syslogFacility string The syslog facility, such as kern , as defined by RFC5424. The default value is local0 . Table 7.11. gatewayConfig object Field Type Description routingViaHost boolean Set this field to true to send egress traffic from pods to the host networking stack. Note In OpenShift Container Platform {version}, egress IP is only assigned to the primary interface. Consequentially, setting routingViaHost to true will not work for egress IP in OpenShift Container Platform {version}. For highly-specialized installations and applications that rely on manually configured routes in the kernel routing table, you might want to route egress traffic to the host networking stack. 
By default, egress traffic is processed in OVN to exit the cluster and is not affected by specialized routes in the kernel routing table. The default value is false . This field has an interaction with the Open vSwitch hardware offloading feature. If you set this field to true , you do not receive the performance benefits of the offloading because egress traffic is processed by the host networking stack. Example OVN-Kubernetes configuration with IPSec enabled defaultNetwork: type: OVNKubernetes ovnKubernetesConfig: mtu: 1400 genevePort: 6081 ipsecConfig: {} kubeProxyConfig object configuration The values for the kubeProxyConfig object are defined in the following table: Table 7.12. kubeProxyConfig object Field Type Description iptablesSyncPeriod string The refresh period for iptables rules. The default value is 30s . Valid suffixes include s , m , and h and are described in the Go time package documentation. Note Because of performance improvements introduced in OpenShift Container Platform 4.3 and greater, adjusting the iptablesSyncPeriod parameter is no longer necessary. proxyArguments.iptables-min-sync-period array The minimum duration before refreshing iptables rules. This field ensures that the refresh does not happen too frequently. Valid suffixes include s , m , and h and are described in the Go time package . The default value is: kubeProxyConfig: proxyArguments: iptables-min-sync-period: - 0s 7.9. Configuring hybrid networking with OVN-Kubernetes You can configure your cluster to use hybrid networking with OVN-Kubernetes. This allows a hybrid cluster that supports different node networking configurations. For example, this is necessary to run both Linux and Windows nodes in a cluster. Important You must configure hybrid networking with OVN-Kubernetes during the installation of your cluster. You cannot switch to hybrid networking after the installation process. Prerequisites You defined OVNKubernetes for the networking.networkType parameter in the install-config.yaml file. See the installation documentation for configuring OpenShift Container Platform network customizations on your chosen cloud provider for more information. Procedure Change to the directory that contains the installation program and create the manifests: USD ./openshift-install create manifests --dir <installation_directory> where: <installation_directory> Specifies the name of the directory that contains the install-config.yaml file for your cluster. Create a stub manifest file for the advanced network configuration that is named cluster-network-03-config.yml in the <installation_directory>/manifests/ directory: USD cat <<EOF > <installation_directory>/manifests/cluster-network-03-config.yml apiVersion: operator.openshift.io/v1 kind: Network metadata: name: cluster spec: EOF where: <installation_directory> Specifies the directory name that contains the manifests/ directory for your cluster. Open the cluster-network-03-config.yml file in an editor and configure OVN-Kubernetes with hybrid networking, such as in the following example: Specify a hybrid networking configuration apiVersion: operator.openshift.io/v1 kind: Network metadata: name: cluster spec: defaultNetwork: ovnKubernetesConfig: hybridOverlayConfig: hybridClusterNetwork: 1 - cidr: 10.132.0.0/14 hostPrefix: 23 hybridOverlayVXLANPort: 9898 2 1 Specify the CIDR configuration used for nodes on the additional overlay network. The hybridClusterNetwork CIDR must not overlap with the clusterNetwork CIDR. 
2 Specify a custom VXLAN port for the additional overlay network. This is required for running Windows nodes in a cluster installed on vSphere, and must not be configured for any other cloud provider. The custom port can be any open port excluding the default 4789 port. For more information on this requirement, see the Microsoft documentation on Pod-to-pod connectivity between hosts is broken . Note Windows Server Long-Term Servicing Channel (LTSC): Windows Server 2019 is not supported on clusters with a custom hybridOverlayVXLANPort value because this Windows server version does not support selecting a custom VXLAN port. Save the cluster-network-03-config.yml file and quit the text editor. Optional: Back up the manifests/cluster-network-03-config.yml file. The installation program deletes the manifests/ directory when creating the cluster. Note For more information about using Linux and Windows nodes in the same cluster, see Understanding Windows container workloads . Additional resources For more details about Accelerated Networking, see Accelerated Networking for Microsoft Azure VMs . 7.10. Deploying the cluster You can install OpenShift Container Platform on a compatible cloud platform. Important You can run the create cluster command of the installation program only once, during initial installation. Prerequisites Configure an account with the cloud platform that hosts your cluster. Obtain the OpenShift Container Platform installation program and the pull secret for your cluster. Verify the cloud provider account on your host has the correct permissions to deploy the cluster. An account with incorrect permissions causes the installation process to fail with an error message that displays the missing permissions. Procedure Change to the directory that contains the installation program and initialize the cluster deployment: USD ./openshift-install create cluster --dir <installation_directory> \ 1 --log-level=info 2 1 For <installation_directory> , specify the location of your customized ./install-config.yaml file. 2 To view different installation details, specify warn , debug , or error instead of info . Note If the cloud provider account that you configured on your host does not have sufficient permissions to deploy the cluster, the installation process stops, and the missing permissions are displayed. Verification When the cluster deployment completes successfully: The terminal displays directions for accessing your cluster, including a link to the web console and credentials for the kubeadmin user. Credential information also outputs to <installation_directory>/.openshift_install.log . Important Do not delete the installation program or the files that the installation program creates. Both are required to delete the cluster. Example output ... INFO Install complete! INFO To access the cluster as the system:admin user when using 'oc', run 'export KUBECONFIG=/home/myuser/install_dir/auth/kubeconfig' INFO Access the OpenShift web-console here: https://console-openshift-console.apps.mycluster.example.com INFO Login to the console with user: "kubeadmin", and password: "password" INFO Time elapsed: 36m22s Important The Ignition config files that the installation program generates contain certificates that expire after 24 hours, which are then renewed at that time. If the cluster is shut down before renewing the certificates and the cluster is later restarted after the 24 hours have elapsed, the cluster automatically recovers the expired certificates. 
The exception is that you must manually approve the pending node-bootstrapper certificate signing requests (CSRs) to recover kubelet certificates. See the documentation for Recovering from expired control plane certificates for more information. It is recommended that you use Ignition config files within 12 hours after they are generated because the 24-hour certificate rotates from 16 to 22 hours after the cluster is installed. By using the Ignition config files within 12 hours, you can avoid installation failure if the certificate update runs during installation. 7.11. Finalizing user-managed encryption after installation If you installed OpenShift Container Platform using a user-managed encryption key, you can complete the installation by creating a new storage class and granting write permissions to the Azure cluster resource group. Procedure Obtain the identity of the cluster resource group used by the installer: If you specified an existing resource group in install-config.yaml , obtain its Azure identity by running the following command: USD az identity list --resource-group "<existing_resource_group>" If you did not specify a existing resource group in install-config.yaml , locate the resource group that the installer created, and then obtain its Azure identity by running the following commands: USD az group list USD az identity list --resource-group "<installer_created_resource_group>" Grant a role assignment to the cluster resource group so that it can write to the Disk Encryption Set by running the following command: USD az role assignment create --role "<privileged_role>" \ 1 --assignee "<resource_group_identity>" 2 1 Specifies an Azure role that has read/write permissions to the disk encryption set. You can use the Owner role or a custom role with the necessary permissions. 2 Specifies the identity of the cluster resource group. Obtain the id of the disk encryption set you created prior to installation by running the following command: USD az disk-encryption-set show -n <disk_encryption_set_name> \ 1 --resource-group <resource_group_name> 2 1 Specifies the name of the disk encryption set. 2 Specifies the resource group that contains the disk encryption set. The id is in the format of "/subscriptions/... /resourceGroups/... /providers/Microsoft.Compute/diskEncryptionSets/... " . Obtain the identity of the cluster service principal by running the following command: USD az identity show -g <cluster_resource_group> \ 1 -n <cluster_service_principal_name> \ 2 --query principalId --out tsv 1 Specifies the name of the cluster resource group created by the installation program. 2 Specifies the name of the cluster service principal created by the installation program. The identity is in the format of 12345678-1234-1234-1234-1234567890 . Create a role assignment that grants the cluster service principal necessary privileges to the disk encryption set by running the following command: USD az role assignment create --assignee <cluster_service_principal_id> \ 1 --role <privileged_role> \ 2 --scope <disk_encryption_set_id> \ 3 1 Specifies the ID of the cluster service principal obtained in the step. 2 Specifies the Azure role name. You can use the Contributor role or a custom role with the necessary permissions. 3 Specifies the ID of the disk encryption set. 
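Before continuing with the storage class step that follows, you can optionally confirm that the role assignments created above are in place. The lines below are an illustrative sketch only and are not part of the original procedure; the scope value is a placeholder for the id returned by the az disk-encryption-set show command.
# Optional verification sketch; replace the placeholder with the real disk encryption set id.
DES_ID="/subscriptions/<subscription_id>/resourceGroups/<resource_group_name>/providers/Microsoft.Compute/diskEncryptionSets/<disk_encryption_set_name>"
# List the role assignments scoped to the disk encryption set:
az role assignment list --scope "$DES_ID" --output table
If the expected role does not appear in the output, repeat the az role assignment create commands above before moving on.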
Create a storage class that uses the user-managed disk encryption set: Save the following storage class definition to a file, for example storage-class-definition.yaml : kind: StorageClass apiVersion: storage.k8s.io/v1 metadata: name: managed-premium provisioner: kubernetes.io/azure-disk parameters: skuname: Premium_LRS kind: Managed diskEncryptionSetID: "<disk_encryption_set_ID>" 1 resourceGroup: "<resource_group_name>" 2 reclaimPolicy: Delete allowVolumeExpansion: true volumeBindingMode: WaitForFirstConsumer 1 Specifies the ID of the disk encryption set that you created in the prerequisite steps, for example "/subscriptions/xxxxxx-xxxxx-xxxxx/resourceGroups/test-encryption/providers/Microsoft.Compute/diskEncryptionSets/disk-encryption-set-xxxxxx" . 2 Specifies the name of the resource group used by the installer. This is the same resource group from the first step. Create the storage class managed-premium from the file you created by running the following command: USD oc create -f storage-class-definition.yaml Select the managed-premium storage class when you create persistent volumes to use encrypted storage. 7.12. Installing the OpenShift CLI by downloading the binary You can install the OpenShift CLI ( oc ) to interact with OpenShift Container Platform from a command-line interface. You can install oc on Linux, Windows, or macOS. Important If you installed an earlier version of oc , you cannot use it to complete all of the commands in OpenShift Container Platform 4.12. Download and install the new version of oc . Installing the OpenShift CLI on Linux You can install the OpenShift CLI ( oc ) binary on Linux by using the following procedure. Procedure Navigate to the OpenShift Container Platform downloads page on the Red Hat Customer Portal. Select the architecture from the Product Variant drop-down list. Select the appropriate version from the Version drop-down list. Click Download Now to the OpenShift v4.12 Linux Client entry and save the file. Unpack the archive: USD tar xvf <file> Place the oc binary in a directory that is on your PATH . To check your PATH , execute the following command: USD echo USDPATH Verification After you install the OpenShift CLI, it is available using the oc command: USD oc <command> Installing the OpenShift CLI on Windows You can install the OpenShift CLI ( oc ) binary on Windows by using the following procedure. Procedure Navigate to the OpenShift Container Platform downloads page on the Red Hat Customer Portal. Select the appropriate version from the Version drop-down list. Click Download Now to the OpenShift v4.12 Windows Client entry and save the file. Unzip the archive with a ZIP program. Move the oc binary to a directory that is on your PATH . To check your PATH , open the command prompt and execute the following command: C:\> path Verification After you install the OpenShift CLI, it is available using the oc command: C:\> oc <command> Installing the OpenShift CLI on macOS You can install the OpenShift CLI ( oc ) binary on macOS by using the following procedure. Procedure Navigate to the OpenShift Container Platform downloads page on the Red Hat Customer Portal. Select the appropriate version from the Version drop-down list. Click Download Now to the OpenShift v4.12 macOS Client entry and save the file. Note For macOS arm64, choose the OpenShift v4.12 macOS arm64 Client entry. Unpack and unzip the archive. Move the oc binary to a directory on your PATH. 
To check your PATH , open a terminal and execute the following command: USD echo USDPATH Verification After you install the OpenShift CLI, it is available using the oc command: USD oc <command> 7.13. Logging in to the cluster by using the CLI You can log in to your cluster as a default system user by exporting the cluster kubeconfig file. The kubeconfig file contains information about the cluster that is used by the CLI to connect a client to the correct cluster and API server. The file is specific to a cluster and is created during OpenShift Container Platform installation. Prerequisites You deployed an OpenShift Container Platform cluster. You installed the oc CLI. Procedure Export the kubeadmin credentials: USD export KUBECONFIG=<installation_directory>/auth/kubeconfig 1 1 For <installation_directory> , specify the path to the directory that you stored the installation files in. Verify you can run oc commands successfully using the exported configuration: USD oc whoami Example output system:admin Additional resources See Accessing the web console for more details about accessing and understanding the OpenShift Container Platform web console. 7.14. Telemetry access for OpenShift Container Platform In OpenShift Container Platform 4.12, the Telemetry service, which runs by default to provide metrics about cluster health and the success of updates, requires internet access. If your cluster is connected to the internet, Telemetry runs automatically, and your cluster is registered to OpenShift Cluster Manager Hybrid Cloud Console . After you confirm that your OpenShift Cluster Manager Hybrid Cloud Console inventory is correct, either maintained automatically by Telemetry or manually by using OpenShift Cluster Manager, use subscription watch to track your OpenShift Container Platform subscriptions at the account or multi-cluster level. Additional resources See About remote health monitoring for more information about the Telemetry service 7.15. steps Customize your cluster . If necessary, you can opt out of remote health reporting . | [
"ssh-keygen -t ed25519 -N '' -f <path>/<file_name> 1",
"cat <path>/<file_name>.pub",
"cat ~/.ssh/id_ed25519.pub",
"eval \"USD(ssh-agent -s)\"",
"Agent pid 31874",
"ssh-add <path>/<file_name> 1",
"Identity added: /home/<you>/<path>/<file_name> (<computer_name>)",
"tar -xvf openshift-install-linux.tar.gz",
"./openshift-install create install-config --dir <installation_directory> 1",
"{ \"auths\":{ \"cloud.openshift.com\":{ \"auth\":\"b3Blb=\", \"email\":\"[email protected]\" }, \"quay.io\":{ \"auth\":\"b3Blb=\", \"email\":\"[email protected]\" } } }",
"networking: clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23",
"networking: serviceNetwork: - 172.30.0.0/16",
"networking: machineNetwork: - cidr: 10.0.0.0/16",
"apiVersion: v1 baseDomain: example.com 1 controlPlane: 2 hyperthreading: Enabled 3 4 name: master platform: azure: encryptionAtHost: true ultraSSDCapability: Enabled osDisk: diskSizeGB: 1024 5 diskType: Premium_LRS diskEncryptionSet: resourceGroup: disk_encryption_set_resource_group name: disk_encryption_set_name subscriptionId: secondary_subscription_id type: Standard_D8s_v3 replicas: 3 compute: 6 - hyperthreading: Enabled 7 name: worker platform: azure: ultraSSDCapability: Enabled type: Standard_D2s_v3 encryptionAtHost: true osDisk: diskSizeGB: 512 8 diskType: Standard_LRS diskEncryptionSet: resourceGroup: disk_encryption_set_resource_group name: disk_encryption_set_name subscriptionId: secondary_subscription_id zones: 9 - \"1\" - \"2\" - \"3\" replicas: 5 metadata: name: test-cluster 10 networking: 11 clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23 machineNetwork: - cidr: 10.0.0.0/16 networkType: OVNKubernetes 12 serviceNetwork: - 172.30.0.0/16 platform: azure: defaultMachinePlatform: ultraSSDCapability: Enabled baseDomainResourceGroupName: resource_group 13 region: centralus 14 resourceGroupName: existing_resource_group 15 outboundType: Loadbalancer cloudName: AzurePublicCloud pullSecret: '{\"auths\": ...}' 16 fips: false 17 sshKey: ssh-ed25519 AAAA... 18",
"apiVersion: v1 baseDomain: my.domain.com proxy: httpProxy: http://<username>:<pswd>@<ip>:<port> 1 httpsProxy: https://<username>:<pswd>@<ip>:<port> 2 noProxy: example.com 3 additionalTrustBundle: | 4 -----BEGIN CERTIFICATE----- <MY_TRUSTED_CA_CERT> -----END CERTIFICATE----- additionalTrustBundlePolicy: <policy_to_add_additionalTrustBundle> 5",
"./openshift-install wait-for install-complete --log-level debug",
"./openshift-install create manifests --dir <installation_directory> 1",
"apiVersion: operator.openshift.io/v1 kind: Network metadata: name: cluster spec:",
"apiVersion: operator.openshift.io/v1 kind: Network metadata: name: cluster spec: defaultNetwork: openshiftSDNConfig: vxlanPort: 4800",
"apiVersion: operator.openshift.io/v1 kind: Network metadata: name: cluster spec: defaultNetwork: ovnKubernetesConfig: ipsecConfig: {}",
"spec: clusterNetwork: - cidr: 10.128.0.0/19 hostPrefix: 23 - cidr: 10.128.32.0/19 hostPrefix: 23",
"spec: serviceNetwork: - 172.30.0.0/14",
"defaultNetwork: type: OpenShiftSDN openshiftSDNConfig: mode: NetworkPolicy mtu: 1450 vxlanPort: 4789",
"defaultNetwork: type: OVNKubernetes ovnKubernetesConfig: mtu: 1400 genevePort: 6081 ipsecConfig: {}",
"kubeProxyConfig: proxyArguments: iptables-min-sync-period: - 0s",
"./openshift-install create manifests --dir <installation_directory>",
"cat <<EOF > <installation_directory>/manifests/cluster-network-03-config.yml apiVersion: operator.openshift.io/v1 kind: Network metadata: name: cluster spec: EOF",
"apiVersion: operator.openshift.io/v1 kind: Network metadata: name: cluster spec: defaultNetwork: ovnKubernetesConfig: hybridOverlayConfig: hybridClusterNetwork: 1 - cidr: 10.132.0.0/14 hostPrefix: 23 hybridOverlayVXLANPort: 9898 2",
"./openshift-install create cluster --dir <installation_directory> \\ 1 --log-level=info 2",
"INFO Install complete! INFO To access the cluster as the system:admin user when using 'oc', run 'export KUBECONFIG=/home/myuser/install_dir/auth/kubeconfig' INFO Access the OpenShift web-console here: https://console-openshift-console.apps.mycluster.example.com INFO Login to the console with user: \"kubeadmin\", and password: \"password\" INFO Time elapsed: 36m22s",
"az identity list --resource-group \"<existing_resource_group>\"",
"az group list",
"az identity list --resource-group \"<installer_created_resource_group>\"",
"az role assignment create --role \"<privileged_role>\" \\ 1 --assignee \"<resource_group_identity>\" 2",
"az disk-encryption-set show -n <disk_encryption_set_name> \\ 1 --resource-group <resource_group_name> 2",
"az identity show -g <cluster_resource_group> \\ 1 -n <cluster_service_principal_name> \\ 2 --query principalId --out tsv",
"az role assignment create --assignee <cluster_service_principal_id> \\ 1 --role <privileged_role> \\ 2 --scope <disk_encryption_set_id> \\ 3",
"kind: StorageClass apiVersion: storage.k8s.io/v1 metadata: name: managed-premium provisioner: kubernetes.io/azure-disk parameters: skuname: Premium_LRS kind: Managed diskEncryptionSetID: \"<disk_encryption_set_ID>\" 1 resourceGroup: \"<resource_group_name>\" 2 reclaimPolicy: Delete allowVolumeExpansion: true volumeBindingMode: WaitForFirstConsumer",
"oc create -f storage-class-definition.yaml",
"tar xvf <file>",
"echo USDPATH",
"oc <command>",
"C:\\> path",
"C:\\> oc <command>",
"echo USDPATH",
"oc <command>",
"export KUBECONFIG=<installation_directory>/auth/kubeconfig 1",
"oc whoami",
"system:admin"
]
| https://docs.redhat.com/en/documentation/openshift_container_platform/4.12/html/installing_on_azure/installing-azure-network-customizations |
11.5. RAID and Other Disk Devices | 11.5. RAID and Other Disk Devices Important Red Hat Enterprise Linux 6 uses mdraid instead of dmraid for installation onto Intel BIOS RAID sets. These sets are detected automatically, and devices with Intel ISW metadata are recognized as mdraid instead of dmraid. Note that the device node names of any such devices under mdraid are different from their device node names under dmraid . Therefore, special precautions are necessary when you migrate systems with Intel BIOS RAID sets. Local modifications to /etc/fstab , /etc/crypttab or other configuration files which refer to devices by their device node names will not work in Red Hat Enterprise Linux 6. Before migrating these files, you must therefore edit them to replace device node paths with device UUIDs instead. You can find the UUIDs of devices with the blkid command. 11.5.1. Hardware RAID RAID, or Redundant Array of Independent Disks, allows a group, or array, of drives to act as a single device. Configure any RAID functions provided by the mainboard of your computer, or attached controller cards, before you begin the installation process. Each active RAID array appears as one drive within Red Hat Enterprise Linux. On systems with more than one hard drive you may configure Red Hat Enterprise Linux to operate several of the drives as a Linux RAID array without requiring any additional hardware. | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/installation_guide/sn-partitioning-raid-ppc |
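As a hedged illustration of the /etc/fstab migration described in the RAID section above, the following sketch shows how blkid output maps onto a UUID-based mount entry. The device name, UUID, and mount point are examples only and are not taken from the original text.
# Look up the UUID of the RAID array device (example device name):
blkid /dev/md126p1
# Example output: /dev/md126p1: UUID="3e6be9de-8139-4a5b-9106-a43f08d823a6" TYPE="ext4"
# In /etc/fstab, replace a device-node entry such as:
#   /dev/md126p1   /home   ext4   defaults   1 2
# with the UUID form:
#   UUID=3e6be9de-8139-4a5b-9106-a43f08d823a6   /home   ext4   defaults   1 2
The same substitution applies to entries in /etc/crypttab or any other configuration file that refers to the device by its node name.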
5.11. Desktop | 5.11. Desktop firefox package In certain environments, storing personal Firefox configuration files (~/.mozilla/) on an NFS share, such as when your home directory is on an NFS share, led to Firefox functioning incorrectly; for example, navigation buttons did not work as expected and bookmarks were not saved. This update adds a new configuration option, storage.nfs_filesystem, that can be used to resolve this issue. If you experience this issue: Start Firefox . Type about:config into the URL bar and press the Enter key. If prompted with "This might void your warranty!", click the I'll be careful, I promise! button. Right-click in the Preference Name list. In the menu that opens, select New Boolean . Type "storage.nfs_filesystem" (without quotes) for the preference name and then click the OK button. Select true for the boolean value and then click the OK button. Red_Hat_Enterprise_Linux-Release_Notes-6 component The link in the RELEASE-NOTES-si-LK.html file (provided by the Red_Hat_Enterprise_Linux-Release_Notes-6-si-LK package) incorrectly points at the Beta online version of the 6.4 Release Notes. Because the si-LK language is no longer supported, the link should correctly point to the en-US online 6.4 Release Notes located at: https://access.redhat.com/site/documentation/en-US/Red_Hat_Enterprise_Linux/6/html-single/6.4_Release_Notes/index.html . libwacom component The Lenovo X220 Tablet Touchscreen is not supported in the kernel shipped with Red Hat Enterprise Linux 6.4. wacomcpl package, BZ# 769466 The wacomcpl package has been deprecated and has been removed from the package set. The wacomcpl package provided graphical configuration of Wacom tablet settings. This functionality is now integrated into the GNOME Control Center. acroread component On an AMD64 system that uses SSSD for getting information about users, acroread fails to start if the sssd-client.i686 package is not installed. To work around this issue, manually install the sssd-client.i686 package. kernel component, BZ# 681257 With newer kernels, such as the kernel shipped in Red Hat Enterprise Linux 6.1, Nouveau has corrected the Transition Minimized Differential Signaling (TMDS) bandwidth limits for pre-G80 NVIDIA chipsets. Consequently, the resolution auto-detected by X for some monitors may differ from that used in Red Hat Enterprise Linux 6.0. fprintd component When enabled, fingerprint authentication is the default authentication method to unlock a workstation, even if the fingerprint reader device is not accessible. However, after a 30-second wait, password authentication becomes available. evolution component Evolution's IMAP backend only refreshes folder contents under the following circumstances: when the user switches into or out of a folder, when the auto-refresh period expires, or when the user manually refreshes a folder (that is, using the menu item Folder Refresh ). Consequently, when replying to a message in the Sent folder, the new message does not immediately appear in the Sent folder. To see the message, force a refresh using one of the methods described above. anaconda component The clock applet in the GNOME panel has a default location of Boston, USA. Additional locations are added via the applet's preferences dialog. Additionally, to change the default location, left-click the applet, hover over the desired location in the Locations section, and click the Set... button that appears. 
xorg-x11-server component, BZ# 623169 In some multi-monitor configurations (for example, dual monitors with both rotated), the cursor confinement code produces incorrect results. For example, the cursor may be permitted to disappear off the screen when it should not, or be prevented from entering some areas where it should be allowed to go. Currently, the only workaround for this issue is to disable monitor rotation. | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/6.4_technical_notes/desktop_issues |
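As an alternative to the about:config procedure in the firefox entry above, the storage.nfs_filesystem preference can also be preset in a profile's user.js file so that it persists across sessions. This is a hedged sketch: the profile directory name is a placeholder, and Firefox must be restarted for the preference to take effect.
# Append the preference to the Firefox profile's user.js (placeholder profile path)
echo 'user_pref("storage.nfs_filesystem", true);' >> ~/.mozilla/firefox/xxxxxxxx.default/user.js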
Chapter 32. Tuning scheduling policy | Chapter 32. Tuning scheduling policy In Red Hat Enterprise Linux, the smallest unit of process execution is called a thread. The system scheduler determines which processor runs a thread, and for how long the thread runs. However, because the scheduler's primary concern is to keep the system busy, it may not schedule threads optimally for application performance. For example, say an application on a NUMA system is running on Node A when a processor on Node B becomes available. To keep the processor on Node B busy, the scheduler moves one of the application's threads to Node B. However, the application thread still requires access to memory on Node A. But, this memory will take longer to access because the thread is now running on Node B and Node A memory is no longer local to the thread. Thus, it may take longer for the thread to finish running on Node B than it would have taken to wait for a processor on Node A to become available, and then to execute the thread on the original node with local memory access. 32.1. Categories of scheduling policies Performance sensitive applications often benefit from the designer or administrator determining where threads are run. The Linux scheduler implements a number of scheduling policies which determine where and for how long a thread runs. The following are the two major categories of scheduling policies: Normal policies Normal threads are used for tasks of normal priority. Realtime policies Realtime policies are used for time-sensitive tasks that must complete without interruptions. Realtime threads are not subject to time slicing. This means the thread runs until they block, exit, voluntarily yield, or are preempted by a higher priority thread. The lowest priority realtime thread is scheduled before any thread with a normal policy. For more information, see Static priority scheduling with SCHED_FIFO and Round robin priority scheduling with SCHED_RR . Additional resources sched(7) , sched_setaffinity(2) , sched_getaffinity(2) , sched_setscheduler(2) , and sched_getscheduler(2) man pages on your system 32.2. Static priority scheduling with SCHED_FIFO The SCHED_FIFO , also called static priority scheduling, is a realtime policy that defines a fixed priority for each thread. This policy allows administrators to improve event response time and reduce latency. It is recommended to not execute this policy for an extended period of time for time sensitive tasks. When SCHED_FIFO is in use, the scheduler scans the list of all the SCHED_FIFO threads in order of priority and schedules the highest priority thread that is ready to run. The priority level of a SCHED_FIFO thread can be any integer from 1 to 99 , where 99 is treated as the highest priority. Red Hat recommends starting with a lower number and increasing priority only when you identify latency issues. Warning Because realtime threads are not subject to time slicing, Red Hat does not recommend setting a priority as 99. This keeps your process at the same priority level as migration and watchdog threads; if your thread goes into a computational loop and these threads are blocked, they will not be able to run. Systems with a single processor will eventually hang in this situation. Administrators can limit SCHED_FIFO bandwidth to prevent realtime application programmers from initiating realtime tasks that monopolize the processor. 
The following are some of the parameters used in this policy: /proc/sys/kernel/sched_rt_period_us This parameter defines the time period, in microseconds, that is considered to be one hundred percent of the processor bandwidth. The default value is 1000000 μs , or 1 second . /proc/sys/kernel/sched_rt_runtime_us This parameter defines the time period, in microseconds, that is devoted to running real-time threads. The default value is 950000 μs , or 0.95 seconds . 32.3. Round robin priority scheduling with SCHED_RR The SCHED_RR is a round-robin variant of the SCHED_FIFO . This policy is useful when multiple threads need to run at the same priority level. Like SCHED_FIFO , SCHED_RR is a realtime policy that defines a fixed priority for each thread. The scheduler scans the list of all SCHED_RR threads in order of priority and schedules the highest priority thread that is ready to run. However, unlike SCHED_FIFO , threads that have the same priority are scheduled in a round-robin style within a certain time slice. You can set the value of this time slice in milliseconds with the sched_rr_timeslice_ms kernel parameter in the /proc/sys/kernel/sched_rr_timeslice_ms file. The lowest value is 1 millisecond . 32.4. Normal scheduling with SCHED_OTHER The SCHED_OTHER is the default scheduling policy in Red Hat Enterprise Linux 8. This policy uses the Completely Fair Scheduler (CFS) to allow fair processor access to all threads scheduled with this policy. This policy is most useful when there are a large number of threads or when data throughput is a priority, as it allows more efficient scheduling of threads over time. When this policy is in use, the scheduler creates a dynamic priority list based partly on the niceness value of each process thread. Administrators can change the niceness value of a process, but cannot change the scheduler's dynamic priority list directly. 32.5. Setting scheduler policies Check and adjust scheduler policies and priorities by using the chrt command line tool. It can start new processes with the desired properties, or change the properties of a running process. It can also be used for setting the policy at runtime. Procedure View the process ID (PID) of the active processes: Use the --pid or -p option with the ps command to view the details of the particular PID. Check the scheduling policy, PID, and priority of a particular process: Here, 468 and 476 are the PIDs of the processes. Set the scheduling policy of a process: For example, to set the process with PID 1000 to SCHED_FIFO , with a priority of 50 : For example, to set the process with PID 1000 to SCHED_OTHER , with a priority of 0 : For example, to set the process with PID 1000 to SCHED_RR , with a priority of 10 : To start a new application with a particular policy and priority, specify the name of the application: Additional resources chrt(1) man page on your system Policy Options for the chrt command Changing the priority of services during the boot process 32.6. Policy options for the chrt command Using the chrt command, you can view and set the scheduling policy of a process. The following table describes the appropriate policy options, which can be used to set the scheduling policy of a process. Table 32.1. Policy Options for the chrt Command Short option Long option Description -f --fifo Set schedule to SCHED_FIFO -o --other Set schedule to SCHED_OTHER -r --rr Set schedule to SCHED_RR 32.7.
Changing the priority of services during the boot process Using the systemd service, it is possible to set up real-time priorities for services launched during the boot process. The unit configuration directives are used to change the priority of a service during the boot process. The boot process priority change is done by using the following directives in the service section: CPUSchedulingPolicy= Sets the CPU scheduling policy for executed processes. It is used to set other , fifo , and rr policies. CPUSchedulingPriority= Sets the CPU scheduling priority for executed processes. The available priority range depends on the selected CPU scheduling policy. For real-time scheduling policies, an integer between 1 (lowest priority) and 99 (highest priority) can be used. The following procedure describes how to change the priority of a service, during the boot process, using the mcelog service. Prerequisites Install the TuneD package: Enable and start the TuneD service: Procedure View the scheduling priorities of running threads: Create a supplementary mcelog service configuration directory file and insert the policy name and priority in this file: Reload the systemd scripts configuration: Restart the mcelog service: Verification Display the mcelog priority set by systemd issue: Additional resources systemd(1) and tuna(8) man pages on your system Description of the priority range 32.8. Priority map Priorities are defined in groups, with some groups dedicated to certain kernel functions. For real-time scheduling policies, an integer between 1 (lowest priority) and 99 (highest priority) can be used. The following table describes the priority range, which can be used while setting the scheduling policy of a process. Table 32.2. Description of the priority range Priority Threads Description 1 Low priority kernel threads This priority is usually reserved for the tasks that need to be just above SCHED_OTHER . 2 - 49 Available for use The range used for typical application priorities. 50 Default hard-IRQ value 51 - 98 High priority threads Use this range for threads that execute periodically and must have quick response times. Do not use this range for CPU-bound threads as you will starve interrupts. 99 Watchdogs and migration System threads that must run at the highest priority. 32.9. TuneD cpu-partitioning profile For tuning Red Hat Enterprise Linux 8 for latency-sensitive workloads, Red Hat recommends to use the cpu-partitioning TuneD profile. Prior to Red Hat Enterprise Linux 8, the low-latency Red Hat documentation described the numerous low-level steps needed to achieve low-latency tuning. In Red Hat Enterprise Linux 8, you can perform low-latency tuning more efficiently by using the cpu-partitioning TuneD profile. This profile is easily customizable according to the requirements for individual low-latency applications. The following figure is an example to demonstrate how to use the cpu-partitioning profile. This example uses the CPU and node layout. Figure 32.1. Figure cpu-partitioning You can configure the cpu-partitioning profile in the /etc/tuned/cpu-partitioning-variables.conf file using the following configuration options: Isolated CPUs with load balancing In the cpu-partitioning figure, the blocks numbered from 4 to 23, are the default isolated CPUs. The kernel scheduler's process load balancing is enabled on these CPUs. It is designed for low-latency processes with multiple threads that need the kernel scheduler load balancing. 
You can configure the cpu-partitioning profile in the /etc/tuned/cpu-partitioning-variables.conf file using the isolated_cores=cpu-list option, which lists CPUs to isolate that will use the kernel scheduler load balancing. The list of isolated CPUs is comma-separated or you can specify a range using a dash, such as 3-5 . This option is mandatory. Any CPU missing from this list is automatically considered a housekeeping CPU. Isolated CPUs without load balancing In the cpu-partitioning figure, the blocks numbered 2 and 3, are the isolated CPUs that do not provide any additional kernel scheduler process load balancing. You can configure the cpu-partitioning profile in the /etc/tuned/cpu-partitioning-variables.conf file using the no_balance_cores=cpu-list option, which lists CPUs to isolate that will not use the kernel scheduler load balancing. Specifying the no_balance_cores option is optional, however any CPUs in this list must be a subset of the CPUs listed in the isolated_cores list. Application threads using these CPUs need to be pinned individually to each CPU. Housekeeping CPUs Any CPU not isolated in the cpu-partitioning-variables.conf file is automatically considered a housekeeping CPU. On the housekeeping CPUs, all services, daemons, user processes, movable kernel threads, interrupt handlers, and kernel timers are permitted to execute. Additional resources tuned-profiles-cpu-partitioning(7) man page on your system 32.10. Using the TuneD cpu-partitioning profile for low-latency tuning This procedure describes how to tune a system for low-latency using the TuneD's cpu-partitioning profile. It uses the example of a low-latency application that can use cpu-partitioning and the CPU layout as mentioned in the cpu-partitioning figure. The application in this case uses: One dedicated reader thread that reads data from the network will be pinned to CPU 2. A large number of threads that process this network data will be pinned to CPUs 4-23. A dedicated writer thread that writes the processed data to the network will be pinned to CPU 3. Prerequisites You have installed the cpu-partitioning TuneD profile by using the yum install tuned-profiles-cpu-partitioning command as root. Procedure Edit the /etc/tuned/cpu-partitioning-variables.conf file with the following changes: Comment the isolated_cores=USD{f:calc_isolated_cores:1} line: Add the following information for isolated CPUS: Set the cpu-partitioning TuneD profile: Reboot the system. After rebooting, the system is tuned for low-latency, according to the isolation in the cpu-partitioning figure. The application can use taskset to pin the reader and writer threads to CPUs 2 and 3, and the remaining application threads on CPUs 4-23. Verification Verify that the isolated CPUs are not reflected in the Cpus_allowed_list field: To see affinity of all processes, enter: Note TuneD cannot change the affinity of some processes, mostly kernel processes. In this example, processes with PID 4 and 9 remain unchanged. Additional resources tuned-profiles-cpu-partitioning(7) man page 32.11. Customizing the cpu-partitioning TuneD profile You can extend the TuneD profile to make additional tuning changes. For example, the cpu-partitioning profile sets the CPUs to use cstate=1 . In order to use the cpu-partitioning profile but to additionally change the CPU cstate from cstate1 to cstate0, the following procedure describes a new TuneD profile named my_profile , which inherits the cpu-partitioning profile and then sets C state-0. 
Procedure Create the /etc/tuned/my_profile directory: Create a tuned.conf file in this directory, and add the following content: Use the new profile: Note In the shared example, a reboot is not required. However, if the changes in the my_profile profile require a reboot to take effect, then reboot your machine. Additional resources tuned-profiles-cpu-partitioning(7) man page on your system | [
"ps",
"chrt -p 468 pid 468 's current scheduling policy: SCHED_FIFO pid 468 's current scheduling priority: 85 chrt -p 476 pid 476 's current scheduling policy: SCHED_OTHER pid 476 's current scheduling priority: 0",
"chrt -f -p 50 1000",
"chrt -o -p 0 1000",
"chrt -r -p 10 1000",
"chrt -f 36 /bin/my-app",
"yum install tuned",
"systemctl enable --now tuned",
"tuna --show_threads thread ctxt_switches pid SCHED_ rtpri affinity voluntary nonvoluntary cmd 1 OTHER 0 0xff 3181 292 systemd 2 OTHER 0 0xff 254 0 kthreadd 3 OTHER 0 0xff 2 0 rcu_gp 4 OTHER 0 0xff 2 0 rcu_par_gp 6 OTHER 0 0 9 0 kworker/0:0H-kblockd 7 OTHER 0 0xff 1301 1 kworker/u16:0-events_unbound 8 OTHER 0 0xff 2 0 mm_percpu_wq 9 OTHER 0 0 266 0 ksoftirqd/0 [...]",
"cat << EOF > /etc/systemd/system/mcelog.service.d/priority.conf [Service] CPUSchedulingPolicy= fifo CPUSchedulingPriority= 20 EOF",
"systemctl daemon-reload",
"systemctl restart mcelog",
"tuna -t mcelog -P thread ctxt_switches pid SCHED_ rtpri affinity voluntary nonvoluntary cmd 826 FIFO 20 0,1,2,3 13 0 mcelog",
"isolated_cores=USD{f:calc_isolated_cores:1}",
"All isolated CPUs: isolated_cores=2-23 Isolated CPUs without the kernel's scheduler load balancing: no_balance_cores=2,3",
"tuned-adm profile cpu-partitioning",
"cat /proc/self/status | grep Cpu Cpus_allowed: 003 Cpus_allowed_list: 0-1",
"ps -ae -o pid= | xargs -n 1 taskset -cp pid 1's current affinity list: 0,1 pid 2's current affinity list: 0,1 pid 3's current affinity list: 0,1 pid 4's current affinity list: 0-5 pid 5's current affinity list: 0,1 pid 6's current affinity list: 0,1 pid 7's current affinity list: 0,1 pid 9's current affinity list: 0",
"mkdir /etc/tuned/ my_profile",
"vi /etc/tuned/ my_profile /tuned.conf [main] summary=Customized tuning on top of cpu-partitioning include=cpu-partitioning [cpu] force_latency=cstate.id:0|1",
"tuned-adm profile my_profile"
]
| https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/8/html/monitoring_and_managing_system_status_and_performance/tuning-scheduling-policy_monitoring-and-managing-system-status-and-performance |
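A short shell sketch that ties together the tunables and chrt usage covered in Chapter 32 above: it reads the realtime bandwidth limits and then starts a throwaway workload under SCHED_RR to confirm the applied policy. The sleep command and the priority of 10 are stand-ins, not recommendations.
# Inspect the realtime bandwidth limits (sched_rt_period_us and sched_rt_runtime_us)
cat /proc/sys/kernel/sched_rt_period_us
cat /proc/sys/kernel/sched_rt_runtime_us
# Start a placeholder workload under SCHED_RR at priority 10 ...
chrt -r 10 sleep 60 &
# ... and confirm the policy and priority that were applied to it
chrt -p $!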
Chapter 9. Performing advanced builds | Chapter 9. Performing advanced builds The following sections provide instructions for advanced build operations, including setting build resources and maximum duration, assigning builds to nodes, chaining builds, build pruning, and build run policies. 9.1. Setting build resources By default, builds are completed by pods using unbound resources, such as memory and CPU. These resources can be limited. Procedure You can limit resource use in two ways: Limit resource use by specifying resource limits in the default container limits of a project. Limit resource use by specifying resource limits as part of the build configuration. In the following example, each of the resources , cpu , and memory parameters is optional: apiVersion: "v1" kind: "BuildConfig" metadata: name: "sample-build" spec: resources: limits: cpu: "100m" 1 memory: "256Mi" 2 1 cpu is in CPU units: 100m represents 0.1 CPU units (100 * 1e-3). 2 memory is in bytes: 256Mi represents 268435456 bytes (256 * 2 ^ 20). However, if a quota has been defined for your project, one of the following two items is required: A resources section set with an explicit requests : resources: requests: 1 cpu: "100m" memory: "256Mi" 1 The requests object contains the list of resources that correspond to the list of resources in the quota. A limit range defined in your project, where the defaults from the LimitRange object apply to pods created during the build process. Otherwise, build pod creation will fail, citing a failure to satisfy quota. 9.2. Setting maximum duration When defining a BuildConfig object, you can define its maximum duration by setting the completionDeadlineSeconds field. It is specified in seconds and is not set by default. When not set, there is no maximum duration enforced. The maximum duration is counted from the time when a build pod gets scheduled in the system, and defines how long it can be active, including the time needed to pull the builder image. After reaching the specified timeout, the build is terminated by OpenShift Container Platform. Procedure To set maximum duration, specify completionDeadlineSeconds in your BuildConfig . The following example shows the part of a BuildConfig specifying the completionDeadlineSeconds field for 30 minutes: spec: completionDeadlineSeconds: 1800 Note This setting is not supported with the Pipeline Strategy option. 9.3. Assigning builds to specific nodes Builds can be targeted to run on specific nodes by specifying labels in the nodeSelector field of a build configuration. The nodeSelector value is a set of key-value pairs that are matched to Node labels when scheduling the build pod. The nodeSelector value can also be controlled by cluster-wide default and override values. Defaults will only be applied if the build configuration does not define any key-value pairs for the nodeSelector and also does not define an explicitly empty map value of nodeSelector:{} . Override values will replace values in the build configuration on a key-by-key basis. Note If the specified NodeSelector cannot be matched to a node with those labels, the build will stay in the Pending state indefinitely. Procedure Assign builds to run on specific nodes by assigning labels in the nodeSelector field of the BuildConfig , for example: apiVersion: "v1" kind: "BuildConfig" metadata: name: "sample-build" spec: nodeSelector: 1 key1: value1 key2: value2 1 Builds associated with this build configuration will run only on nodes with the key1=value1 and key2=value2 labels. 9.4.
Chained builds For compiled languages such as Go, C, C++, and Java, including the dependencies necessary for compilation in the application image might increase the size of the image or introduce vulnerabilities that can be exploited. To avoid these problems, two builds can be chained together. One build that produces the compiled artifact, and a second build that places that artifact in a separate image that runs the artifact. In the following example, a source-to-image (S2I) build is combined with a docker build to compile an artifact that is then placed in a separate runtime image. Note Although this example chains a S2I build and a docker build, the first build can use any strategy that produces an image containing the desired artifacts, and the second build can use any strategy that can consume input content from an image. The first build takes the application source and produces an image containing a WAR file. The image is pushed to the artifact-image image stream. The path of the output artifact depends on the assemble script of the S2I builder used. In this case, it is output to /wildfly/standalone/deployments/ROOT.war . apiVersion: build.openshift.io/v1 kind: BuildConfig metadata: name: artifact-build spec: output: to: kind: ImageStreamTag name: artifact-image:latest source: git: uri: https://github.com/openshift/openshift-jee-sample.git ref: "master" strategy: sourceStrategy: from: kind: ImageStreamTag name: wildfly:10.1 namespace: openshift The second build uses image source with a path to the WAR file inside the output image from the first build. An inline dockerfile copies that WAR file into a runtime image. apiVersion: build.openshift.io/v1 kind: BuildConfig metadata: name: image-build spec: output: to: kind: ImageStreamTag name: image-build:latest source: dockerfile: |- FROM jee-runtime:latest COPY ROOT.war /deployments/ROOT.war images: - from: 1 kind: ImageStreamTag name: artifact-image:latest paths: 2 - sourcePath: /wildfly/standalone/deployments/ROOT.war destinationDir: "." strategy: dockerStrategy: from: 3 kind: ImageStreamTag name: jee-runtime:latest triggers: - imageChange: {} type: ImageChange 1 from specifies that the docker build should include the output of the image from the artifact-image image stream, which was the target of the build. 2 paths specifies which paths from the target image to include in the current docker build. 3 The runtime image is used as the source image for the docker build. The result of this setup is that the output image of the second build does not have to contain any of the build tools that are needed to create the WAR file. Also, because the second build contains an image change trigger, whenever the first build is run and produces a new image with the binary artifact, the second build is automatically triggered to produce a runtime image that contains that artifact. Therefore, both builds behave as a single build with two stages. 9.5. Pruning builds By default, builds that have completed their lifecycle are persisted indefinitely. You can limit the number of builds that are retained. Procedure Limit the number of builds that are retained by supplying a positive integer value for successfulBuildsHistoryLimit or failedBuildsHistoryLimit in your BuildConfig , for example: apiVersion: "v1" kind: "BuildConfig" metadata: name: "sample-build" spec: successfulBuildsHistoryLimit: 2 1 failedBuildsHistoryLimit: 2 2 1 successfulBuildsHistoryLimit will retain up to two builds with a status of completed . 
2 failedBuildsHistoryLimit will retain up to two builds with a status of failed , canceled , or error . Trigger build pruning by one of the following actions: Updating a build configuration. Waiting for a build to complete its lifecycle. Builds are sorted by their creation timestamp with the oldest builds being pruned first. Note Administrators can manually prune builds using the 'oc adm' object pruning command. 9.6. Build run policy The build run policy describes the order in which the builds created from the build configuration should run. This can be done by changing the value of the runPolicy field in the spec section of the Build specification. It is also possible to change the runPolicy value for existing build configurations, by: Changing Parallel to Serial or SerialLatestOnly and triggering a new build from this configuration causes the new build to wait until all parallel builds complete as the serial build can only run alone. Changing Serial to SerialLatestOnly and triggering a new build causes cancellation of all existing builds in queue, except the currently running build and the most recently created build. The newest build runs . | [
"apiVersion: \"v1\" kind: \"BuildConfig\" metadata: name: \"sample-build\" spec: resources: limits: cpu: \"100m\" 1 memory: \"256Mi\" 2",
"resources: requests: 1 cpu: \"100m\" memory: \"256Mi\"",
"spec: completionDeadlineSeconds: 1800",
"apiVersion: \"v1\" kind: \"BuildConfig\" metadata: name: \"sample-build\" spec: nodeSelector: 1 key1: value1 key2: value2",
"apiVersion: build.openshift.io/v1 kind: BuildConfig metadata: name: artifact-build spec: output: to: kind: ImageStreamTag name: artifact-image:latest source: git: uri: https://github.com/openshift/openshift-jee-sample.git ref: \"master\" strategy: sourceStrategy: from: kind: ImageStreamTag name: wildfly:10.1 namespace: openshift",
"apiVersion: build.openshift.io/v1 kind: BuildConfig metadata: name: image-build spec: output: to: kind: ImageStreamTag name: image-build:latest source: dockerfile: |- FROM jee-runtime:latest COPY ROOT.war /deployments/ROOT.war images: - from: 1 kind: ImageStreamTag name: artifact-image:latest paths: 2 - sourcePath: /wildfly/standalone/deployments/ROOT.war destinationDir: \".\" strategy: dockerStrategy: from: 3 kind: ImageStreamTag name: jee-runtime:latest triggers: - imageChange: {} type: ImageChange",
"apiVersion: \"v1\" kind: \"BuildConfig\" metadata: name: \"sample-build\" spec: successfulBuildsHistoryLimit: 2 1 failedBuildsHistoryLimit: 2 2"
]
| https://docs.redhat.com/en/documentation/openshift_container_platform/4.13/html/builds/advanced-build-operations |
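A hedged sketch that applies several of the settings from Chapter 9 above to the sample-build BuildConfig in one pass with oc patch; the label values, history limits, and the 30-minute deadline are illustrative rather than required values.
# Cap build duration at 30 minutes and pin builds to labeled nodes
oc patch bc/sample-build --type=merge -p '{"spec":{"completionDeadlineSeconds":1800,"nodeSelector":{"key1":"value1","key2":"value2"}}}'
# Limit the retained build history
oc patch bc/sample-build --type=merge -p '{"spec":{"successfulBuildsHistoryLimit":2,"failedBuildsHistoryLimit":2}}'
# Start a build and follow its logs
oc start-build sample-build --follow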
Migration Guide | Migration Guide Red Hat build of Keycloak 22.0 Red Hat Customer Content Services | null | https://docs.redhat.com/en/documentation/red_hat_build_of_keycloak/22.0/html/migration_guide/index |
Chapter 8. Configuring an OVS-DPDK deployment | Chapter 8. Configuring an OVS-DPDK deployment This section deploys OVS-DPDK within the Red Hat OpenStack Platform environment. The overcloud usually consists of nodes in predefined roles such as Controller nodes, Compute nodes, and different storage node types. Each of these default roles contains a set of services defined in the core heat templates on the director node. You must install and configure the undercloud before you can deploy the overcloud. See the Director Installation and Usage Guide for details. Important You must determine the best values for the OVS-DPDK parameters found in the network-environment.yaml file to optimize your OpenStack network for OVS-DPDK. Note Do not manually edit or change isolated_cores or other values in etc/tuned/cpu-partitioning-variables.conf that the director heat templates modify. 8.1. Deriving DPDK parameters with workflows Important This feature is available in this release as a Technology Preview , and therefore is not fully supported by Red Hat. It should only be used for testing, and should not be deployed in a production environment. For more information about Technology Preview features, see Scope of Coverage Details . See Section 7.2, "Workflows and derived parameters" for an overview of the Mistral workflow for DPDK. Prerequisites You must have bare metal introspection, including hardware inspection extras ( inspection_extras ) enabled to provide the data retrieved by this workflow. Hardware inspection extras are enabled by default. For more information about hardware of the nodes, see: Inspecting the hardware of nodes . Define the Workflows and Input Parameters for DPDK The following list outlines the input parameters you can provide to the OVS-DPDK workflows: num_phy_cores_per_numa_node_for_pmd This input parameter specifies the required minimum number of cores for the NUMA node associated with the DPDK NIC. One physical core is assigned for the other NUMA nodes not associated with DPDK NIC. Ensure that this parameter is set to 1. huge_page_allocation_percentage This input parameter specifies the required percentage of total memory, excluding NovaReservedHostMemory , that can be configured as huge pages. The KernelArgs parameter is derived using the calculated huge pages based on the huge_page_allocation_percentage specified. Ensure that this parameter is set to 50. The workflows calculate appropriate DPDK parameter values from these input parameters and the bare-metal introspection details. To define the workflows and input parameters for DPDK: Copy the usr/share/openstack-tripleo-heat-templates/plan-samples/plan-environment-derived-params.yaml file to a local directory and set the input parameters to suit your environment. Run the openstack overcloud deploy command and include the following information: The update-plan-only option The role file and all environment files specific to your environment The plan-environment-derived-parms.yaml file with the --plan-environment-file optional argument The output of this command shows the derived results, which are also merged into the plan-environment.yaml file. Note The OvsDpdkMemoryChannels parameter cannot be derived from introspection details. In most cases, this value should be 4. Deploy the overcloud with the derived parameters To deploy the overcloud with these derived parameters: Copy the derived parameters from the deploy command output to the network-environment.yaml file. 
Note You must assign at least one CPU with sibling thread on each NUMA node with or without DPDK NICs present for DPDK PMD to avoid failures in creating guest instances. Note These parameters apply to the specific role, ComputeOvsDpdk. You can apply these parameters globally, but role-specific parameters overwrite any global parameters. Deploy the overcloud using the role file and all environment files specific to your environment. Note In a cluster with Compute, ComputeOvsDpdk, and ComputeSriov, the workflow applies the formula only for the ComputeOvsDpdk role, not Compute or ComputeSriovs. 8.2. OVS-DPDK topology With Red Hat OpenStack Platform, you can create custom deployment roles, using the composable roles feature to add or remove services from each role. For more information on Composable Roles, see Composable Services and Custom Roles in Advanced Overcloud Customization . This image shows a example OVS-DPDK topology with two bonded ports for the control plane and data plane: To configure OVS-DPDK, perform the following tasks: If you use composable roles, copy and modify the roles_data.yaml file to add the custom role for OVS-DPDK. Update the appropriate network-environment.yaml file to include parameters for kernel arguments, and DPDK arguments. Update the compute.yaml file to include the bridge for DPDK interface parameters. Update the controller.yaml file to include the same bridge details for DPDK interface parameters. Run the overcloud_deploy.sh script to deploy the overcloud with the DPDK parameters. Note This guide provides examples for CPU assignments, memory allocation, and NIC configurations that can vary from your topology and use case. For more information on hardware and configuration options, see: Network Functions Virtualization Product Guide and Chapter 2, Hardware requirements . Prerequisites OVS 2.10 DPDK 17 A supported NIC. To view the list of supported NICs for NFV, see Section 2.1, "Tested NICs" . Note The Red Hat OpenStack Platform operates in OVS client mode for OVS-DPDK deployments. 8.3. Setting the MTU value for OVS-DPDK interfaces Red Hat OpenStack Platform supports jumbo frames for OVS-DPDK. To set the maximum transmission unit (MTU) value for jumbo frames you must: Set the global MTU value for networking in the network-environment.yaml file. Set the physical DPDK port MTU value in the compute.yaml file. This value is also used by the vhost user interface. Set the MTU value within any guest instances on the Compute node to ensure that you have a comparable MTU value from end to end in your configuration. Note VXLAN packets include an extra 50 bytes in the header. Calculate your MTU requirements based on these additional header bytes. For example, an MTU value of 9000 means the VXLAN tunnel MTU value is 8950 to account for these extra bytes. Note You do not need any special configuration for the physical NIC because the NIC is controlled by the DPDK PMD, and has the same MTU value set by the compute.yaml file. You cannot set an MTU value larger than the maximum value supported by the physical NIC. To set the MTU value for OVS-DPDK interfaces: Set the NeutronGlobalPhysnetMtu parameter in the network-environment.yaml file. parameter_defaults: # MTU global configuration NeutronGlobalPhysnetMtu: 9000 Note Ensure that the NeutronDpdkSocketMemory value in the network-environment.yaml file is large enough to support jumbo frames. For details, see Section 7.4.2, "Memory parameters" . Set the MTU value on the bridge to the Compute node in the controller.yaml file. 
Set the MTU values for an OVS-DPDK bond in the compute.yaml file: 8.4. Configuring a firewall for security groups Dataplane interfaces require high performance in a stateful firewall. To protect these interfaces, consider deploying a telco-grade firewall as a virtual network function (VNF). To configure control plane interfaces, set the NeutronOVSFirewallDriver parameter to openvswitch . To use the flow-based OVS firewall driver, modify the network-environment.yaml file under parameter_defaults . Example: Use the openstack port set command to disable the OVS firewall driver for dataplane interfaces. Example: 8.5. Setting multiqueue for OVS-DPDK interfaces Note Multiqueue is experimental and unsupported. To set the same number of queues for interfaces in OVS-DPDK on the Compute node, modify the compute.yaml file: 8.6. Known limitations Observe the following limitations when configuring OVS-DPDK with Red Hat OpenStack Platform for NFV: Use Linux bonds for control plane networks. Ensure that both the PCI devices used in the bond are on the same NUMA node for optimum performance. Neutron Linux bridge configuration is not supported by Red Hat. You require huge pages for every instance running on the hosts with OVS-DPDK. If huge pages are not present in the guest, the interface appears but does not function. With OVS-DPDK, there is a performance degradation of services that use tap devices, such as Distributed Virtual Routing (DVR). The resulting performance is not suitable for a production environment. When using OVS-DPDK, all bridges on the same Compute node must be of type ovs_user_bridge . The director may accept the configuration, but Red Hat OpenStack Platform does not support mixing ovs_bridge and ovs_user_bridge on the same node. 8.7. Creating a flavor and deploying an instance for OVS-DPDK After you configure OVS-DPDK for your Red Hat OpenStack Platform deployment with NFV, you can create a flavor and deploy an instance using the following steps: Create an aggregate group, and add relevant hosts for OVS-DPDK. Define metadata, for example dpdk=true , that matches defined flavor metadata. Note Pinned CPU instances can be located on the same Compute node as unpinned instances. For more information, see Configuring CPU pinning on the Compute node in the Instances and Images Guide. Create a flavor. Set flavor properties. Note that the defined metadata, dpdk=true , matches the defined metadata in the DPDK aggregate. For details on the emulator threads policy for performance improvements, see: Configure Emulator Threads to run on a Dedicated Physical CPU . Create the network. Optional: If you use multiqueue with OVS-DPDK, set the hw_vif_multiqueue_enabled property on the image that you want to use to create an instance: Deploy an instance. 8.8. Troubleshooting the configuration This section describes the steps to troubleshoot the OVS-DPDK configuration. Review the bridge configuration, and confirm that the bridge has datapath_type=netdev . Confirm that the docker container neutron_ovs_agent is configured to start automatically. Optionally, you can view logs for errors, for example, if the container fails to start. Confirm that the Poll Mode Driver CPU mask of the ovs-dpdk is pinned to the CPUs. In the case of hyper-threading, use sibling CPUs. For example, to check the sibling of CPU4 , run the following command: The sibling of CPU4 is CPU20 ; therefore, proceed with the following command: Display the status: | [
"workflow_parameters: tripleo.derive_params.v1.derive_parameters: # DPDK Parameters # # Specifies the minimum number of CPU physical cores to be allocated for DPDK # PMD threads. The actual allocation will be based on network config, if # the a DPDK port is associated with a numa node, then this configuration # will be used, else 1. num_phy_cores_per_numa_node_for_pmd: 1 # Amount of memory to be configured as huge pages in percentage. Ouf the # total available memory (excluding the NovaReservedHostMemory), the # specified percentage of the remaining is configured as huge pages. huge_page_allocation_percentage: 50",
"openstack overcloud deploy --templates --update-plan-only -r /home/stack/roles_data.yaml -e /home/stack/<environment-file> ... _#repeat as necessary_ **-p /home/stack/plan-environment-derived-params.yaml**",
"Started Mistral Workflow tripleo.validations.v1.check_pre_deployment_validations. Execution ID: 55ba73f2-2ef4-4da1-94e9-eae2fdc35535 Waiting for messages on queue '472a4180-e91b-4f9e-bd4c-1fbdfbcf414f' with no timeout. Removing the current plan files Uploading new plan files Started Mistral Workflow tripleo.plan_management.v1.update_deployment_plan. Execution ID: 7fa995f3-7e0f-4c9e-9234-dd5292e8c722 Plan updated. Processing templates in the directory /tmp/tripleoclient-SY6RcY/tripleo-heat-templates Invoking workflow (tripleo.derive_params.v1.derive_parameters) specified in plan-environment file Started Mistral Workflow tripleo.derive_params.v1.derive_parameters. Execution ID: 2d4572bf-4c5b-41f8-8981-c84a363dd95b Workflow execution is completed. result: ComputeOvsDpdkParameters: IsolCpusList: 1,2,3,4,5,6,7,9,10,17,18,19,20,21,22,23,11,12,13,14,15,25,26,27,28,29,30,31 KernelArgs: default_hugepagesz=1GB hugepagesz=1G hugepages=32 iommu=pt intel_iommu=on isolcpus=1,2,3,4,5,6,7,9,10,17,18,19,20,21,22,23,11,12,13,14,15,25,26,27,28,29,30,31 NovaReservedHostMemory: 4096 NovaComputeCpuDedicatedSet: 2,3,4,5,6,7,18,19,20,21,22,23,10,11,12,13,14,15,26,27,28,29,30,31 OvsDpdkCoreList: 0,16,8,24 OvsDpdkMemoryChannels: 4 OvsDpdkSocketMemory: 1024,1024 OvsPmdCoreList: 1,17,9,25",
"DPDK compute node. ComputeOvsDpdkParameters: KernelArgs: default_hugepagesz=1GB hugepagesz=1G hugepages=32 iommu=pt intel_iommu=on TunedProfileName: \"cpu-partitioning\" IsolCpusList: \"1,2,3,4,5,6,7,9,10,17,18,19,20,21,22,23,11,12,13,14,15,25,26,27,28,29,30,31\" NovaComputeCpuDedicatedSet: ['2,3,4,5,6,7,18,19,20,21,22,23,10,11,12,13,14,15,26,27,28,29,30,31'] NovaReservedHostMemory: 4096 OvsDpdkSocketMemory: \"1024,1024\" OvsDpdkMemoryChannels: \"4\" OvsDpdkCoreList: \"0,16,8,24\" OvsPmdCoreList: \"1,17,9,25\"",
"openstack overcloud deploy --templates -r /home/stack/roles_data.yaml -e /home/stack/ <environment-file> ... #repeat as necessary",
"parameter_defaults: # MTU global configuration NeutronGlobalPhysnetMtu: 9000",
"- type: ovs_bridge name: br-link0 use_dhcp: false members: - type: interface name: nic3 mtu: 9000",
"- type: ovs_user_bridge name: br-link0 use_dhcp: false members: - type: ovs_dpdk_bond name: dpdkbond0 mtu: 9000 rx_queue: 2 members: - type: ovs_dpdk_port name: dpdk0 mtu: 9000 members: - type: interface name: nic4 - type: ovs_dpdk_port name: dpdk1 mtu: 9000 members: - type: interface name: nic5",
"parameter_defaults: NeutronOVSFirewallDriver: openvswitch",
"openstack port set --no-security-group --disable-port-security USD{PORT}",
"- type: ovs_user_bridge name: br-link0 use_dhcp: false members: - type: ovs_dpdk_bond name: dpdkbond0 mtu: 9000 rx_queue: 2 members: - type: ovs_dpdk_port name: dpdk0 mtu: 9000 members: - type: interface name: nic4 - type: ovs_dpdk_port name: dpdk1 mtu: 9000 members: - type: interface name: nic5",
"openstack aggregate create dpdk_group # openstack aggregate add host dpdk_group [compute-host] # openstack aggregate set --property dpdk=true dpdk_group",
"openstack flavor create <flavor> --ram <MB> --disk <GB> --vcpus <#>",
"openstack flavor set <flavor> --property dpdk=true --property hw:cpu_policy=dedicated --property hw:mem_page_size=1GB --property hw:emulator_threads_policy=isolate",
"openstack network create net1 --provider-physical-network tenant --provider-network-type vlan --provider-segment <VLAN-ID> openstack subnet create subnet1 --network net1 --subnet-range 192.0.2.0/24 --dhcp",
"openstack image set --property hw_vif_multiqueue_enabled=true <image>",
"openstack server create --flavor <flavor> --image <glance image> --nic net-id=<network ID> <server_name>",
"ovs-vsctl list bridge br0 _uuid : bdce0825-e263-4d15-b256-f01222df96f3 auto_attach : [] controller : [] datapath_id : \"00002608cebd154d\" datapath_type : netdev datapath_version : \"<built-in>\" external_ids : {} fail_mode : [] flood_vlans : [] flow_tables : {} ipfix : [] mcast_snooping_enable: false mirrors : [] name : \"br0\" netflow : [] other_config : {} ports : [52725b91-de7f-41e7-bb49-3b7e50354138] protocols : [] rstp_enable : false rstp_status : {} sflow : [] status : {} stp_enable : false",
"docker inspect neutron_ovs_agent | grep -A1 RestartPolicy \"RestartPolicy\": { \"Name\": \"always\",",
"less /var/log/containers/neutron/openvswitch-agent.log",
"cat /sys/devices/system/cpu/cpu4/topology/thread_siblings_list 4,20",
"ovs-vsctl set Open_vSwitch . other_config:pmd-cpu-mask=0x100010",
"tuna -t ovs-vswitchd -CP thread ctxt_switches pid SCHED_ rtpri affinity voluntary nonvoluntary cmd 3161 OTHER 0 6 765023 614 ovs-vswitchd 3219 OTHER 0 6 1 0 handler24 3220 OTHER 0 6 1 0 handler21 3221 OTHER 0 6 1 0 handler22 3222 OTHER 0 6 1 0 handler23 3223 OTHER 0 6 1 0 handler25 3224 OTHER 0 6 1 0 handler26 3225 OTHER 0 6 1 0 handler27 3226 OTHER 0 6 1 0 handler28 3227 OTHER 0 6 2 0 handler31 3228 OTHER 0 6 2 4 handler30 3229 OTHER 0 6 2 5 handler32 3230 OTHER 0 6 953538 431 revalidator29 3231 OTHER 0 6 1424258 976 revalidator33 3232 OTHER 0 6 1424693 836 revalidator34 3233 OTHER 0 6 951678 503 revalidator36 3234 OTHER 0 6 1425128 498 revalidator35 *3235 OTHER 0 4 151123 51 pmd37* *3236 OTHER 0 20 298967 48 pmd38* 3164 OTHER 0 6 47575 0 dpdk_watchdog3 3165 OTHER 0 6 237634 0 vhost_thread1 3166 OTHER 0 6 3665 0 urcu2"
]
| https://docs.redhat.com/en/documentation/red_hat_openstack_platform/16.0/html/network_functions_virtualization_planning_and_configuration_guide/part-dpdk-configure |
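In addition to the troubleshooting steps in Section 8.8 above, the following hedged sketch checks two common OVS-DPDK failure points on a Compute node: missing huge pages, which leave guest interfaces present but non-functional, and the state of the PMD threads. The commands are standard kernel and Open vSwitch interfaces rather than values specific to this guide.
# Confirm that huge pages are allocated and still free on the Compute node
grep Huge /proc/meminfo
# Review the DPDK-related options recorded in the OVS database
ovs-vsctl get Open_vSwitch . other_config
# Show per-PMD-thread processing statistics for the DPDK datapath
ovs-appctl dpif-netdev/pmd-stats-show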
9.5. Configuring Custom Default Values | 9.5. Configuring Custom Default Values Machine-wide default settings can be set by providing a default for a key in a dconf profile. These defaults can be overridden by the user. To set a default for a key, the user profile must exist and the value for the key must be added to a dconf database. Example 9.1. Set the Default Background If it does not already exist, create the user profile in /etc/dconf/profile/user : where local is the name of a dconf database. Create a keyfile for the local database in /etc/dconf/db/local.d/ 01-background , which contains the following default settings: In the default setting of the keyfile , the following GSettings keys are used: Table 9.1. org.gnome.desktop.background schemas GSettings Keys Key Name Possible Values Description picture-options "none", "wallpaper", "centered", "scaled", "stretched", "zoom", "spanned" Determines how the image set by wallpaper_filename is rendered. picture-uri filename with the path URI to use for the background image. Note that the backend only supports local (file://) URIs. primary-color default: 000000 Left or Top color when drawing gradients, or the solid color. secondary-color default: FFFFFF Right or Bottom color when drawing gradients, not used for solid color. Edit the keyfile according to your preferences. For more information, see Section 9.3, "Browsing GSettings Values for Desktop Applications" . Update the system databases: Important When the user profile is created or changed, the user will need to log out and log in again before the changes will be applied. If you want to avoid creating a user profile, you can use the dconf command-line utility to read and write individual values or entire directories from and to a dconf database. For more information, see the dconf (1) man page. 9.5.1. Locking Down Specific Settings The lockdown mode in dconf is a useful tool for preventing users from changing specific settings. To lock down a GSettings key, you will need to create a locks subdirectory in the keyfile directory (for instance, /etc/dconf/db/local.d/locks/ ). The files inside this directory contain a list of keys to lock, and you may add any number of files to this directory. Important If you do not enforce the system settings using a lockdown, users can easily override the system settings with their own. Any settings users have made will take precedence over the system settings unless there is a lockdown enforcing the system settings. The example below demonstrates how to lock settings for the default wallpaper. Follow the procedure for any other setting you need to lock. Example 9.2. Locking Down the Default Wallpaper Set a default wallpaper by following steps in Section 10.5.1, "Customizing the Default Desktop Background" . Create a new directory named /etc/dconf/db/local.d/locks/ . Create a new file in /etc/dconf/db/local.d/locks/00-default-wallpaper with the following contents, listing one key per line: Update the system databases: | [
"user-db:user system-db: local",
"dconf path GSettings key names and their corresponding values picture-uri='file:///usr/local/share/backgrounds/wallpaper.jpg' picture-options='scaled' primary-color='000000' secondary-color='FFFFFF'",
"dconf update",
"Prevent users from changing values for the following keys: /org/gnome/desktop/background/picture-uri /org/gnome/desktop/background/picture-options /org/gnome/desktop/background/primary-color /org/gnome/desktop/background/secondary-color",
"dconf update"
]
| https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/desktop_migration_and_administration_guide/custom-default-values-system-settings |
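A small verification sketch for the procedure above, run from a user session after dconf update and a fresh login; gsettings and dconf are the standard client tools, and no setup beyond the keyfile and locks file shown above is assumed.
# Confirm that the machine-wide default is visible to the user
gsettings get org.gnome.desktop.background picture-uri
dconf read /org/gnome/desktop/background/picture-uri
# Confirm the lockdown: a locked key is reported as not writable
gsettings writable org.gnome.desktop.background picture-uri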
Chapter 13. Support | Chapter 13. Support 13.1. Support overview You can request assistance from Red Hat Support, report bugs, collect data about your environment, and monitor the health of your cluster and virtual machines (VMs) with the following tools. 13.1.1. Opening support tickets If you have encountered an issue that requires immediate assistance from Red Hat Support, you can submit a support case. To report a bug, you can create a Jira issue directly. 13.1.1.1. Submitting a support case To request support from Red Hat Support, follow the instructions for submitting a support case . It is helpful to collect debugging data to include with your support request. 13.1.1.1.1. Collecting data for Red Hat Support You can gather debugging information by performing the following steps: Collecting data about your environment Configure Prometheus and Alertmanager and collect must-gather data for OpenShift Container Platform and OpenShift Virtualization. must-gather tool for OpenShift Virtualization Configure and use the must-gather tool. Collecting data about VMs Collect must-gather data and memory dumps from VMs. 13.1.1.2. Creating a Jira issue To report a bug, you can create a Jira issue directly by filling out the form on the Create Issue page. 13.1.2. Web console monitoring You can monitor the health of your cluster and VMs by using the OpenShift Container Platform web console. The web console displays resource usage, alerts, events, and trends for your cluster and for OpenShift Virtualization components and resources. Table 13.1. Web console pages for monitoring and troubleshooting Page Description Overview page Cluster details, status, alerts, inventory, and resource usage Virtualization Overview tab OpenShift Virtualization resources, usage, alerts, and status Virtualization Top consumers tab Top consumers of CPU, memory, and storage Virtualization Migrations tab Progress of live migrations VirtualMachines VirtualMachine VirtualMachine details Metrics tab VM resource usage, storage, network, and migration VirtualMachines VirtualMachine VirtualMachine details Events tab List of VM events VirtualMachines VirtualMachine VirtualMachine details Diagnostics tab VM status conditions and volume snapshot status 13.2. Collecting data for Red Hat Support When you submit a support case to Red Hat Support, it is helpful to provide debugging information for OpenShift Container Platform and OpenShift Virtualization by using the following tools: must-gather tool The must-gather tool collects diagnostic information, including resource definitions and service logs. Prometheus Prometheus is a time-series database and a rule evaluation engine for metrics. Prometheus sends alerts to Alertmanager for processing. Alertmanager The Alertmanager service handles alerts received from Prometheus. The Alertmanager is also responsible for sending the alerts to external notification systems. For information about the OpenShift Container Platform monitoring stack, see About OpenShift Container Platform monitoring . 13.2.1. Collecting data about your environment Collecting data about your environment minimizes the time required to analyze and determine the root cause. Prerequisites Set the retention time for Prometheus metrics data to a minimum of seven days. Configure the Alertmanager to capture relevant alerts and to send alert notifications to a dedicated mailbox so that they can be viewed and persisted outside the cluster. Record the exact number of affected nodes and virtual machines. 
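For the first prerequisite above, a hedged sketch of raising the Prometheus metrics retention to seven days; it assumes the standard cluster-monitoring-config ConfigMap in the openshift-monitoring namespace and is not taken from this chapter.
oc apply -f - <<'EOF'
apiVersion: v1
kind: ConfigMap
metadata:
  name: cluster-monitoring-config
  namespace: openshift-monitoring
data:
  config.yaml: |
    prometheusK8s:
      retention: 7d
EOF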
Procedure Collect must-gather data for the cluster . Collect must-gather data for Red Hat OpenShift Data Foundation , if necessary. Collect must-gather data for OpenShift Virtualization . Collect Prometheus metrics for the cluster . 13.2.2. Collecting data about virtual machines Collecting data about malfunctioning virtual machines (VMs) minimizes the time required to analyze and determine the root cause. Prerequisites Linux VMs: Install the latest QEMU guest agent . Windows VMs: Record the Windows patch update details. Install the latest VirtIO drivers . Install the latest QEMU guest agent . If Remote Desktop Protocol (RDP) is enabled, connect by using the desktop viewer to determine whether there is a problem with the connection software. Procedure Collect must-gather data for the VMs using the /usr/bin/gather script. Collect screenshots of VMs that have crashed before you restart them. Collect memory dumps from VMs before remediation attempts. Record factors that the malfunctioning VMs have in common. For example, the VMs have the same host or network. 13.2.3. Using the must-gather tool for OpenShift Virtualization You can collect data about OpenShift Virtualization resources by running the must-gather command with the OpenShift Virtualization image. The default data collection includes information about the following resources: OpenShift Virtualization Operator namespaces, including child objects OpenShift Virtualization custom resource definitions Namespaces that contain virtual machines Basic virtual machine definitions Instance types information is not currently collected by default; you can, however, run a command to optionally collect it. Procedure Run the following command to collect data about OpenShift Virtualization: USD oc adm must-gather \ --image=registry.redhat.io/container-native-virtualization/cnv-must-gather-rhel9:v4.16.5 \ -- /usr/bin/gather 13.2.3.1. must-gather tool options You can run the oc adm must-gather command to collect must gather images for all the Operators and products deployed on your cluster without the need to explicitly specify the required images. Alternatively, you can specify a combination of scripts and environment variables for the following options: Collecting detailed virtual machine (VM) information from a namespace Collecting detailed information about specified VMs Collecting image, image-stream, and image-stream-tags information Limiting the maximum number of parallel processes used by the must-gather tool 13.2.3.1.1. Parameters Environment variables You can specify environment variables for a compatible script. NS=<namespace_name> Collect virtual machine information, including virt-launcher pod details, from the namespace that you specify. The VirtualMachine and VirtualMachineInstance CR data is collected for all namespaces. VM=<vm_name> Collect details about a particular virtual machine. To use this option, you must also specify a namespace by using the NS environment variable. PROS=<number_of_processes> Modify the maximum number of parallel processes that the must-gather tool uses. The default value is 5 . Important Using too many parallel processes can cause performance issues. Increasing the maximum number of parallel processes is not recommended. Scripts Each script is compatible only with certain environment variable combinations. /usr/bin/gather Use the default must-gather script, which collects cluster data from all namespaces and includes only basic VM information. This script is compatible only with the PROS variable. 
/usr/bin/gather --vms_details Collect VM log files, VM definitions, control-plane logs, and namespaces that belong to OpenShift Virtualization resources. Specifying namespaces includes their child objects. If you use this parameter without specifying a namespace or VM, the must-gather tool collects this data for all VMs in the cluster. This script is compatible with all environment variables, but you must specify a namespace if you use the VM variable. /usr/bin/gather --images Collect image, image-stream, and image-stream-tags custom resource information. This script is compatible only with the PROS variable. /usr/bin/gather --instancetypes Collect instance types information. This information is not currently collected by default; you can, however, optionally collect it. 13.2.3.1.2. Usage and examples Environment variables are optional. You can run a script by itself or with one or more compatible environment variables. Table 13.2. Compatible parameters Script Compatible environment variable /usr/bin/gather * PROS=<number_of_processes> /usr/bin/gather --vms_details * For a namespace: NS=<namespace_name> * For a VM: VM=<vm_name> NS=<namespace_name> * PROS=<number_of_processes> /usr/bin/gather --images * PROS=<number_of_processes> Syntax To collect must-gather logs for all Operators and products on your cluster in a single pass, run the following command: USD oc adm must-gather --all-images If you need to pass additional parameters to individual must-gather images, use the following command: USD oc adm must-gather \ --image=registry.redhat.io/container-native-virtualization/cnv-must-gather-rhel9:v4.16.5 \ -- <environment_variable_1> <environment_variable_2> <script_name> Default data collection parallel processes By default, five processes run in parallel. USD oc adm must-gather \ --image=registry.redhat.io/container-native-virtualization/cnv-must-gather-rhel9:v4.16.5 \ -- PROS=5 /usr/bin/gather 1 1 You can modify the number of parallel processes by changing the default. Detailed VM information The following command collects detailed VM information for the my-vm VM in the mynamespace namespace: USD oc adm must-gather \ --image=registry.redhat.io/container-native-virtualization/cnv-must-gather-rhel9:v4.16.5 \ -- NS=mynamespace VM=my-vm /usr/bin/gather --vms_details 1 1 The NS environment variable is mandatory if you use the VM environment variable. Image, image-stream, and image-stream-tags information The following command collects image, image-stream, and image-stream-tags information from the cluster: USD oc adm must-gather \ --image=registry.redhat.io/container-native-virtualization/cnv-must-gather-rhel9:v4.16.5 \ /usr/bin/gather --images Instance types information The following command collects instance types information from the cluster: USD oc adm must-gather \ --image=registry.redhat.io/container-native-virtualization/cnv-must-gather-rhel9:v4.16.5 \ /usr/bin/gather --instancetypes 13.3. Troubleshooting OpenShift Virtualization provides tools and logs for troubleshooting virtual machines (VMs) and virtualization components. You can troubleshoot OpenShift Virtualization components by using the tools provided in the web console or by using the oc CLI tool. 13.3.1. Events OpenShift Container Platform events are records of important life-cycle information and are useful for monitoring and troubleshooting virtual machine, namespace, and resource issues. VM events: Navigate to the Events tab of the VirtualMachine details page in the web console. 
Namespace events You can view namespace events by running the following command: USD oc get events -n <namespace> See the list of events for details about specific events. Resource events You can view resource events by running the following command: USD oc describe <resource> <resource_name> 13.3.2. Pod logs You can view logs for OpenShift Virtualization pods by using the web console or the CLI. You can also view aggregated logs by using the LokiStack in the web console. 13.3.2.1. Configuring OpenShift Virtualization pod log verbosity You can configure the verbosity level of OpenShift Virtualization pod logs by editing the HyperConverged custom resource (CR). Procedure To set log verbosity for specific components, open the HyperConverged CR in your default text editor by running the following command: USD oc edit hyperconverged kubevirt-hyperconverged -n openshift-cnv Set the log level for one or more components by editing the spec.logVerbosityConfig stanza. For example: apiVersion: hco.kubevirt.io/v1beta1 kind: HyperConverged metadata: name: kubevirt-hyperconverged spec: logVerbosityConfig: kubevirt: virtAPI: 5 1 virtController: 4 virtHandler: 3 virtLauncher: 2 virtOperator: 6 1 The log verbosity value must be an integer in the range 1-9 , where a higher number indicates a more detailed log. In this example, the virtAPI component logs are exposed if their priority level is 5 or higher. Apply your changes by saving and exiting the editor. 13.3.2.2. Viewing virt-launcher pod logs with the web console You can view the virt-launcher pod logs for a virtual machine by using the OpenShift Container Platform web console. Procedure Navigate to Virtualization VirtualMachines . Select a virtual machine to open the VirtualMachine details page. On the General tile, click the pod name to open the Pod details page. Click the Logs tab to view the logs. 13.3.2.3. Viewing OpenShift Virtualization pod logs with the CLI You can view logs for the OpenShift Virtualization pods by using the oc CLI tool. Procedure View a list of pods in the OpenShift Virtualization namespace by running the following command: USD oc get pods -n openshift-cnv Example 13.1. Example output NAME READY STATUS RESTARTS AGE disks-images-provider-7gqbc 1/1 Running 0 32m disks-images-provider-vg4kx 1/1 Running 0 32m virt-api-57fcc4497b-7qfmc 1/1 Running 0 31m virt-api-57fcc4497b-tx9nc 1/1 Running 0 31m virt-controller-76c784655f-7fp6m 1/1 Running 0 30m virt-controller-76c784655f-f4pbd 1/1 Running 0 30m virt-handler-2m86x 1/1 Running 0 30m virt-handler-9qs6z 1/1 Running 0 30m virt-operator-7ccfdbf65f-q5snk 1/1 Running 0 32m virt-operator-7ccfdbf65f-vllz8 1/1 Running 0 32m View the pod log by running the following command: USD oc logs -n openshift-cnv <pod_name> Note If a pod fails to start, you can use the -- option to view logs from the last attempt. To monitor log output in real time, use the -f option. Example 13.2. 
Example output {"component":"virt-handler","level":"info","msg":"set verbosity to 2","pos":"virt-handler.go:453","timestamp":"2022-04-17T08:58:37.373695Z"} {"component":"virt-handler","level":"info","msg":"set verbosity to 2","pos":"virt-handler.go:453","timestamp":"2022-04-17T08:58:37.373726Z"} {"component":"virt-handler","level":"info","msg":"setting rate limiter to 5 QPS and 10 Burst","pos":"virt-handler.go:462","timestamp":"2022-04-17T08:58:37.373782Z"} {"component":"virt-handler","level":"info","msg":"CPU features of a minimum baseline CPU model: map[apic:true clflush:true cmov:true cx16:true cx8:true de:true fpu:true fxsr:true lahf_lm:true lm:true mca:true mce:true mmx:true msr:true mtrr:true nx:true pae:true pat:true pge:true pni:true pse:true pse36:true sep:true sse:true sse2:true sse4.1:true ssse3:true syscall:true tsc:true]","pos":"cpu_plugin.go:96","timestamp":"2022-04-17T08:58:37.390221Z"} {"component":"virt-handler","level":"warning","msg":"host model mode is expected to contain only one model","pos":"cpu_plugin.go:103","timestamp":"2022-04-17T08:58:37.390263Z"} {"component":"virt-handler","level":"info","msg":"node-labeller is running","pos":"node_labeller.go:94","timestamp":"2022-04-17T08:58:37.391011Z"} 13.3.3. Guest system logs Viewing the boot logs of VM guests can help diagnose issues. You can configure access to guests' logs and view them by using either the OpenShift Container Platform web console or the oc CLI. This feature is disabled by default. If a VM does not explicitly have this setting enabled or disabled, it inherits the cluster-wide default setting. Important If sensitive information such as credentials or other personally identifiable information (PII) is written to the serial console, it is logged with all other visible text. Red Hat recommends using SSH to send sensitive data instead of the serial console. 13.3.3.1. Enabling default access to VM guest system logs with the web console You can enable default access to VM guest system logs by using the web console. Procedure From the side menu, click Virtualization Overview . Click the Settings tab. Click Cluster Guest management . Set Enable guest system log access to on. 13.3.3.2. Enabling default access to VM guest system logs with the CLI You can enable default access to VM guest system logs by editing the HyperConverged custom resource (CR). Procedure Open the HyperConverged CR in your default editor by running the following command: USD oc edit hyperconverged kubevirt-hyperconverged -n openshift-cnv Update the disableSerialConsoleLog value. For example: kind: HyperConverged metadata: name: kubevirt-hyperconverged spec: virtualMachineOptions: disableSerialConsoleLog: true 1 #... 1 Set the value of disableSerialConsoleLog to false if you want serial console access to be enabled on VMs by default. 13.3.3.3. Setting guest system log access for a single VM with the web console You can configure access to VM guest system logs for a single VM by using the web console. This setting takes precedence over the cluster-wide default configuration. Procedure Click Virtualization VirtualMachines from the side menu. Select a virtual machine to open the VirtualMachine details page. Click the Configuration tab. Set Guest system log access to on or off. 13.3.3.4. Setting guest system log access for a single VM with the CLI You can configure access to VM guest system logs for a single VM by editing the VirtualMachine CR. This setting takes precedence over the cluster-wide default configuration. 
Procedure Edit the virtual machine manifest by running the following command: USD oc edit vm <vm_name> Update the value of the logSerialConsole field. For example: apiVersion: kubevirt.io/v1 kind: VirtualMachine metadata: name: example-vm spec: template: spec: domain: devices: logSerialConsole: true 1 #... 1 To enable access to the guest's serial console log, set the logSerialConsole value to true . Apply the new configuration to the VM by running the following command: USD oc apply vm <vm_name> Optional: If you edited a running VM, restart the VM to apply the new configuration. For example: USD virtctl restart <vm_name> -n <namespace> 13.3.3.5. Viewing guest system logs with the web console You can view the serial console logs of a virtual machine (VM) guest by using the web console. Prerequisites Guest system log access is enabled. Procedure Click Virtualization VirtualMachines from the side menu. Select a virtual machine to open the VirtualMachine details page. Click the Diagnostics tab. Click Guest system logs to load the serial console. 13.3.3.6. Viewing guest system logs with the CLI You can view the serial console logs of a VM guest by running the oc logs command. Prerequisites Guest system log access is enabled. Procedure View the logs by running the following command, substituting your own values for <namespace> and <vm_name> : USD oc logs -n <namespace> -l kubevirt.io/domain=<vm_name> --tail=-1 -c guest-console-log 13.3.4. Log aggregation You can facilitate troubleshooting by aggregating and filtering logs. 13.3.4.1. Viewing aggregated OpenShift Virtualization logs with the LokiStack You can view aggregated logs for OpenShift Virtualization pods and containers by using the LokiStack in the web console. Prerequisites You deployed the LokiStack. Procedure Navigate to Observe Logs in the web console. Select application , for virt-launcher pod logs, or infrastructure , for OpenShift Virtualization control plane pods and containers, from the log type list. Click Show Query to display the query field. Enter the LogQL query in the query field and click Run Query to display the filtered logs. 13.3.4.2. OpenShift Virtualization LogQL queries You can view and filter aggregated logs for OpenShift Virtualization components by running Loki Query Language (LogQL) queries on the Observe Logs page in the web console. The default log type is infrastructure . The virt-launcher log type is application . Optional: You can include or exclude strings or regular expressions by using line filter expressions. Note If the query matches a large number of logs, the query might time out. Table 13.3. 
OpenShift Virtualization LogQL example queries Component LogQL query All {log_type=~".+"}|json |kubernetes_labels_app_kubernetes_io_part_of="hyperconverged-cluster" cdi-apiserver cdi-deployment cdi-operator {log_type=~".+"}|json |kubernetes_labels_app_kubernetes_io_part_of="hyperconverged-cluster" |kubernetes_labels_app_kubernetes_io_component="storage" hco-operator {log_type=~".+"}|json |kubernetes_labels_app_kubernetes_io_part_of="hyperconverged-cluster" |kubernetes_labels_app_kubernetes_io_component="deployment" kubemacpool {log_type=~".+"}|json |kubernetes_labels_app_kubernetes_io_part_of="hyperconverged-cluster" |kubernetes_labels_app_kubernetes_io_component="network" virt-api virt-controller virt-handler virt-operator {log_type=~".+"}|json |kubernetes_labels_app_kubernetes_io_part_of="hyperconverged-cluster" |kubernetes_labels_app_kubernetes_io_component="compute" ssp-operator {log_type=~".+"}|json |kubernetes_labels_app_kubernetes_io_part_of="hyperconverged-cluster" |kubernetes_labels_app_kubernetes_io_component="schedule" Container {log_type=~".+",kubernetes_container_name=~"<container>|<container>"} 1 |json|kubernetes_labels_app_kubernetes_io_part_of="hyperconverged-cluster" 1 Specify one or more containers separated by a pipe ( | ). virt-launcher You must select application from the log type list before running this query. {log_type=~".+", kubernetes_container_name="compute"}|json |!= "custom-ga-command" 1 1 |!= "custom-ga-command" excludes libvirt logs that contain the string custom-ga-command . ( BZ#2177684 ) You can filter log lines to include or exclude strings or regular expressions by using line filter expressions. Table 13.4. Line filter expressions Line filter expression Description |= "<string>" Log line contains string != "<string>" Log line does not contain string |~ "<regex>" Log line contains regular expression !~ "<regex>" Log line does not contain regular expression Example line filter expression {log_type=~".+"}|json |kubernetes_labels_app_kubernetes_io_part_of="hyperconverged-cluster" |= "error" != "timeout" Additional resources for LokiStack and LogQL About log storage Deploying the LokiStack LogQL log queries in the Grafana documentation 13.3.5. Common error messages The following error messages might appear in OpenShift Virtualization logs: ErrImagePull or ImagePullBackOff Indicates an incorrect deployment configuration or problems with the images that are referenced. 13.3.6. Troubleshooting data volumes You can check the Conditions and Events sections of the DataVolume object to analyze and resolve issues. 13.3.6.1. About data volume conditions and events You can diagnose data volume issues by examining the output of the Conditions and Events sections generated by the command: USD oc describe dv <DataVolume> The Conditions section displays the following Types : Bound Running Ready The Events section provides the following additional information: Type of event Reason for logging Source of the event Message containing additional diagnostic information. The output from oc describe does not always contains Events . An event is generated when the Status , Reason , or Message changes. Both conditions and events react to changes in the state of the data volume. For example, if you misspell the URL during an import operation, the import generates a 404 message. That message change generates an event with a reason. The output in the Conditions section is updated as well. 13.3.6.2. 
Analyzing data volume conditions and events By inspecting the Conditions and Events sections generated by the describe command, you determine the state of the data volume in relation to persistent volume claims (PVCs), and whether or not an operation is actively running or completed. You might also receive messages that offer specific details about the status of the data volume, and how it came to be in its current state. There are many different combinations of conditions. Each must be evaluated in its unique context. Examples of various combinations follow. Bound - A successfully bound PVC displays in this example. Note that the Type is Bound , so the Status is True . If the PVC is not bound, the Status is False . When the PVC is bound, an event is generated stating that the PVC is bound. In this case, the Reason is Bound and Status is True . The Message indicates which PVC owns the data volume. Message , in the Events section, provides further details including how long the PVC has been bound ( Age ) and by what resource ( From ), in this case datavolume-controller : Example output Status: Conditions: Last Heart Beat Time: 2020-07-15T03:58:24Z Last Transition Time: 2020-07-15T03:58:24Z Message: PVC win10-rootdisk Bound Reason: Bound Status: True Type: Bound ... Events: Type Reason Age From Message ---- ------ ---- ---- ------- Normal Bound 24s datavolume-controller PVC example-dv Bound Running - In this case, note that Type is Running and Status is False , indicating that an event has occurred that caused an attempted operation to fail, changing the Status from True to False . However, note that Reason is Completed and the Message field indicates Import Complete . In the Events section, the Reason and Message contain additional troubleshooting information about the failed operation. In this example, the Message displays an inability to connect due to a 404 , listed in the Events section's first Warning . From this information, you conclude that an import operation was running, creating contention for other operations that are attempting to access the data volume: Example output Status: Conditions: Last Heart Beat Time: 2020-07-15T04:31:39Z Last Transition Time: 2020-07-15T04:31:39Z Message: Import Complete Reason: Completed Status: False Type: Running ... Events: Type Reason Age From Message ---- ------ ---- ---- ------- Warning Error 12s (x2 over 14s) datavolume-controller Unable to connect to http data source: expected status code 200, got 404. Status: 404 Not Found Ready - If Type is Ready and Status is True , then the data volume is ready to be used, as in the following example. If the data volume is not ready to be used, the Status is False : Example output Status: Conditions: Last Heart Beat Time: 2020-07-15T04:31:39Z Last Transition Time: 2020-07-15T04:31:39Z Status: True Type: Ready | [
"oc adm must-gather --image=registry.redhat.io/container-native-virtualization/cnv-must-gather-rhel9:v4.16.5 -- /usr/bin/gather",
"oc adm must-gather --all-images",
"oc adm must-gather --image=registry.redhat.io/container-native-virtualization/cnv-must-gather-rhel9:v4.16.5 -- <environment_variable_1> <environment_variable_2> <script_name>",
"oc adm must-gather --image=registry.redhat.io/container-native-virtualization/cnv-must-gather-rhel9:v4.16.5 -- PROS=5 /usr/bin/gather 1",
"oc adm must-gather --image=registry.redhat.io/container-native-virtualization/cnv-must-gather-rhel9:v4.16.5 -- NS=mynamespace VM=my-vm /usr/bin/gather --vms_details 1",
"oc adm must-gather --image=registry.redhat.io/container-native-virtualization/cnv-must-gather-rhel9:v4.16.5 /usr/bin/gather --images",
"oc adm must-gather --image=registry.redhat.io/container-native-virtualization/cnv-must-gather-rhel9:v4.16.5 /usr/bin/gather --instancetypes",
"oc get events -n <namespace>",
"oc describe <resource> <resource_name>",
"oc edit hyperconverged kubevirt-hyperconverged -n openshift-cnv",
"apiVersion: hco.kubevirt.io/v1beta1 kind: HyperConverged metadata: name: kubevirt-hyperconverged spec: logVerbosityConfig: kubevirt: virtAPI: 5 1 virtController: 4 virtHandler: 3 virtLauncher: 2 virtOperator: 6",
"oc get pods -n openshift-cnv",
"NAME READY STATUS RESTARTS AGE disks-images-provider-7gqbc 1/1 Running 0 32m disks-images-provider-vg4kx 1/1 Running 0 32m virt-api-57fcc4497b-7qfmc 1/1 Running 0 31m virt-api-57fcc4497b-tx9nc 1/1 Running 0 31m virt-controller-76c784655f-7fp6m 1/1 Running 0 30m virt-controller-76c784655f-f4pbd 1/1 Running 0 30m virt-handler-2m86x 1/1 Running 0 30m virt-handler-9qs6z 1/1 Running 0 30m virt-operator-7ccfdbf65f-q5snk 1/1 Running 0 32m virt-operator-7ccfdbf65f-vllz8 1/1 Running 0 32m",
"oc logs -n openshift-cnv <pod_name>",
"{\"component\":\"virt-handler\",\"level\":\"info\",\"msg\":\"set verbosity to 2\",\"pos\":\"virt-handler.go:453\",\"timestamp\":\"2022-04-17T08:58:37.373695Z\"} {\"component\":\"virt-handler\",\"level\":\"info\",\"msg\":\"set verbosity to 2\",\"pos\":\"virt-handler.go:453\",\"timestamp\":\"2022-04-17T08:58:37.373726Z\"} {\"component\":\"virt-handler\",\"level\":\"info\",\"msg\":\"setting rate limiter to 5 QPS and 10 Burst\",\"pos\":\"virt-handler.go:462\",\"timestamp\":\"2022-04-17T08:58:37.373782Z\"} {\"component\":\"virt-handler\",\"level\":\"info\",\"msg\":\"CPU features of a minimum baseline CPU model: map[apic:true clflush:true cmov:true cx16:true cx8:true de:true fpu:true fxsr:true lahf_lm:true lm:true mca:true mce:true mmx:true msr:true mtrr:true nx:true pae:true pat:true pge:true pni:true pse:true pse36:true sep:true sse:true sse2:true sse4.1:true ssse3:true syscall:true tsc:true]\",\"pos\":\"cpu_plugin.go:96\",\"timestamp\":\"2022-04-17T08:58:37.390221Z\"} {\"component\":\"virt-handler\",\"level\":\"warning\",\"msg\":\"host model mode is expected to contain only one model\",\"pos\":\"cpu_plugin.go:103\",\"timestamp\":\"2022-04-17T08:58:37.390263Z\"} {\"component\":\"virt-handler\",\"level\":\"info\",\"msg\":\"node-labeller is running\",\"pos\":\"node_labeller.go:94\",\"timestamp\":\"2022-04-17T08:58:37.391011Z\"}",
"oc edit hyperconverged kubevirt-hyperconverged -n openshift-cnv",
"kind: HyperConverged metadata: name: kubevirt-hyperconverged spec: virtualMachineOptions: disableSerialConsoleLog: true 1 #",
"oc edit vm <vm_name>",
"apiVersion: kubevirt.io/v1 kind: VirtualMachine metadata: name: example-vm spec: template: spec: domain: devices: logSerialConsole: true 1 #",
"oc apply vm <vm_name>",
"virtctl restart <vm_name> -n <namespace>",
"oc logs -n <namespace> -l kubevirt.io/domain=<vm_name> --tail=-1 -c guest-console-log",
"{log_type=~\".+\"}|json |kubernetes_labels_app_kubernetes_io_part_of=\"hyperconverged-cluster\"",
"{log_type=~\".+\"}|json |kubernetes_labels_app_kubernetes_io_part_of=\"hyperconverged-cluster\" |kubernetes_labels_app_kubernetes_io_component=\"storage\"",
"{log_type=~\".+\"}|json |kubernetes_labels_app_kubernetes_io_part_of=\"hyperconverged-cluster\" |kubernetes_labels_app_kubernetes_io_component=\"deployment\"",
"{log_type=~\".+\"}|json |kubernetes_labels_app_kubernetes_io_part_of=\"hyperconverged-cluster\" |kubernetes_labels_app_kubernetes_io_component=\"network\"",
"{log_type=~\".+\"}|json |kubernetes_labels_app_kubernetes_io_part_of=\"hyperconverged-cluster\" |kubernetes_labels_app_kubernetes_io_component=\"compute\"",
"{log_type=~\".+\"}|json |kubernetes_labels_app_kubernetes_io_part_of=\"hyperconverged-cluster\" |kubernetes_labels_app_kubernetes_io_component=\"schedule\"",
"{log_type=~\".+\",kubernetes_container_name=~\"<container>|<container>\"} 1 |json|kubernetes_labels_app_kubernetes_io_part_of=\"hyperconverged-cluster\"",
"{log_type=~\".+\", kubernetes_container_name=\"compute\"}|json |!= \"custom-ga-command\" 1",
"{log_type=~\".+\"}|json |kubernetes_labels_app_kubernetes_io_part_of=\"hyperconverged-cluster\" |= \"error\" != \"timeout\"",
"oc describe dv <DataVolume>",
"Status: Conditions: Last Heart Beat Time: 2020-07-15T03:58:24Z Last Transition Time: 2020-07-15T03:58:24Z Message: PVC win10-rootdisk Bound Reason: Bound Status: True Type: Bound Events: Type Reason Age From Message ---- ------ ---- ---- ------- Normal Bound 24s datavolume-controller PVC example-dv Bound",
"Status: Conditions: Last Heart Beat Time: 2020-07-15T04:31:39Z Last Transition Time: 2020-07-15T04:31:39Z Message: Import Complete Reason: Completed Status: False Type: Running Events: Type Reason Age From Message ---- ------ ---- ---- ------- Warning Error 12s (x2 over 14s) datavolume-controller Unable to connect to http data source: expected status code 200, got 404. Status: 404 Not Found",
"Status: Conditions: Last Heart Beat Time: 2020-07-15T04:31:39Z Last Transition Time: 2020-07-15T04:31:39Z Status: True Type: Ready"
]
| https://docs.redhat.com/en/documentation/openshift_container_platform/4.16/html/virtualization/support |
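The data volume checks described in section 13.3.6 can also be scripted while you wait for an import to finish. The following shell sketch is not part of the product documentation: it assumes a hypothetical DataVolume named example-dv in a hypothetical namespace my-namespace, and it uses only the standard oc wait and oc describe commands shown above.

#!/bin/bash
# Minimal sketch: block until a hypothetical DataVolume reports Ready, then dump
# its Conditions and Events sections for review (see section 13.3.6).
set -euo pipefail

NAMESPACE=my-namespace   # hypothetical namespace
DV_NAME=example-dv       # hypothetical DataVolume name

# Wait up to 10 minutes for the Ready condition to become True.
oc wait "dv/${DV_NAME}" -n "${NAMESPACE}" --for=condition=Ready --timeout=600s

# Print the Conditions and Events sections for inspection.
oc describe dv "${DV_NAME}" -n "${NAMESPACE}"

If the import fails instead, the same oc describe output surfaces the warning events, such as the 404 message shown in the earlier example, without opening the web console.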
Chapter 362. Validator Component | Chapter 362. Validator Component Available as of Camel version 1.1 The Validation component performs XML validation of the message body using the JAXP Validation API and based on any of the supported XML schema languages, which defaults to XML Schema Note that the Jing component also supports the following useful schema languages: RelaxNG Compact Syntax RelaxNG XML Syntax The MSV component also supports RelaxNG XML Syntax . 362.1. URI format Where someLocalOrRemoteResource is some URL to a local resource on the classpath or a full URL to a remote resource or resource on the file system which contains the XSD to validate against. For example: msv:org/foo/bar.xsd msv:file:../foo/bar.xsd msv:http://acme.com/cheese.xsd validator:com/mypackage/myschema.xsd Maven users will need to add the following dependency to their pom.xml for this component when using Camel 2.8 or older: <dependency> <groupId>org.apache.camel</groupId> <artifactId>camel-spring</artifactId> <version>x.x.x</version> <!-- use the same version as your Camel core version --> </dependency> From Camel 2.9 onwards the Validation component is provided directly in the camel-core. 362.2. Options The Validator component supports 2 options, which are listed below. Name Description Default Type resourceResolverFactory (advanced) To use a custom LSResourceResolver which depends on a dynamic endpoint resource URI ValidatorResource ResolverFactory resolveProperty Placeholders (advanced) Whether the component should resolve property placeholders on itself when starting. Only properties which are of String type can use property placeholders. true boolean The Validator endpoint is configured using URI syntax: with the following path and query parameters: 362.2.1. Path Parameters (1 parameters): Name Description Default Type resourceUri Required URL to a local resource on the classpath,or a reference to lookup a bean in the Registry, or a full URL to a remote resource or resource on the file system which contains the XSD to validate against. String 362.2.2. Query Parameters (11 parameters): Name Description Default Type failOnNullBody (producer) Whether to fail if no body exists. true boolean failOnNullHeader (producer) Whether to fail if no header exists when validating against a header. true boolean headerName (producer) To validate against a header instead of the message body. String errorHandler (advanced) To use a custom org.apache.camel.processor.validation.ValidatorErrorHandler. The default error handler captures the errors and throws an exception. ValidatorErrorHandler resourceResolver (advanced) To use a custom LSResourceResolver. See also setResourceResolverFactory(ValidatorResourceResolverFactory) LSResourceResolver resourceResolverFactory (advanced) For creating a resource resolver which depends on the endpoint resource URI. Must not be used in combination with method setResourceResolver(LSResourceResolver). If not set then DefaultValidatorResourceResolverFactory is used ValidatorResource ResolverFactory schemaFactory (advanced) To use a custom javax.xml.validation.SchemaFactory SchemaFactory schemaLanguage (advanced) Configures the W3C XML Schema Namespace URI. http://www.w3.org/2001/XMLSchema String synchronous (advanced) Sets whether synchronous processing should be strictly used, or Camel is allowed to use asynchronous processing (if supported). false boolean useDom (advanced) Whether DOMSource/DOMResult or SaxSource/SaxResult should be used by the validator. 
false boolean useSharedSchema (advanced) Whether the Schema instance should be shared or not. This option is introduced to work around a JDK 1.6.x bug. Xerces should not have this issue. true boolean 362.3. Example The following example shows how to configure a route from the endpoint direct:start that goes to one of two endpoints, either mock:valid or mock:invalid, based on whether or not the XML matches the given schema (which is supplied on the classpath); a sketch of such a route is provided at the end of this entry. 362.4. Advanced: JMX method clearCachedSchema Since Camel 2.17 , you can use the JMX operation clearCachedSchema to force the cached schema in the validator endpoint to be cleared and reread on the next process call. You can also use this method to programmatically clear the cache. This method is available on the ValidatorEndpoint class.
"validator:someLocalOrRemoteResource",
"<dependency> <groupId>org.apache.camel</groupId> <artifactId>camel-spring</artifactId> <version>x.x.x</version> <!-- use the same version as your Camel core version --> </dependency>",
"validator:resourceUri"
]
| https://docs.redhat.com/en/documentation/red_hat_fuse/7.13/html/apache_camel_component_reference/validator-component |
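The route that section 362.3 describes is not reproduced above. The following Spring XML snippet is a sketch of such a route rather than the exact example from the Camel distribution: it assumes a schema at com/mypackage/myschema.xsd on the classpath and routes messages that fail validation to mock:invalid by catching org.apache.camel.ValidationException.

<camelContext xmlns="http://camel.apache.org/schema/spring">
  <route>
    <from uri="direct:start"/>
    <doTry>
      <!-- Validate the message body against the XSD on the classpath (hypothetical path). -->
      <to uri="validator:com/mypackage/myschema.xsd"/>
      <to uri="mock:valid"/>
      <doCatch>
        <!-- Validation failures surface as ValidationException. -->
        <exception>org.apache.camel.ValidationException</exception>
        <to uri="mock:invalid"/>
      </doCatch>
    </doTry>
  </route>
</camelContext>

The equivalent Java DSL uses doTry()/doCatch(ValidationException.class) around the same validator: endpoint URI.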
Chapter 7. Using service accounts in applications | Chapter 7. Using service accounts in applications 7.1. Service accounts overview A service account is an Red Hat OpenShift Service on AWS account that allows a component to directly access the API. Service accounts are API objects that exist within each project. Service accounts provide a flexible way to control API access without sharing a regular user's credentials. When you use the Red Hat OpenShift Service on AWS CLI or web console, your API token authenticates you to the API. You can associate a component with a service account so that they can access the API without using a regular user's credentials. Each service account's user name is derived from its project and name: system:serviceaccount:<project>:<name> Every service account is also a member of two groups: Group Description system:serviceaccounts Includes all service accounts in the system. system:serviceaccounts:<project> Includes all service accounts in the specified project. 7.2. Default service accounts Your Red Hat OpenShift Service on AWS cluster contains default service accounts for cluster management and generates more service accounts for each project. 7.2.1. Default cluster service accounts Several infrastructure controllers run using service account credentials. The following service accounts are created in the Red Hat OpenShift Service on AWS infrastructure project ( openshift-infra ) at server start, and given the following roles cluster-wide: Service account Description replication-controller Assigned the system:replication-controller role deployment-controller Assigned the system:deployment-controller role build-controller Assigned the system:build-controller role. Additionally, the build-controller service account is included in the privileged security context constraint to create privileged build pods. 7.2.2. Default project service accounts and roles Three service accounts are automatically created in each project: Service account Usage builder Used by build pods. It is given the system:image-builder role, which allows pushing images to any imagestream in the project using the internal Docker registry. Note The builder service account is not created if the Build cluster capability is not enabled. deployer Used by deployment pods and given the system:deployer role, which allows viewing and modifying replication controllers and pods in the project. Note The deployer service account is not created if the DeploymentConfig cluster capability is not enabled. default Used to run all other pods unless they specify a different service account. All service accounts in a project are given the system:image-puller role, which allows pulling images from any image stream in the project using the internal container image registry. 7.2.3. Automatically generated image pull secrets By default, Red Hat OpenShift Service on AWS creates an image pull secret for each service account. Note Prior to Red Hat OpenShift Service on AWS 4.16, a long-lived service account API token secret was also generated for each service account that was created. Starting with Red Hat OpenShift Service on AWS 4.16, this service account API token secret is no longer created. After upgrading to 4, any existing long-lived service account API token secrets are not deleted and will continue to function. 
For information about detecting long-lived API tokens that are in use in your cluster or deleting them if they are not needed, see the Red Hat Knowledgebase article Long-lived service account API tokens in OpenShift Container Platform . This image pull secret is necessary to integrate the OpenShift image registry into the cluster's user authentication and authorization system. However, if you do not enable the ImageRegistry capability or if you disable the integrated OpenShift image registry in the Cluster Image Registry Operator's configuration, an image pull secret is not generated for each service account. When the integrated OpenShift image registry is disabled on a cluster that previously had it enabled, the previously generated image pull secrets are deleted automatically. 7.3. Creating service accounts You can create a service account in a project and grant it permissions by binding it to a role. Procedure Optional: To view the service accounts in the current project: USD oc get sa Example output NAME SECRETS AGE builder 1 2d default 1 2d deployer 1 2d To create a new service account in the current project: USD oc create sa <service_account_name> 1 1 To create a service account in a different project, specify -n <project_name> . Example output serviceaccount "robot" created Tip You can alternatively apply the following YAML to create the service account: apiVersion: v1 kind: ServiceAccount metadata: name: <service_account_name> namespace: <current_project> Optional: View the secrets for the service account: USD oc describe sa robot Example output Name: robot Namespace: project1 Labels: <none> Annotations: openshift.io/internal-registry-pull-secret-ref: robot-dockercfg-qzbhb Image pull secrets: robot-dockercfg-qzbhb Mountable secrets: robot-dockercfg-qzbhb Tokens: <none> Events: <none> | [
"system:serviceaccount:<project>:<name>",
"oc get sa",
"NAME SECRETS AGE builder 1 2d default 1 2d deployer 1 2d",
"oc create sa <service_account_name> 1",
"serviceaccount \"robot\" created",
"apiVersion: v1 kind: ServiceAccount metadata: name: <service_account_name> namespace: <current_project>",
"oc describe sa robot",
"Name: robot Namespace: project1 Labels: <none> Annotations: openshift.io/internal-registry-pull-secret-ref: robot-dockercfg-qzbhb Image pull secrets: robot-dockercfg-qzbhb Mountable secrets: robot-dockercfg-qzbhb Tokens: <none> Events: <none>"
]
| https://docs.redhat.com/en/documentation/red_hat_openshift_service_on_aws/4/html/authentication_and_authorization/using-service-accounts |
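The procedure above stops once the service account exists. As an illustration only, and not part of the original procedure, you could grant the account a role with a command such as oc policy add-role-to-user view -z robot -n project1 and then run a workload under that identity by setting serviceAccountName in the pod spec. The pod name and image below are hypothetical placeholders.

apiVersion: v1
kind: Pod
metadata:
  name: robot-client          # hypothetical pod name
  namespace: project1
spec:
  serviceAccountName: robot   # the service account created in the example above
  containers:
  - name: client
    image: registry.access.redhat.com/ubi9/ubi-minimal   # hypothetical image
    command: ["sleep", "infinity"]

API requests made from this pod with the mounted service account token are then authorized according to the roles bound to system:serviceaccount:project1:robot.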
Installing and configuring | Installing and configuring Red Hat OpenShift Pipelines 1.15 Installing and configuring OpenShift Pipelines Red Hat OpenShift Documentation Team | null | https://docs.redhat.com/en/documentation/red_hat_openshift_pipelines/1.15/html/installing_and_configuring/index |
2.5. Network Configuration | 2.5. Network Configuration Figure 2.8. Network Configuration If the system to be installed via kickstart does not have an Ethernet card, do not configure one on the Network Configuration page. Networking is only required if you choose a networking-based installation method (NFS, FTP, or HTTP). Networking can always be configured after installation with the Network Administration Tool ( system-config-network ). Refer to Chapter 17, Network Configuration for details. For each Ethernet card on the system, click Add Network Device and select the network device and network type for the device. Select eth0 to configure the first Ethernet card, eth1 for the second Ethernet card, and so on. | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/4/html/system_administration_guide/rhkstool-network_configuration |
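For reference, the selections made on this page are written to the kickstart file as network directives. The lines below are a sketch only, assuming eth0 uses DHCP and eth1 uses a static address; the addresses are placeholders, not defaults.

# Sketch of generated kickstart network lines (placeholder values)
network --device=eth0 --bootproto=dhcp
network --device=eth1 --bootproto=static --ip=192.168.1.10 --netmask=255.255.255.0 --gateway=192.168.1.1 --nameserver=192.168.1.1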
Chapter 5. Customizing the Ceph Storage cluster | Chapter 5. Customizing the Ceph Storage cluster Director deploys containerized Red Hat Ceph Storage using a default configuration. You can customize Ceph Storage by overriding the default settings. Prerequisites To deploy containerized Ceph Storage, you must include the /usr/share/openstack-tripleo-heat-templates/environments/ceph-ansible/ceph-ansible.yaml file during overcloud deployment. This environment file defines the following resources: CephAnsibleDisksConfig - This resource maps the Ceph Storage node disk layout. For more information, see Section 5.3, "Mapping the Ceph Storage node disk layout" . CephConfigOverrides - This resource applies all other custom settings to your Ceph Storage cluster. Use these resources to override any defaults that the director sets for containerized Ceph Storage. Procedure Enable the Red Hat Ceph Storage 4 Tools repository: Install the ceph-ansible package on the undercloud: To customize your Ceph Storage cluster, define custom parameters in a new environment file, for example, /home/stack/templates/ceph-config.yaml . You can apply Ceph Storage cluster settings with the following syntax in the parameter_defaults section of your environment file: Note You can apply the CephConfigOverrides parameter to the [global] section of the ceph.conf file, as well as any other section, such as [osd] , [mon] , and [client] . If you specify a section, the key:value data goes into the specified section. If you do not specify a section, the data goes into the [global] section by default. For information about Ceph Storage configuration, customization, and supported parameters, see Red Hat Ceph Storage Configuration Guide . Replace KEY and VALUE with the Ceph cluster settings that you want to apply. For example, in the global section, max_open_files is the KEY and 131072 is the corresponding VALUE : This configuration results in the following settings defined in the configuration file of your Ceph cluster: 5.1. Setting ceph-ansible group variables The ceph-ansible tool is a playbook used to install and manage Ceph Storage clusters. The ceph-ansible tool has a group_vars directory that defines configuration options and the default settings for those options. Use the group_vars directory to set Ceph Storage parameters. For information about the group_vars directory, see Installing a Red Hat Ceph Storage cluster in the Installation Guide . Procedure To change the variable defaults in director, use the CephAnsibleExtraConfig parameter to pass the new values in heat environment files. For example, to set the ceph-ansible group variable journal_size to 40960, create an environment file with the following journal_size definition: Important Change ceph-ansible group variables with the override parameters; do not edit group variables directly in the /usr/share/ceph-ansible directory on the undercloud. 5.2. Ceph containers for Red Hat OpenStack Platform with Ceph Storage To configure Red Hat OpenStack Platform (RHOSP) to use Red Hat Ceph Storage with NFS Ganesha, you must have a Ceph container. To be compatible with Red Hat Enterprise Linux 8, RHOSP 16 requires Red Hat Ceph Storage 4 or 5 (Ceph package 14.x or Ceph package 16.x). The Ceph Storage 4 and 5 containers are hosted at registry.redhat.io , a registry that requires authentication. For more information, see Container image preparation parameters . 5.3.
Mapping the Ceph Storage node disk layout When you deploy containerized Ceph Storage, you must map the disk layout and specify dedicated block devices for the Ceph OSD service. You can perform this mapping in the environment file that you created earlier to define your custom Ceph parameters: /home/stack/templates/ceph-config.yaml . Use the CephAnsibleDisksConfig resource in parameter_defaults to map your disk layout. This resource uses the following variables: Variable Required? Default value (if unset) Description osd_scenario Yes lvm NOTE: The default value is lvm . The lvm value allows ceph-ansible to use ceph-volume to configure OSDs, block.db , and BlueStore WAL devices. devices Yes NONE. Variable must be set. A list of block devices that you want to use for OSDs on the node. dedicated_devices Yes (only if osd_scenario is non-collocated ) devices A list of block devices that maps each entry in the devices parameter to a dedicated journaling block device. You can use this variable only when osd_scenario=non-collocated . dmcrypt No false Sets whether data stored on OSDs is encrypted ( true ) or unencrypted ( false ). osd_objectstore No bluestore NOTE: The default value is bluestore . Sets the storage back end used by Ceph. NOTE: Although the value defaults to bluestore , you can set osd_scenario to filestore in either collated or non-collated scenarios. You can set the value to filestore in a non-collated scenario where dedicated_devices identifies the journaling disks. You can set the value to filestore in a collated scenario in which you partition the disks defined in devices and store both OSD data and journaling data on the same device. 5.3.1. Using BlueStore Procedure To specify the block devices that you want to use as Ceph OSDs, use a variation of the following snippet: Because /dev/nvme0n1 is in a higher performing device class, the example parameter_defaults produces three OSDs that run on /dev/sdb , /dev/sdc , and /dev/sdd . The three OSDs use /dev/nvme0n1 as the block.db and BlueStore WAL device. The ceph-volume tool does this by using the batch subcommand. The same configuration is duplicated for each Ceph Storage node and assumes uniform hardware. If the block.db and BlueStore WAL data reside on the same disks as the OSDs, then change the parameter defaults in the following way: 5.3.2. Referring to devices with persistent names Procedure In some nodes, disk paths, such as /dev/sdb and /dev/sdc , might not point to the same block device during reboots. If this is the case with your Ceph Storage nodes, specify each disk with the /dev/disk/by-id/ symlink to ensure consistent block device mapping throughout your deployments: Optional: Because you must set the list of OSD devices before overcloud deployment, it might not be possible to identify and set the PCI path of disk devices. In this case, gather the /dev/disk/by-id/ symlink data for block devices during introspection. In the following example, run the first command to download the introspection data from the undercloud Object Storage service (swift) for the server b08-h03-r620-hci and save the data in a file called b08-h03-r620-hci.json . Run the second command to grep for by-id . The output of this command contains the unique /dev/disk/by-id values that you can use to identify disks. For more information about naming conventions for storage devices, see Overview of persistent naming attributes in the Managing storage devices guide. 5.3.3. 
Configuring OSDs in advanced scenarios In an environment file, you list the block devices that you want to use for OSDs in the devices variable of the CephAnsibleDisksConfig resource. When you use the devices variable without any other device configuration parameter, ceph-volume lvm batch automatically optimizes OSD configuration by evenly sharing the higher performance device as a block.db for the slower devices. You can use the following procedures to configure devices to avoid running in ceph-volume lvm batch mode. 5.3.3.1. Using a block.db to improve performance Using a block.db can improve the performance of your Ceph Storage cluster by increasing throughput and improving response times. A block.db is a database that consists of data segments and BlueStore write-ahead logs (WAL). Procedure Add the following content to an environment file: This configures four OSDs: sda , sdb , sdc , and sdd . Each pair has its own database: nvem0n1 and nvme0n2 . Note The order of the devices in the devices list is significant. List the drives followed by the block.db and BlueStore WAL (DB-WAL) device. In the example, nvme0n1 is the DB-WAL for sda and sdb and nvme0n2 is the DB-WAL for sdc and sdd . For more information, see Using BlueStore . Include the environment file that contains your new content in the deployment command with the -e option when you deploy the overcloud. 5.3.3.2. Using dedicated write-ahead log (WAL) devices You can specify a dedicated write-ahead log (WAL) device. Using devices , dedicated_devices , and bluestore_wal_devices together means that you can isolate all components of an OSD on to separate devices, which can increase performance. In the following example procedure, another additional dictionary, bluestore_wal_devices , isolates the write-ahead log on NVMe devices nvme0n1 and nvme0n2 . Procedure Add the following content to an environment file: Include the environment file that contains your new content in the deployment command with the -e option when you deploy the overcloud. 5.3.3.3. Using pre-created LVMs for increased control In the advanced scenarios, ceph-volume uses different types of device lists to create logical volumes for OSDs. You can also create the logical volumes before ceph-volume runs and then pass ceph-volume an lvm_volumes list of those logical volumes. Although this requires that you create the logical volumes in advance, it means that you have more precise control. Because director is also responsible for hardware provisioning, you must create these LVMs in advance by using a first-boot script. Procedure Create an environment file, /home/stack/templates/firstboot.yaml , that registers your heat template as the OS::TripleO::NodeUserData resource type and contains the following content: Create an environment file, /home/stack/templates/ceph-lvm.yaml . Add a list similar to the following example, which includes three physical volumes. If your devices list is longer, expand the example according to your requirements. Use the lvm_volumes parameter instead of the devices list in the following way. This assumes that the volume groups and logical volumes are already created. A typical use case in this scenario is that the WAL and DB LVs are on SSDs and the data LV is on HDDs: Include the environment files that contain your new content in the deployment command with the -e option when you deploy the overcloud. Note Specifying a separate WAL device is necessary only if that WAL device resides on hardware that performs better than the DB device. 
Usually creating a separate DB device is sufficient and the same partition is then used for the WAL function. 5.4. Assigning custom attributes to different Ceph pools Use the CephPools parameter to apply different attributes to each Ceph Storage pool or create a new custom pool. Procedure Replace POOL with the name of the pool that you want to configure: Configure placement groups by doing one of the following: To manually override the default settings, set pg_num to the number of placement groups: Alternatively, to automatically scale placement groups, set pg_autoscale_mode to True and set target_size_ratio to a percentage relative to your expected Ceph Storage requirements: Replace PERCENTAGE with a decimal. For example, 0.5 equals 50 percent. The total percentage must equal 1.0 or 100 percent. The following values are for example only: For more information, see The placement group autoscaler in the Red Hat Ceph Storage Installation Guide . Specify the application type. The application type for Compute, Block Storage, and Image Storage is rbd . However, depending on what you use the pool for, you can specify a different application type. For example, the application type for the gnocchi metrics pool is openstack_gnocchi . For more information, see Enable Application in the Storage Strategies Guide . Note If you do not use the CephPools parameter, director sets the appropriate application type automatically, but only for the default pool list. Optional: Add a pool called custompool to create a custom pool, and set the parameters specific to the needs of your environment: This creates a new custom pool in addition to the default pools. Tip For typical pool configurations of common Ceph use cases, see the Ceph Placement Groups (PGs) per Pool Calculator . This calculator is normally used to generate the commands for manually configuring your Ceph pools. In this deployment, the director configures the pools based on your specifications. Warning Red Hat Ceph Storage 3 (Luminous) introduced a hard limit on the maximum number of PGs an OSD can have, which is 200 by default. Do not override this parameter beyond 200. If there is a problem because the Ceph PG number exceeds the maximum, adjust the pg_num per pool to address the problem, not the mon_max_pg_per_osd . 5.5. Overriding parameters for dissimilar Ceph Storage nodes All nodes with a role that hosts Ceph OSDs, such as CephStorage or ComputeHCI , use the global devices and dedicated_devices lists created in Section 5.3, "Mapping the Ceph Storage node disk layout" . These lists assume all of these servers have the same hardware. If there are servers with hardware that is not the same, you must update director with the details of the different devices and dedicated_devices lists using node-specific disk configuration. Note Roles that host Ceph OSDs include the OS::TripleO::Services::CephOSD service in the roles_data.yaml file. Ceph Storage nodes that do not have the same hardware as other nodes can cause performance issues. The more variance there is between a standard node and a node that you configure with node-specific overrides in your Red Hat OpenStack Platform (RHOSP) environment, the larger the possible performance penalty. 5.5.1. Node-specific disk configuration Director must be configured for servers that do not have the same hardware. This is called node-specific disk configuration.
You can create your node-specific disk configuration by using one of the following methods: Automatic: You can generate a JSON heat environment file to automatically create the node-specific disk configuration. Manual: You can alter the node disk layout to create the node-specific disk configuration. 5.5.1.1. Generating a JSON heat environment file for Ceph devices You can use the /usr/share/openstack-tripleo-heat-templates/tools/make_ceph_disk_list.py script to create a valid JSON heat environment file automatically from the introspection data of the Bare Metal Provisioning service (ironic). Use this JSON file to pass a node-specific disk configuration to director. Procedure Export the introspection data from the Bare Metal Provisioning service for the Ceph nodes that you want to deploy: Copy the utility to the home directory of the stack user on the undercloud and use it to generate a node_data_lookup.json file. Pass the introspection data file from the openstack baremetal introspection data save command for all nodes that host Ceph OSDs to the utility because you can only define NodeDataLookup once during a deployment. The -i option can take an expression like *.json or a list of files as input. Use the -k option to define the key of the Bare Metal Provisioning disk data structure that you want to use to identify your OSD disk. Do not use name because it produces a file of devices like /dev/sdd , which might not always point to the same device during a reboot. Instead, use by_path . This is the default if you do not specify -k . The Bare Metal Provisioning service reserves one of the available disks on the system as the root disk. The utility always excludes the root disk from the list of generated devices. Optional: You can use ./make_ceph_disk_list.py -help to see other available options. Include the node_data_lookup.json file with any other environment files that are relevant to your environment when you deploy the overcloud: 5.5.1.2. Altering the disk layout in Ceph Storage nodes Important Non-homogeneous Ceph Storage nodes can cause performance issues. The more variance there is between a standard node and a node that you configure with node-specific overrides in your Red Hat OpenStack Platform (RHOSP) environment, the larger the possible performance penalty. To pass a node-specific disk configuration to director, you must pass a heat environment file, such as node-spec-overrides.yaml , to the openstack overcloud deploy command and the file content must identify each server by a machine-unique UUID and a list of local variables to override the global variables. You can extract the machine-unique UUID for each individual server or from the Bare Metal Provisioning service (ironic) database. Note In the following procedure you create a valid YAML environment file that contains embedded valid JSON. You can also generate a full JSON file with make_ceph_disk_list.py and pass it to the deployment command as if it were YAML. For more information, see Generating a JSON heat environment file for Ceph devices . Procedure To locate the UUID for an individual server, log in to the server and enter the following command: To extract the UUID from the the Bare Metal Provisioning service database, enter the following command on the undercloud: Warning If the undercloud.conf does not have inspection_extras = true before undercloud installation or upgrade and introspection, then the machine-unique UUID is not in the Bare Metal Provisioning service database. 
Important The machine-unique UUID is not the Bare Metal Provisioning service UUID. A valid node-spec-overrides.yaml file might look like the following: All lines after the first two lines must be valid JSON. Use the jq command to verify that the JSON is valid. Remove the first two lines ( parameter_defaults: and NodeDataLookup: ) from the file temporarily. Enter cat node-spec-overrides.yaml | jq . As the node-spec-overrides.yaml file grows, you can also use the jq command to ensure that the embedded JSON is valid. For example, because the devices and dedicated_devices list must be the same length, use the following command to verify that they are the same length before you start the deployment. In the following example, the node-spec-c05-h17-h21-h25-6048r.yaml has three servers in rack c05 in which slots h17, h21, and h25 are missing disks. After the JSON is validated, add back the two lines that make it a valid environment YAML file ( parameter_defaults: and NodeDataLookup: ) and include it with -e in the deployment command. In the following example, the updated heat environment file uses NodeDataLookup for Ceph deployment. All of the servers had a devices list with 35 disks except one of them had a disk missing. This environment file overrides the default devices list for only that single node and supplies the node with the list of 34 disks that it must use instead of the global list. 5.5.2. Altering the BlueStore block.db size The BlueStore block.db is a database of data segments and BlueStore write-ahead logs (WAL). There are two methods for altering the database size. Select one of these methods to alter the size. 5.5.2.1. Altering the BlueStore block.db size when you use ceph-volume Use the following procedure to override the block.db size when you use ceph-volume . ceph-volume is used when osd_scenario: lvm . ceph-volume automatically sets the block.db size. However, you can override the block.db size for advanced scenarios. The following example uses a ceph-ansible host variable, not a Ceph configuration file override, so that the block_db_size that is used is passed to the ceph-volume call. Procedure Create a JSON environment file with content similar to the following but replace the values according to your requirements: Include the JSON file with any other environment files that are relevant to your environment when you deploy the overcloud: 5.5.2.2. Altering the BlueStore block.db size when you use ceph-disk Use the following procedure to override the block.db size when you use ceph-disk . ceph-disk is used when osd_scenario: non-collocated or osd_scenario: collocated . The following example uses a Ceph configuration override for specific nodes to set the blustore_block_db_size . This Ceph configuration option is ignored when you use ceph-volume , however ceph-disk uses this configuration option. Procedure Create a JSON environment file with content similar to the following but replace the values according to your requirements: Include the JSON file with any other environment files that are relevant to your environment when you deploy the overcloud: 5.6. Increasing the restart delay for large Ceph clusters During deployment, Ceph services such as OSDs and Monitors, are restarted and the deployment does not continue until the service is running again. Ansible waits 15 seconds (the delay) and checks 5 times for the service to start (the retries). If the service does not restart, the deployment stops so the operator can intervene. 
Depending on the size of the Ceph cluster, you may need to increase the retry or delay values. The exact names of these parameters and their defaults are as follows: Procedure Update the CephAnsibleExtraConfig parameter to change the default delay and retry values: This example makes the cluster check 30 times and wait 40 seconds between each check for the Ceph OSDs, and check 20 times and wait 10 seconds between each check for the Ceph MONs. To incorporate the changes, pass the updated yaml file with -e using openstack overcloud deploy . 5.7. Overriding Ansible environment variables The Red Hat OpenStack Platform Workflow service (mistral) uses Ansible to configure Ceph Storage, but you can customize the Ansible environment by using Ansible environment variables. Procedure To override an ANSIBLE_* environment variable, use the CephAnsibleEnvironmentVariables heat template parameter. This example configuration increases the number of forks and SSH retries: For more information about Ansible environment variables, see Ansible Configuration Settings . For more information about how to customize your Ceph Storage cluster, see Customizing the Ceph Storage cluster . 5.8. Enabling Ceph on-wire encryption Starting with Red Hat Ceph Storage 4 and later, you can enable encryption for all Ceph traffic over the network with the introduction of the messenger version 2 protocol. The secure mode setting for messenger v2 encrypts communication between Ceph daemons and Ceph clients, giving you end-to-end encryption. Note This feature is intended for use with Red Hat OpenStack Platform (RHOSP) versions 16.1 and later. It is not supported on RHOSP version 13 deployments that use external Red Hat Ceph Storage version 4. For more information, see Ceph on-wire encryption in the Red Hat Ceph Storage Architecture Guide . Procedure To enable Ceph on-wire encryption in RHOSP, configure the following parameter in a new or an existing custom environment file: After you update the environment file, redeploy the overcloud: After you implement this change, director configures the Ceph Storage cluster with the following settings: For more information about Ceph on-wire encryption, see Ceph on-wire encryption in the Architecture Guide . | [
"sudo subscription-manager repos --enable=rhceph-4-tools-for-rhel-8-x86_64-rpms",
"sudo dnf install ceph-ansible",
"parameter_defaults: CephConfigOverrides: section: KEY:VALUE",
"parameter_defaults: CephConfigOverrides: global: max_open_files: 131072 osd: osd_scrub_during_recovery: false",
"[global] max_open_files = 131072 [osd] osd_scrub_during_recovery = false",
"parameter_defaults: CephAnsibleExtraConfig: journal_size: 40960",
"parameter_defaults: CephAnsibleDisksConfig: devices: - /dev/sdb - /dev/sdc - /dev/sdd - /dev/nvme0n1 osd_scenario: lvm osd_objectstore: bluestore",
"parameter_defaults: CephAnsibleDisksConfig: devices: - /dev/sdb - /dev/sdc - /dev/sdd osd_scenario: lvm osd_objectstore: bluestore",
"parameter_defaults: CephAnsibleDisksConfig: devices: - /dev/disk/by-id/scsi-362cea7f05e14510026ee46fa2111caa9 - /dev/disk/by-id/scsi-362cea7f05e14510026ee47012171567e dedicated_devices: - /dev/nvme0n1 - /dev/nvme0n1",
"openstack baremetal introspection data save b08-h03-r620-hci | jq . > b08-h03-r620-hci.json grep by-id b08-h03-r620-hci.json",
"parameter_defaults: CephAnsibleDisksConfig: devices: - /dev/sda - /dev/sdb - /dev/nvme0n1 - /dev/sdc - /dev/sdd - /dev/nvme0n2 osd_scenario: lvm osd_objectstore: bluestore",
"parameter_defaults: CephAnsibleDisksConfig: devices: - /dev/sda - /dev/sdb dedicated_devices: - /dev/sdx - /dev/sdy bluestore_wal_devices: - /dev/nvme0n1 - /dev/nvme0n2",
"resource_registry: OS::TripleO::NodeUserData: /home/stack/templates/ceph-lvm.yaml",
"heat_template_version: 2014-10-16 description: > Extra hostname configuration resources: userdata: type: OS::Heat::MultipartMime properties: parts: - config: {get_resource: ceph_lvm_config} ceph_lvm_config: type: OS::Heat::SoftwareConfig properties: config: | #!/bin/bash -x pvcreate /dev/sda vgcreate ceph_vg_hdd /dev/sda pvcreate /dev/sdb vgcreate ceph_vg_ssd /dev/sdb pvcreate /dev/nvme0n1 vgcreate ceph_vg_nvme /dev/nvme0n1 lvcreate -n ceph_lv_wal1 -L 50G ceph_vg_nvme lvcreate -n ceph_lv_db1 -L 500G ceph_vg_ssd lvcreate -n ceph_lv_data1 -L 5T ceph_vg_hdd lvs outputs: OS::stack_id: value: {get_resource: userdata}",
"parameter_defaults: CephAnsibleDisksConfig: osd_objectstore: bluestore osd_scenario: lvm lvm_volumes: - data: ceph_lv_data1 data_vg: ceph_vg_hdd db: ceph_lv_db1 db_vg: ceph_vg_ssd wal: ceph_lv_wal1 wal_vg: ceph_vg_nvme",
"parameter_defaults: CephPools: - name: POOL",
"parameter_defaults: CephPools: - name: POOL pg_num: 128 application: rbd",
"parameter_defaults: CephPools: - name: POOL pg_autoscale_mode: True target_size_ratio: PERCENTAGE application: rbd",
"paramter_defaults: CephPools: - {\"name\": backups, \"target_size_ratio\": 0.1, \"pg_autoscale_mode\": True, \"application\": rbd} - {\"name\": volumes, \"target_size_ratio\": 0.5, \"pg_autoscale_mode\": True, \"application\": rbd} - {\"name\": vms, \"target_size_ratio\": 0.2, \"pg_autoscale_mode\": True, \"application\": rbd} - {\"name\": images, \"target_size_ratio\": 0.2, \"pg_autoscale_mode\": True, \"application\": rbd}",
"parameter_defaults: CephPools: - name: custompool pg_num: 128 application: rbd",
"openstack baremetal introspection data save oc0-ceph-0 > ceph0.json openstack baremetal introspection data save oc0-ceph-1 > ceph1.json",
"./make_ceph_disk_list.py -i ceph*.json -o node_data_lookup.json -k by_path",
"openstack overcloud deploy --templates ... -e <existing_overcloud_environment_files> -e node_data_lookup.json ...",
"dmidecode -s system-uuid",
"openstack baremetal introspection data save NODE-ID | jq .extra.system.product.uuid",
"parameter_defaults: NodeDataLookup: {\"32E87B4C-C4A7-418E-865B-191684A6883B\": {\"devices\": [\"/dev/sdc\"]}}",
"(undercloud) [stack@b08-h02-r620 tht]USD cat node-spec-c05-h17-h21-h25-6048r.yaml | jq '.[] | .devices | length' 33 30 33 (undercloud) [stack@b08-h02-r620 tht]USD cat node-spec-c05-h17-h21-h25-6048r.yaml | jq '.[] | .dedicated_devices | length' 33 30 33 (undercloud) [stack@b08-h02-r620 tht]USD",
"parameter_defaults: # c05-h01-6048r is missing scsi-0:2:35:0 (00000000-0000-0000-0000-0CC47A6EFD0C) NodeDataLookup: { \"00000000-0000-0000-0000-0CC47A6EFD0C\": { \"devices\": [ \"/dev/disk/by-path/pci-0000:03:00.0-scsi-0:2:1:0\", \"/dev/disk/by-path/pci-0000:03:00.0-scsi-0:2:32:0\", \"/dev/disk/by-path/pci-0000:03:00.0-scsi-0:2:2:0\", \"/dev/disk/by-path/pci-0000:03:00.0-scsi-0:2:3:0\", \"/dev/disk/by-path/pci-0000:03:00.0-scsi-0:2:4:0\", \"/dev/disk/by-path/pci-0000:03:00.0-scsi-0:2:5:0\", \"/dev/disk/by-path/pci-0000:03:00.0-scsi-0:2:6:0\", \"/dev/disk/by-path/pci-0000:03:00.0-scsi-0:2:33:0\", \"/dev/disk/by-path/pci-0000:03:00.0-scsi-0:2:7:0\", \"/dev/disk/by-path/pci-0000:03:00.0-scsi-0:2:8:0\", \"/dev/disk/by-path/pci-0000:03:00.0-scsi-0:2:34:0\", \"/dev/disk/by-path/pci-0000:03:00.0-scsi-0:2:9:0\", \"/dev/disk/by-path/pci-0000:03:00.0-scsi-0:2:10:0\", \"/dev/disk/by-path/pci-0000:03:00.0-scsi-0:2:11:0\", \"/dev/disk/by-path/pci-0000:03:00.0-scsi-0:2:12:0\", \"/dev/disk/by-path/pci-0000:03:00.0-scsi-0:2:13:0\", \"/dev/disk/by-path/pci-0000:03:00.0-scsi-0:2:14:0\", \"/dev/disk/by-path/pci-0000:03:00.0-scsi-0:2:15:0\", \"/dev/disk/by-path/pci-0000:03:00.0-scsi-0:2:16:0\", \"/dev/disk/by-path/pci-0000:03:00.0-scsi-0:2:17:0\", \"/dev/disk/by-path/pci-0000:03:00.0-scsi-0:2:18:0\", \"/dev/disk/by-path/pci-0000:03:00.0-scsi-0:2:19:0\", \"/dev/disk/by-path/pci-0000:03:00.0-scsi-0:2:20:0\", \"/dev/disk/by-path/pci-0000:03:00.0-scsi-0:2:21:0\", \"/dev/disk/by-path/pci-0000:03:00.0-scsi-0:2:22:0\", \"/dev/disk/by-path/pci-0000:03:00.0-scsi-0:2:23:0\", \"/dev/disk/by-path/pci-0000:03:00.0-scsi-0:2:24:0\", \"/dev/disk/by-path/pci-0000:03:00.0-scsi-0:2:25:0\", \"/dev/disk/by-path/pci-0000:03:00.0-scsi-0:2:26:0\", \"/dev/disk/by-path/pci-0000:03:00.0-scsi-0:2:27:0\", \"/dev/disk/by-path/pci-0000:03:00.0-scsi-0:2:28:0\", \"/dev/disk/by-path/pci-0000:03:00.0-scsi-0:2:29:0\", \"/dev/disk/by-path/pci-0000:03:00.0-scsi-0:2:30:0\", \"/dev/disk/by-path/pci-0000:03:00.0-scsi-0:2:31:0\" ], \"dedicated_devices\": [ \"/dev/disk/by-path/pci-0000:81:00.0-nvme-1\", \"/dev/disk/by-path/pci-0000:81:00.0-nvme-1\", \"/dev/disk/by-path/pci-0000:81:00.0-nvme-1\", \"/dev/disk/by-path/pci-0000:81:00.0-nvme-1\", \"/dev/disk/by-path/pci-0000:81:00.0-nvme-1\", \"/dev/disk/by-path/pci-0000:81:00.0-nvme-1\", \"/dev/disk/by-path/pci-0000:81:00.0-nvme-1\", \"/dev/disk/by-path/pci-0000:81:00.0-nvme-1\", \"/dev/disk/by-path/pci-0000:81:00.0-nvme-1\", \"/dev/disk/by-path/pci-0000:81:00.0-nvme-1\", \"/dev/disk/by-path/pci-0000:81:00.0-nvme-1\", \"/dev/disk/by-path/pci-0000:81:00.0-nvme-1\", \"/dev/disk/by-path/pci-0000:81:00.0-nvme-1\", \"/dev/disk/by-path/pci-0000:81:00.0-nvme-1\", \"/dev/disk/by-path/pci-0000:81:00.0-nvme-1\", \"/dev/disk/by-path/pci-0000:81:00.0-nvme-1\", \"/dev/disk/by-path/pci-0000:81:00.0-nvme-1\", \"/dev/disk/by-path/pci-0000:81:00.0-nvme-1\", \"/dev/disk/by-path/pci-0000:84:00.0-nvme-1\", \"/dev/disk/by-path/pci-0000:84:00.0-nvme-1\", \"/dev/disk/by-path/pci-0000:84:00.0-nvme-1\", \"/dev/disk/by-path/pci-0000:84:00.0-nvme-1\", \"/dev/disk/by-path/pci-0000:84:00.0-nvme-1\", \"/dev/disk/by-path/pci-0000:84:00.0-nvme-1\", \"/dev/disk/by-path/pci-0000:84:00.0-nvme-1\", \"/dev/disk/by-path/pci-0000:84:00.0-nvme-1\", \"/dev/disk/by-path/pci-0000:84:00.0-nvme-1\", \"/dev/disk/by-path/pci-0000:84:00.0-nvme-1\", \"/dev/disk/by-path/pci-0000:84:00.0-nvme-1\", \"/dev/disk/by-path/pci-0000:84:00.0-nvme-1\", \"/dev/disk/by-path/pci-0000:84:00.0-nvme-1\", \"/dev/disk/by-path/pci-0000:84:00.0-nvme-1\", 
\"/dev/disk/by-path/pci-0000:84:00.0-nvme-1\", \"/dev/disk/by-path/pci-0000:84:00.0-nvme-1\" ] } }",
"{ \"parameter_defaults\": { \"NodeDataLookup\": { \"32e87b4c-c4a7-41be-865b-191684a6883b\": { \"block_db_size\": 3221225472 }, \"ea6a84d6-cf89-4fe2-b7bd-869b3fe4dd6b\": { \"block_db_size\": 3221225472 } } } }",
"openstack overcloud deploy --templates ... -e <existing_overcloud_environment_files> -e <json_environment_file> ...",
"{ \"parameter_defaults\": { \"NodeDataLookup\": { \"32e87b4c-c4a7-41be-865b-191684a6883b\": { \"ceph_conf_overrides\": { \"osd\": { \"bluestore_block_db_size\": 3221225472 } } }, \"ea6a84d6-cf89-4fe2-b7bd-869b3fe4dd6b\": { \"ceph_conf_overrides\": { \"osd\": { \"bluestore_block_db_size\": 3221225472 } } } } } }",
"openstack overcloud deploy --templates ... -e <existing_overcloud_environment_files> -e <json_environment_file> ...",
"health_mon_check_retries: 5 health_mon_check_delay: 15 health_osd_check_retries: 5 health_osd_check_delay: 15",
"parameter_defaults: CephAnsibleExtraConfig: health_osd_check_delay: 40 health_osd_check_retries: 30 health_mon_check_delay: 20 health_mon_check_retries: 10",
"parameter_defaults: CephAnsibleEnvironmentVariables: ANSIBLE_SSH_RETRIES: '6' DEFAULT_FORKS: '35'",
"parameter_defaults: CephMsgrSecureMode: true",
"openstack overcloud deploy --templates -e <environment_file>",
"ms_cluster_mode: secure ms_service_mode: secure ms_client_mode: secure"
]
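The JSON validation steps described for node-spec override files can be collected into a short script. The following is a minimal sketch, assuming the override file is named node-spec-overrides.yaml and that jq is available on the undercloud; adjust the filename and paths for your environment:

```bash
#!/bin/bash
# Strip the first two lines (parameter_defaults: and NodeDataLookup:) so the rest
# can be parsed as JSON, then confirm that every node's devices and
# dedicated_devices lists have matching lengths before deploying.
SPEC=node-spec-overrides.yaml

tail -n +3 "$SPEC" | jq . > /dev/null || { echo "Embedded JSON is not valid"; exit 1; }

devices_len=$(tail -n +3 "$SPEC" | jq '.[] | .devices | length')
dedicated_len=$(tail -n +3 "$SPEC" | jq '.[] | .dedicated_devices | length')

if [ "$devices_len" = "$dedicated_len" ]; then
    echo "devices and dedicated_devices lists match for every node"
else
    echo "Length mismatch - review the override file before you start the deployment"
    exit 1
fi
```

Run the check before you add the parameter_defaults: and NodeDataLookup: lines back and pass the file to the deployment command with -e.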
| https://docs.redhat.com/en/documentation/red_hat_openstack_platform/16.2/html/deploying_an_overcloud_with_containerized_red_hat_ceph/configuring_ceph_storage_cluster_settings |
Chapter 3. Introduction to the IdM command-line utilities | Chapter 3. Introduction to the IdM command-line utilities Learn more about the basics of using the Identity Management (IdM) command-line utilities. Prerequisites Installed and accessible IdM server. For details, see Installing Identity Management . To use the IPA command-line interface, authenticate to IdM with a valid Kerberos ticket. For details about obtaining a valid Kerberos ticket, see Logging in to Identity Management from the command line . 3.1. What is the IPA command-line interface The IPA command-line interface (CLI) is the basic command-line interface for Identity Management (IdM) administration. It supports a lot of subcommands for managing IdM, such as the ipa user-add command to add a new user. IPA CLI allows you to: Add, manage, or remove users, groups, hosts and other objects in the network. Manage certificates. Search entries. Display and list objects. Set access rights. Get help with the correct command syntax. 3.2. What is the IPA help The IPA help is a built-in documentation system for the IdM server. The IPA command-line interface (CLI) generates available help topics from loaded IdM plugin modules. To use the IPA help utility, you must: Have an IdM server installed and running. Be authenticated with a valid Kerberos ticket. Entering the ipa help command without options displays information about basic help usage and the most common command examples. You can use the following options for different ipa help use cases: [] - Brackets mean that all parameters are optional and you can write just ipa help and the command will be executed. | - The pipe character means or . Therefore, you can specify a TOPIC , a COMMAND , or topics , or commands , with the basic ipa help command: topics - You can run the command ipa help topics to display a list of topics that are covered by the IPA help, such as user , cert , server and many others. TOPIC - The TOPIC with capital letters is a variable. Therefore, you can specify a particular topic, for example, ipa help user . commands - You can enter the command ipa help commands to display a list of commands which are covered by the IPA help, for example, user-add , ca-enable , server-show and many others. COMMAND - The COMMAND with capital letters is a variable. Therefore, you can specify a particular command, for example, ipa help user-add . 3.3. Using IPA help topics The following procedure describes how to use the IPA help in the command-line interface. Procedure Open a terminal and connect to the IdM server. Enter ipa help topics to display a list of topics covered by help. Select one of the topics and create a command according to the following pattern: ipa help [topic_name] . Instead of the topic_name string, add one of the topics you listed in the step. In the example, we use the following topic: user If the IPA help output is too long and you cannot see the whole text, use the following syntax: You can then scroll down and read the whole help. The IPA CLI displays a help page for the user topic. After reading the overview, you can see many examples with patterns for working with topic commands. 3.4. Using IPA help commands The following procedure describes how to create IPA help commands in the command-line interface. Procedure Open a terminal and connect to the IdM server. Enter ipa help commands to display a list of commands covered by help. Select one of the commands and create a help command according to the following pattern: ipa help <COMMAND> . 
Instead of the <COMMAND> string, add one of the commands you listed in the previous step. Additional resources ipa man page on your system 3.5. Structure of IPA commands The IPA CLI distinguishes the following types of commands: Built-in commands - Built-in commands are all available in the IdM server. Plug-in provided commands The structure of IPA commands allows you to manage various types of objects. For example: Users, Hosts, DNS records, Certificates, and many others. For most of these objects, the IPA CLI includes commands to: Add ( add ) Modify ( mod ) Delete ( del ) Search ( find ) Display ( show ) Commands have the following structure: ipa user-add , ipa user-mod , ipa user-del , ipa user-find , ipa user-show ipa host-add , ipa host-mod , ipa host-del , ipa host-find , ipa host-show ipa dnsrecord-add , ipa dnsrecord-mod , ipa dnsrecord-del , ipa dnsrecord-find , ipa dnsrecord-show You can create a user with the ipa user-add [options] command, where [options] are optional. If you use just the ipa user-add command, the script asks you for details one by one. To change an existing object, you need to define the object, therefore the command also includes an object: ipa user-mod USER_NAME [options] . 3.6. Using an IPA command to add a user account to IdM The following procedure describes how to add a new user to the Identity Management (IdM) database using the command line. Prerequisites You need to have administrator privileges to add user accounts to the IdM server. Procedure Open a terminal and connect to the IdM server. Enter the command for adding a new user: The command runs a script that prompts you to provide basic data necessary for creating a user account. In the First name: field, enter the first name of the new user and press the Enter key. In the Last name: field, enter the last name of the new user and press the Enter key. In the User login [suggested user name]: field, enter the user name, or just press the Enter key to accept the suggested user name. The user name must be unique for the whole IdM database. If an error occurs because that user name already exists, repeat the process with the ipa user-add command and use a different, unique user name. After you add the user name, the user account is added to the IdM database and the IPA command-line interface (CLI) prints the following output: Note By default, a user password is not set for the user account. To add a password while creating a user account, use the ipa user-add command with the following syntax: The IPA CLI then prompts you to add or confirm a user name and password. If the user has been created already, you can add the password with the ipa user-mod command. Additional resources Run the ipa help user-add command for more information about parameters. 3.7. Using an IPA command to modify a user account in IdM You can change many parameters for each user account. For example, you can add a new password to the user. Basic command syntax is different from the user-add syntax because you need to define the existing user account for which you want to perform changes, for example, add a password. Prerequisites You need to have administrator privileges to modify user accounts. Procedure Open a terminal and connect to the IdM server. Enter the ipa user-mod command, specify the user to modify, and any options, such as --password for adding a password: The command runs a script where you can add the new password. Enter the new password and press the Enter key.
The IPA CLI prints the following output: The user password is now set for the account and the user can log into IdM. Additional resources Run the ipa help user-mod command for more information about parameters. 3.8. How to supply a list of values to the IdM utilities Identity Management (IdM) stores values for multi-valued attributes in lists. IdM supports the following methods of supplying multi-valued lists: Using the same command-line argument multiple times within the same command invocation: Alternatively, you can enclose the list in curly braces, in which case the shell performs the expansion: The examples above show a command permission-add which adds permissions to an object. The object is not mentioned in the example. Instead of ... you need to add the object for which you want to add permissions. When you update such multi-valued attributes from the command line, IdM completely overwrites the list of values with a new list. Therefore, when updating a multi-valued attribute, you must specify the whole new list, not just a single value you want to add. For example, in the command above, the list of permissions includes reading, writing and deleting. When you decide to update the list with the permission-mod command, you must add all values, otherwise those not mentioned will be deleted. Example 1: - The ipa permission-mod command updates all previously added permissions. or Example 2 - The ipa permission-mod command deletes the --right=delete argument because it is not included in the command: or 3.9. How to use special characters with the IdM utilities When passing command-line arguments that include special characters to the ipa commands, escape these characters with a backslash (\). For example, common special characters include angle brackets (< and >), ampersand (&), asterisk (*), or vertical bar (|). For example, to escape an asterisk (*): Commands containing unescaped special characters do not work as expected because the shell cannot properly parse such characters. | [
"ipa help [TOPIC | COMMAND | topics | commands]",
"ipa help topics",
"ipa help user",
"ipa help user | less",
"ipa help commands",
"ipa help user-add",
"ipa user-add",
"---------------------- Added user \"euser\" ---------------------- User login: euser First name: Example Last name: User Full name: Example User Display name: Example User Initials: EU Home directory: /home/euser GECOS: Example User Login shell: /bin/sh Principal name: [email protected] Principal alias: [email protected] Email address: [email protected] UID: 427200006 GID: 427200006 Password: False Member of groups: ipausers Kerberos keys available: False",
"ipa user-add --first=Example --last=User --password",
"ipa user-mod euser --password",
"---------------------- Modified user \"euser\" ---------------------- User login: euser First name: Example Last name: User Home directory: /home/euser Principal name: [email protected] Principal alias: [email protected] Email address: [email protected] UID: 427200006 GID: 427200006 Password: True Member of groups: ipausers Kerberos keys available: True",
"ipa permission-add --right=read --permissions=write --permissions=delete",
"ipa permission-add --right={read,write,delete}",
"ipa permission-mod --right=read --right=write --right=delete",
"ipa permission-mod --right={read,write,delete}",
"ipa permission-mod --right=read --right=write",
"ipa permission-mod --right={read,write}",
"ipa certprofile-show certificate_profile --out= exported\\*profile.cfg"
]
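The interactive prompts shown above can also be skipped by passing values as options. The following is a minimal sketch, assuming you already have a valid Kerberos ticket and that the login euser and the permission name "Example Permission" are illustrative names rather than existing entries:

```bash
# Obtain a ticket as an administrator before running IPA commands.
kinit admin

# Create a user non-interactively instead of answering the prompts one by one.
ipa user-add euser --first=Example --last=User --password

# Modify the existing object by naming it explicitly.
ipa user-mod euser --password

# Multi-valued attributes are overwritten, not appended: repeat every value
# that must remain in the list when you update it.
ipa permission-mod "Example Permission" --right={read,write,delete}
```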
| https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/8/html/accessing_identity_management_services/introduction-to-the-ipa-command-line-utilities_accessing-idm-services |
Chapter 9. Monitoring the Network Observability Operator | Chapter 9. Monitoring the Network Observability Operator You can use the web console to monitor alerts related to the health of the Network Observability Operator. 9.1. Health dashboards Metrics about health and resource usage of the Network Observability Operator are located in the Observe Dashboards page in the web console. You can view metrics about the health of the Operator in the following categories: Flows per second Sampling Errors last minute Dropped flows per second Flowlogs-pipeline statistics Flowlogs-pipeline statistics views eBPF agent statistics views Operator statistics Resource usage 9.2. Health alerts A health alert banner that directs you to the dashboard can appear on the Network Traffic and Home pages if an alert is triggered. Alerts are generated in the following cases: The NetObservLokiError alert occurs if the flowlogs-pipeline workload is dropping flows because of Loki errors, such as if the Loki ingestion rate limit has been reached. The NetObservNoFlows alert occurs if no flows are ingested for a certain amount of time. The NetObservFlowsDropped alert occurs if the Network Observability eBPF agent hashmap table is full, and the eBPF agent processes flows with degraded performance, or when the capacity limiter is triggered. 9.3. Viewing health information You can access metrics about health and resource usage of the Network Observability Operator from the Dashboards page in the web console. Prerequisites You have the Network Observability Operator installed. You have access to the cluster as a user with the cluster-admin role or with view permissions for all projects. Procedure From the Administrator perspective in the web console, navigate to Observe Dashboards . From the Dashboards dropdown, select Netobserv/Health . View the metrics about the health of the Operator that are displayed on the page. 9.3.1. Disabling health alerts You can opt out of health alerting by editing the FlowCollector resource: In the web console, navigate to Operators Installed Operators . Under the Provided APIs heading for the NetObserv Operator , select Flow Collector . Select cluster , and then select the YAML tab. Add spec.processor.metrics.disableAlerts to disable health alerts, as in the following YAML sample: apiVersion: flows.netobserv.io/v1beta2 kind: FlowCollector metadata: name: cluster spec: processor: metrics: disableAlerts: [NetObservLokiError, NetObservNoFlows] 1 1 You can specify one or a list with both types of alerts to disable. 9.4. Using the eBPF agent alert An alert, NetObservAgentFlowsDropped , is triggered when the Network Observability eBPF agent hashmap table is full or when the capacity limiter is triggered. If you see this alert, consider increasing the cacheMaxFlows in the FlowCollector , as shown in the following example. Note Increasing the cacheMaxFlows might increase the memory usage of the eBPF agent. Procedure In the web console, navigate to Operators Installed Operators . Under the Provided APIs heading for the Network Observability Operator , select Flow Collector . Select cluster , and then select the YAML tab. Increase the spec.agent.ebpf.cacheMaxFlows value, as shown in the following YAML sample: 1 Increase the cacheMaxFlows value from its value at the time of the NetObservAgentFlowsDropped alert. | [
"apiVersion: flows.netobserv.io/v1beta2 kind: FlowCollector metadata: name: cluster spec: processor: metrics: disableAlerts: [NetObservLokiError, NetObservNoFlows] 1",
"apiVersion: flows.netobserv.io/v1beta2 kind: FlowCollector metadata: name: cluster spec: namespace: netobserv deploymentModel: Direct agent: type: eBPF ebpf: cacheMaxFlows: 200000 1"
]
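If you prefer the CLI to the web console steps above, the same FlowCollector fields can be changed with oc patch. This is a sketch, assuming you are logged in to the cluster with sufficient privileges; the values mirror the YAML samples shown for this chapter:

```bash
# Disable the two health alerts (spec.processor.metrics.disableAlerts).
oc patch flowcollector cluster --type=merge \
  -p '{"spec":{"processor":{"metrics":{"disableAlerts":["NetObservLokiError","NetObservNoFlows"]}}}}'

# Raise the eBPF agent cache size (spec.agent.ebpf.cacheMaxFlows) in response
# to the NetObservAgentFlowsDropped alert.
oc patch flowcollector cluster --type=merge \
  -p '{"spec":{"agent":{"ebpf":{"cacheMaxFlows":200000}}}}'
```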
| https://docs.redhat.com/en/documentation/openshift_container_platform/4.12/html/network_observability/network-observability-operator-monitoring |
Chapter 39. InternalServiceTemplate schema reference | Chapter 39. InternalServiceTemplate schema reference Used in: CruiseControlTemplate , KafkaBridgeTemplate , KafkaClusterTemplate , KafkaConnectTemplate , ZookeeperClusterTemplate Property Description metadata Metadata applied to the resource. MetadataTemplate ipFamilyPolicy Specifies the IP Family Policy used by the service. Available options are SingleStack , PreferDualStack and RequireDualStack . SingleStack is for a single IP family. PreferDualStack is for two IP families on dual-stack configured clusters or a single IP family on single-stack clusters. RequireDualStack fails unless there are two IP families on dual-stack configured clusters. If unspecified, OpenShift will choose the default value based on the service type. string (one of [RequireDualStack, SingleStack, PreferDualStack]) ipFamilies Specifies the IP Families used by the service. Available options are IPv4 and IPv6 . If unspecified, OpenShift will choose the default value based on the ipFamilyPolicy setting. string (one or more of [IPv6, IPv4]) array | null | https://docs.redhat.com/en/documentation/red_hat_streams_for_apache_kafka/2.5/html/amq_streams_api_reference/type-internalservicetemplate-reference |
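As an illustration of where these properties sit, the following sketch applies an internal service template to the Kafka bootstrap service; the placement under spec.kafka.template.bootstrapService and the cluster name are assumptions for this example, and the other templates listed above (for example, CruiseControlTemplate or KafkaConnectTemplate) take the same fields:

```yaml
apiVersion: kafka.strimzi.io/v1beta2
kind: Kafka
metadata:
  name: my-cluster
spec:
  kafka:
    # ...required fields such as replicas, listeners, and storage omitted...
    template:
      bootstrapService:
        ipFamilyPolicy: PreferDualStack
        ipFamilies:
          - IPv4
          - IPv6
```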
5.8. Modifying Resource Parameters | 5.8. Modifying Resource Parameters To modify the parameters of a configured resource, use the following command. The following sequence of commands show the initial values of the configured parameters for resource VirtualIP , the command to change the value of the ip parameter, and the values following the update command. | [
"pcs resource update resource_id [ resource_options ]",
"pcs resource show VirtualIP Resource: VirtualIP (type=IPaddr2 class=ocf provider=heartbeat) Attributes: ip=192.168.0.120 cidr_netmask=24 Operations: monitor interval=30s pcs resource update VirtualIP ip=192.169.0.120 pcs resource show VirtualIP Resource: VirtualIP (type=IPaddr2 class=ocf provider=heartbeat) Attributes: ip=192.169.0.120 cidr_netmask=24 Operations: monitor interval=30s"
]
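Multiple parameters can be changed in a single update call. A minimal sketch, reusing the VirtualIP resource from the example above; the new address value is illustrative:

```bash
# Change more than one resource option at once.
pcs resource update VirtualIP ip=192.168.0.121 cidr_netmask=24

# Confirm the new attribute values.
pcs resource show VirtualIP
```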
| https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/configuring_the_red_hat_high_availability_add-on_with_pacemaker/resourcemodify |
Chapter 7. Configure Network Bonding | Chapter 7. Configure Network Bonding Red Hat Enterprise Linux 7 allows administrators to bind multiple network interfaces together into a single, bonded, channel. Channel bonding enables two or more network interfaces to act as one, simultaneously increasing the bandwidth and providing redundancy. Warning The use of direct cable connections without network switches is not supported for bonding. The failover mechanisms described here will not work as expected without the presence of network switches. See the Red Hat Knowledgebase article Why is bonding in not supported with direct connection using crossover cables? for more information. Note The active-backup, balance-tlb and balance-alb modes do not require any specific configuration of the switch. Other bonding modes require configuring the switch to aggregate the links. For example, a Cisco switch requires EtherChannel for Modes 0, 2, and 3, but for Mode 4 LACP and EtherChannel are required. See the documentation supplied with your switch and see https://www.kernel.org/doc/Documentation/networking/bonding.txt 7.1. Understanding the Default Behavior of Controller and Port Interfaces When controlling bonded port interfaces using the NetworkManager daemon, and especially when fault finding, keep the following in mind: Starting the controller interface does not automatically start the port interfaces. Starting a port interface always starts the controller interface. Stopping the controller interface also stops the port interfaces. A controller without ports can start static IP connections. A controller without ports waits for ports when starting DHCP connections. A controller with a DHCP connection waiting for ports completes when a port with a carrier is added. A controller with a DHCP connection waiting for ports continues waiting when a port without a carrier is added. | null | https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/networking_guide/ch-Configure_Network_Bonding |
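A common way to create a bond on Red Hat Enterprise Linux 7 is with nmcli. The following is a minimal sketch, assuming NetworkManager is managing the interfaces and that mybond0, ens7, and ens8 are example names; active-backup mode is used because it needs no special switch configuration:

```bash
# Create the bond controller connection.
nmcli con add type bond con-name mybond0 ifname mybond0 mode active-backup

# Attach two port (slave) interfaces to the bond.
nmcli con add type bond-slave ifname ens7 master mybond0
nmcli con add type bond-slave ifname ens8 master mybond0

# Starting a port interface also starts the controller, but starting the
# controller does not automatically start the ports.
nmcli con up bond-slave-ens7
nmcli con up bond-slave-ens8
```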
2.6. iostat | 2.6. iostat The iostat tool, provided by the sysstat package, monitors and reports on system input/output device loading to help administrators make decisions about how to balance input/output load between physical disks. The iostat tool reports on processor or device utilization since iostat was last run, or since boot. You can focus the output of these reports on specific devices by using the parameters defined in the iostat (1) manual page. For detailed information on the await value and what can cause its values to be high, see the following Red Hat Knowledgebase article: What exactly is the meaning of value "await" reported by iostat? | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/performance_tuning_guide/sect-red_hat_enterprise_linux-performance_tuning_guide-performance_monitoring_tools-iostat |
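A typical invocation focuses the report on one device and repeats it at an interval. A minimal sketch, assuming the sysstat package is installed and that the device name and interval are examples:

```bash
# Extended device statistics (-x), devices only (-d), values in MB/s (-m),
# for device sda, sampled every 5 seconds, 3 reports in total.
# The await column shows the average time requests wait to be served.
iostat -dxm sda 5 3
```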
Chapter 9. ConsoleYAMLSample [console.openshift.io/v1] | Chapter 9. ConsoleYAMLSample [console.openshift.io/v1] Description ConsoleYAMLSample is an extension for customizing OpenShift web console YAML samples. Compatibility level 2: Stable within a major release for a minimum of 9 months or 3 minor releases (whichever is longer). Type object Required metadata spec 9.1. Specification Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ObjectMeta Standard object's metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata spec object ConsoleYAMLSampleSpec is the desired YAML sample configuration. Samples will appear with their descriptions in a samples sidebar when creating a resources in the web console. 9.1.1. .spec Description ConsoleYAMLSampleSpec is the desired YAML sample configuration. Samples will appear with their descriptions in a samples sidebar when creating a resources in the web console. Type object Required description targetResource title yaml Property Type Description description string description of the YAML sample. snippet boolean snippet indicates that the YAML sample is not the full YAML resource definition, but a fragment that can be inserted into the existing YAML document at the user's cursor. targetResource object targetResource contains apiVersion and kind of the resource YAML sample is representating. title string title of the YAML sample. yaml string yaml is the YAML sample to display. 9.1.2. .spec.targetResource Description targetResource contains apiVersion and kind of the resource YAML sample is representating. Type object Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds 9.2. API endpoints The following API endpoints are available: /apis/console.openshift.io/v1/consoleyamlsamples DELETE : delete collection of ConsoleYAMLSample GET : list objects of kind ConsoleYAMLSample POST : create a ConsoleYAMLSample /apis/console.openshift.io/v1/consoleyamlsamples/{name} DELETE : delete a ConsoleYAMLSample GET : read the specified ConsoleYAMLSample PATCH : partially update the specified ConsoleYAMLSample PUT : replace the specified ConsoleYAMLSample 9.2.1. /apis/console.openshift.io/v1/consoleyamlsamples HTTP method DELETE Description delete collection of ConsoleYAMLSample Table 9.1. 
HTTP responses HTTP code Reponse body 200 - OK Status schema 401 - Unauthorized Empty HTTP method GET Description list objects of kind ConsoleYAMLSample Table 9.2. HTTP responses HTTP code Reponse body 200 - OK ConsoleYAMLSampleList schema 401 - Unauthorized Empty HTTP method POST Description create a ConsoleYAMLSample Table 9.3. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 9.4. Body parameters Parameter Type Description body ConsoleYAMLSample schema Table 9.5. HTTP responses HTTP code Reponse body 200 - OK ConsoleYAMLSample schema 201 - Created ConsoleYAMLSample schema 202 - Accepted ConsoleYAMLSample schema 401 - Unauthorized Empty 9.2.2. /apis/console.openshift.io/v1/consoleyamlsamples/{name} Table 9.6. Global path parameters Parameter Type Description name string name of the ConsoleYAMLSample HTTP method DELETE Description delete a ConsoleYAMLSample Table 9.7. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed Table 9.8. HTTP responses HTTP code Reponse body 200 - OK Status schema 202 - Accepted Status schema 401 - Unauthorized Empty HTTP method GET Description read the specified ConsoleYAMLSample Table 9.9. HTTP responses HTTP code Reponse body 200 - OK ConsoleYAMLSample schema 401 - Unauthorized Empty HTTP method PATCH Description partially update the specified ConsoleYAMLSample Table 9.10. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. 
- Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 9.11. HTTP responses HTTP code Reponse body 200 - OK ConsoleYAMLSample schema 401 - Unauthorized Empty HTTP method PUT Description replace the specified ConsoleYAMLSample Table 9.12. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 9.13. Body parameters Parameter Type Description body ConsoleYAMLSample schema Table 9.14. HTTP responses HTTP code Reponse body 200 - OK ConsoleYAMLSample schema 201 - Created ConsoleYAMLSample schema 401 - Unauthorized Empty | null | https://docs.redhat.com/en/documentation/openshift_container_platform/4.16/html/console_apis/consoleyamlsample-console-openshift-io-v1 |
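Putting the spec fields together, a complete custom resource looks like the following minimal sketch; the sample Deployment, its image, and the object name are illustrative:

```yaml
apiVersion: console.openshift.io/v1
kind: ConsoleYAMLSample
metadata:
  name: example-deployment-sample
spec:
  title: Example Deployment
  description: A starting point for a minimal Deployment.
  snippet: false
  targetResource:
    apiVersion: apps/v1
    kind: Deployment
  yaml: |
    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: example
    spec:
      replicas: 1
      selector:
        matchLabels:
          app: example
      template:
        metadata:
          labels:
            app: example
        spec:
          containers:
          - name: app
            image: registry.example.com/app:latest
```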
Chapter 4. Reviewing inventories with automation content navigator | Chapter 4. Reviewing inventories with automation content navigator As a content creator, you can review your Ansible inventory with automation content navigator and interactively delve into the groups and hosts. 4.1. Reviewing inventory from automation content navigator You can review Ansible inventories with the automation content navigator text-based user interface in interactive mode and delve into groups and hosts for more details. Prerequisites A valid inventory file or an inventory plugin. Procedure Start automation content navigator. USD ansible-navigator Optional: type ansible-navigator inventory -i simple_inventory.yml from the command line to view the inventory. Review the inventory. :inventory -i simple_inventory.yml TITLE DESCRIPTION 0│Browse groups Explore each inventory group and group members members 1│Browse hosts Explore the inventory with a list of all hosts Type 0 to browse the groups. NAME TAXONOMY TYPE 0│general all group 1│nodes all group 2│ungrouped all group The TAXONOMY field details the hierarchy of groups the selected group or node belongs to. Type the number corresponding to the group you want to delve into. NAME TAXONOMY TYPE 0│node-0 all▸nodes host 1│node-1 all▸nodes host 2│node-2 all▸nodes host Type the number corresponding to the host you want to delve into, or type :<number> for numbers greater than 9. [node-1] 0│--- 1│ansible_host: node-1.example.com 2│inventory_hostname: node-1 Verification Review the inventory output. TITLE DESCRIPTION 0│Browse groups Explore each inventory group and group members members 1│Browse hosts Explore the inventory with a list of all hosts Additional resources ansible-inventory . How to build your inventory . | [
"ansible-navigator",
":inventory -i simple_inventory.yml TITLE DESCRIPTION 0│Browse groups Explore each inventory group and group members members 1│Browse hosts Explore the inventory with a list of all hosts",
"NAME TAXONOMY TYPE 0│general all group 1│nodes all group 2│ungrouped all group",
"NAME TAXONOMY TYPE 0│node-0 all▸nodes host 1│node-1 all▸nodes host 2│node-2 all▸nodes host",
"[node-1] 0│--- 1│ansible_host: node-1.example.com 2│inventory_hostname: node-1",
"TITLE DESCRIPTION 0│Browse groups Explore each inventory group and group members members 1│Browse hosts Explore the inventory with a list of all hosts"
]
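For reference, an inventory file that produces groups and hosts like the ones shown above could look like the following sketch; the general group membership and the hostnames are illustrative, reconstructed from the navigator output rather than taken from the original document:

```yaml
# simple_inventory.yml
general:
  hosts:
    node-0:
nodes:
  hosts:
    node-0:
      ansible_host: node-0.example.com
    node-1:
      ansible_host: node-1.example.com
    node-2:
      ansible_host: node-2.example.com
```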
| https://docs.redhat.com/en/documentation/red_hat_ansible_automation_platform/2.5/html/using_content_navigator/assembly-review-inventory-navigator_ansible-navigator |
Chapter 4. About the Job Explorer | Chapter 4. About the Job Explorer The Job Explorer provides a detailed view of jobs run on automation controller clusters across your organizations. You can access the Job Explorer by selecting Automation Analytics Job Explorer from the navigation panel or using the drill-down view available across each of the application's charts. Using the Job Explorer you can: Filter the types of jobs running in a cluster or organization; Directly link out to templates on automation controller for further assessment; Identify and review job failures; View more details for top templates running on a cluster; Filter out nested workflows and jobs. You can review the features and details of the Job Explorer in the following sections. 4.1. Creating a filtered and sorted view of jobs You can view a list of jobs, filtered by attributes you choose, using the Job Explorer . Filter options include: Status Job Cluster Organization Inventory Template You can sort results by any of the parameters from each column using the directional arrows. Procedure From the navigation panel, select Automation Analytics Job Explorer . In the filter toolbar, select Job from the Filter by list. In that same toolbar, select a time range. Job Explorer will now display jobs within that time range. To further refine results, return to the filter toolbar and select a different attribute to filter results by, including job status, cluster, or organization. The Job Explorer view updates and presents a list of jobs based on the attributes you selected. 4.1.1. Viewing more information about an individual job You can click on the arrow icon to the job Id/Name column to view more details related to that job. 4.1.2. Reviewing job details on automation controller Click the job in the Id/Name column to view the job itself on the automation controller job details page. For more information on job settings for automation controller, see Jobs in automation controller in the Using automation execution . 4.2. Drilling down into cluster data You can drill down into cluster data to review more detailed information about successful or failed jobs. The detailed view, presented on the Job Explorer page, provides information on the cluster, organization, template, and job type. Filters you select on the Clusters view carry over to the Job Explorer page. Details on those job templates will appear in the Job Explorer view, modified by any filters you select in the Clusters view. For example, you can drill down to review details for failed jobs in a cluster. See below to learn more. 4.2.1. Example: Reviewing failed jobs You can view more detail about failed jobs across your organization by drilling down on the graph on the Cluster view and using the Job Explorer to refine results. Clicking on a specific portion in a graph will open that information in the Job Explorer , preserving contextual information created when using filters on the Clusters view. Procedure From the navigation panel, select Automation Analytics Clusters . Using the filter lists in the toolbar, you can apply filters for clusters and time range of your choosing. Click on a segment on the graph. You are redirected to the Job Explorer view, and presented with a list of successful and failed jobs corresponding to that day on the bar graph. To view only failed jobs: Select Status from the Filter by list. Select the Failed filter. The view is updated to show only failed jobs run on that day. 
Add additional context to the view by applying additional filters and selecting attributes to sort results. Link out and review more information for failed jobs on the automation controller job details page. 4.3. Viewing top templates job details for a specific cluster You can view job instances for top templates in a cluster to learn more about individual job runs associated with that template or to apply filters to further drill down into the data. Procedure From the navigation panel, select Automation Analytics Clusters . Click on a template name in Top Templates . Click View all jobs in the modal that appears. The Job Explorer page displays all jobs on the chosen cluster associated with that template. The view presented will preserve the contextual information of the template based on the parameters selected in the Clusters view. 4.4. Ignoring nested workflows and jobs Select the settings icon on the Job Explorer view and use the toggle switch to Ignore nested workflows and jobs . This option filters out duplicate workflow and job template entries and excludes those items from overall totals. Note About nested workflows Nested workflows allow you to create workflow job templates that call other workflow job templates. Nested workflows promote the reuse of workflows as modular components that include existing business logic and organizational requirements in automating complex processes and operations. To learn more about nested workflows, see Workflows in automation controller in the Using automation execution . | null | https://docs.redhat.com/en/documentation/red_hat_ansible_automation_platform/2.5/html/using_automation_analytics/assembly-using-job-explorer |
Chapter 4. Important update on odo | Chapter 4. Important update on odo Red Hat does not provide information about odo on the OpenShift Container Platform documentation site. See the documentation maintained by Red Hat and the upstream community for documentation information related to odo . Important For the materials maintained by the upstream community, Red Hat provides support under Cooperative Community Support . | null | https://docs.redhat.com/en/documentation/openshift_container_platform/4.18/html/cli_tools/developer-cli-odo |
Chapter 2. Acknowledgments | Chapter 2. Acknowledgments Red Hat Ceph Storage version 4.3 contains many contributions from the Red Hat Ceph Storage team. In addition, the Ceph project is seeing amazing growth in the quality and quantity of contributions from individuals and organizations in the Ceph community. We would like to thank all members of the Red Hat Ceph Storage team, all of the individual contributors in the Ceph community, and additionally, but not limited to, the contributions from organizations such as: Intel(R) Fujitsu (R) UnitedStack Yahoo TM Ubuntu Kylin Mellanox (R) CERN TM Deutsche Telekom Mirantis (R) SanDisk TM SUSE | null | https://docs.redhat.com/en/documentation/red_hat_ceph_storage/4/html/release_notes/acknowledgments |
Authorization Services Guide | Authorization Services Guide Red Hat build of Keycloak 24.0 Red Hat Customer Content Services | null | https://docs.redhat.com/en/documentation/red_hat_build_of_keycloak/24.0/html/authorization_services_guide/index |
5.2.2. Securing Connectivity to Storage | 5.2.2. Securing Connectivity to Storage You can connect virtualized systems to networked storage in many different ways. Each approach presents different security benefits and concerns, however the same security principles apply to each: authenticate the remote store pool before use, and protect the confidentiality and integrity of the data while it is being transferred. The data must also remain secure while it is stored. Before storing, Red Hat recommends data be encrypted or digitally signed, or both. Note For more information on networked storage, refer to the Red Hat Enterprise Linux Virtualization Administration Guide . | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/virtualization_security_guide/sect-virtualization_security_guide-network_security_in_a_virtualized_environment-network_security_recommended_practices-securing_connectivity_to_storage |
Chapter 7. Networking | Chapter 7. Networking Troubleshoot networking issues. 7.1. Issue - The default subnet used in Ansible Automation Platform containers conflicts with the internal network The default subnet used in Ansible Automation Platform containers conflicts with the internal network, resulting in "No route to host" errors. To resolve this issue, update the default classless inter-domain routing (CIDR) value so it does not conflict with the CIDR used by the default Podman networking plugin. Procedure On all controller and hybrid nodes, run the following commands to create a file called custom.py : # touch /etc/tower/conf.d/custom.py # chmod 640 /etc/tower/conf.d/custom.py # chown root:awx /etc/tower/conf.d/custom.py Add the following to the /etc/tower/conf.d/custom.py file: DEFAULT_CONTAINER_RUN_OPTIONS = ['--network', 'slirp4netns:enable_ipv6=true,cidr=192.0.2.0/24'] 192.0.2.0/24 is the value for the new CIDR in this example. Stop and start the automation controller service on all controller and hybrid nodes: # automation-controller-service stop # automation-controller-service start All containers will start on the new CIDR. | [
"touch /etc/tower/conf.d/custom.py",
"chmod 640 /etc/tower/conf.d/custom.py",
"chown root:awx /etc/tower/conf.d/custom.py",
"DEFAULT_CONTAINER_RUN_OPTIONS = ['--network', 'slirp4netns:enable_ipv6=true,cidr=192.0.2.0/24']",
"automation-controller-service stop",
"automation-controller-service start"
]
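The procedure above can be run as one block on each controller and hybrid node. The following sketch simply strings the documented commands together; keep 192.0.2.0/24 only if it does not collide with your internal networks:

```bash
# Create the drop-in settings file with restrictive permissions.
touch /etc/tower/conf.d/custom.py
chmod 640 /etc/tower/conf.d/custom.py
chown root:awx /etc/tower/conf.d/custom.py

# Point Podman-backed job containers at a non-conflicting CIDR.
cat >> /etc/tower/conf.d/custom.py <<'EOF'
DEFAULT_CONTAINER_RUN_OPTIONS = ['--network', 'slirp4netns:enable_ipv6=true,cidr=192.0.2.0/24']
EOF

# Restart the automation controller services so the new CIDR takes effect.
automation-controller-service stop
automation-controller-service start
```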
| https://docs.redhat.com/en/documentation/red_hat_ansible_automation_platform/2.5/html/troubleshooting_ansible_automation_platform/troubleshoot-networking |
Chapter 4. Mapping Domain Objects to the Index Structure | Chapter 4. Mapping Domain Objects to the Index Structure 4.1. Basic Mapping In Red Hat JBoss Data Grid, the identifier for all @Indexed objects is the key used to store the value. How the key is indexed can still be customized by using a combination of @Transformable , @ProvidedId , custom types and custom FieldBridge implementations. The @DocumentId identifier does not apply to JBoss Data Grid values. The Lucene-based Query API uses the following common annotations to map entities: @Indexed @Field @NumericField 4.1.1. @Indexed The @Indexed annotation declares a cached entry indexable. All entries not annotated with @Indexed are ignored. Example 4.1. Making a class indexable with @Indexed Optionally, specify the index attribute of the @Indexed annotation to change the default name of the index. 4.1.2. @Field Each property or attribute of an entity can be indexed. Properties and attributes are not annotated by default, and therefore are ignored by the indexing process. The @Field annotation declares a property as indexed and allows the configuration of several aspects of the indexing process by setting one or more of the following attributes: name The name under which the property will be stored in the Lucene Document. By default, this attribute is the same as the property name, following the JavaBeans convention. store Specifies if the property is stored in the Lucene index. When a property is stored it can be retrieved in its original value from the Lucene Document. This is regardless of whether or not the element is indexed. Valid options are: Store.YES : Consumes more index space but allows projection. See Section 5.1.3.4, "Projection" Store.COMPRESS : Stores the property as compressed. This attribute consumes more CPU. Store.NO : No storage. This is the default setting for the store attribute. index Describes if property is indexed or not. The following values are applicable: Index.NO : No indexing is applied; cannot be found by querying. This setting is used for properties that are not required to be searchable, but are able to be projected. Index.YES : The element is indexed and is searchable. This is the default setting for the index attribute. analyze Determines if the property is analyzed. The analyze attribute allows a property to be searched by its contents. For example, it may be worthwhile to analyze a text field, whereas a date field does not need to be analyzed. Enable or disable the Analyze attribute using the following: Analyze.YES Analyze.NO The analyze attribute is enabled by default. The Analyze.YES setting requires the property to be indexed via the Index.YES attribute. The following attributes are used for sorting, and must not be analyzed. norms Determines whether or not to store index time boosting information. Valid settings are: Norms.YES Norms.NO The default for this attribute is Norms.YES . Disabling norms conserves memory, however no index time boosting information will be available. termVector Describes collections of term-frequency pairs. This attribute enables the storing of the term vectors within the documents during indexing. The default value is TermVector.NO . Available settings for this attribute are: TermVector.YES : Stores the term vectors of each document. This produces two synchronized arrays, one contains document terms and the other contains the term's frequency.
TermVector.NO : Does not store term vectors. TermVector.WITH_OFFSETS : Stores the term vector and token offset information. This is the same as TermVector.YES plus it contains the starting and ending offset position information for the terms. TermVector.WITH_POSITIONS : Stores the term vector and token position information. This is the same as TermVector.YES plus it contains the ordinal positions of each occurrence of a term in a document. TermVector.WITH_POSITION_OFFSETS : Stores the term vector, token position and offset information. This is a combination of the YES , WITH_OFFSETS , and WITH_POSITIONS . indexNullAs By default, null values are ignored and not indexed. However, using indexNullAs permits specification of a string to be inserted as a token for the null value. When using the indexNullAs parameter, use the same token in the search query to search for the null value. Use this feature only with Analyze.NO . Valid settings for this attribute are: Field.DO_NOT_INDEX_NULL : This is the default value for this attribute. This setting indicates that null values will not be indexed. Field.DEFAULT_NULL_TOKEN : Indicates that a default null token is used. This default null token can be specified in the configuration using the default_null_token property. If this property is not set and Field.DEFAULT_NULL_TOKEN is specified, the string "_null_" will be used as default. Warning When implementing a custom FieldBridge or TwoWayFieldBridge it is up to the developer to handle the indexing of null values (see JavaDocs of LuceneOptions.indexNullAs() ). 4.1.3. @NumericField The @NumericField annotation can be specified in the same scope as @Field . The @NumericField annotation can be specified for Integer, Long, Float, and Double properties. At index time the value will be indexed using a Trie structure. When a property is indexed as a numeric field, it enables efficient range queries and sorting, orders of magnitude faster than doing the same query on standard @Field properties. The @NumericField annotation accepts the following optional parameters: forField : Specifies the name of the related @Field that will be indexed as numeric. It is mandatory when a property contains more than one @Field declaration. precisionStep : Changes the way that the Trie structure is stored in the index. Smaller precisionSteps lead to more disk space usage, and faster range and sort queries. Larger values lead to less space used, and range query performance closer to the range query in normal @Fields . The default value for precisionStep is 4. @NumericField supports only Double , Long , Integer , and Float . It is not possible to take any advantage from a similar functionality in Lucene for the other numeric types, therefore remaining types must use the string encoding via the default or custom TwoWayFieldBridge . A custom NumericFieldBridge can also be used. Custom configurations require approximation during type transformation. The following example defines a custom NumericFieldBridge . Example 4.2. Defining a custom NumericFieldBridge | [
"@Indexed public class Essay { }",
"public class BigDecimalNumericFieldBridge extends NumericFieldBridge { private static final BigDecimal storeFactor = BigDecimal.valueOf(100); @Override public void set(String name, Object value, Document document, LuceneOptions luceneOptions) { if (value != null) { BigDecimal decimalValue = (BigDecimal) value; Long indexedValue = Long.valueOf( decimalValue .multiply(storeFactor) .longValue()); luceneOptions.addNumericFieldToDocument(name, indexedValue, document); } } @Override public Object get(String name, Document document) { String fromLucene = document.get(name); BigDecimal storedBigDecimal = new BigDecimal(fromLucene); return storedBigDecimal.divide(storeFactor); } }"
]
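Bringing the annotations in this chapter together, the following Java sketch marks a cached value type as indexable; the class and field names are illustrative and are not part of the original examples:

```java
import org.hibernate.search.annotations.Analyze;
import org.hibernate.search.annotations.Field;
import org.hibernate.search.annotations.Indexed;
import org.hibernate.search.annotations.NumericField;
import org.hibernate.search.annotations.Store;

@Indexed
public class Essay {

    // Stored so it can be projected, and analyzed so it is searchable by content.
    @Field(store = Store.YES, analyze = Analyze.YES)
    private String title;

    // Not analyzed, which makes the field usable for sorting and exact matches.
    @Field(analyze = Analyze.NO)
    private String author;

    // Indexed with a Trie structure for fast range queries and sorting.
    @Field
    @NumericField
    private Long wordCount;
}
```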
| https://docs.redhat.com/en/documentation/red_hat_data_grid/6.6/html/infinispan_query_guide/chap-mapping_domain_objects_to_the_index_structure |
Chapter 3. Preparing Storage for Red Hat Virtualization | Chapter 3. Preparing Storage for Red Hat Virtualization You need to prepare storage to be used for storage domains in the new environment. A Red Hat Virtualization environment must have at least one data storage domain, but adding more is recommended. Warning When installing or reinstalling the host's operating system, Red Hat strongly recommends that you first detach any existing non-OS storage that is attached to the host to avoid accidental initialization of these disks, and with that, potential data loss. A data domain holds the virtual hard disks and OVF files of all the virtual machines and templates in a data center, and cannot be shared across data centers while active (but can be migrated between data centers). Data domains of multiple storage types can be added to the same data center, provided they are all shared, rather than local, domains. You can use one of the following storage types: NFS iSCSI Fibre Channel (FCP) Red Hat Gluster Storage Prerequisites Self-hosted engines must have an additional data domain with at least 74 GiB dedicated to the Manager virtual machine. The self-hosted engine installer creates this domain. Prepare the storage for this domain before installation. Warning Extending or otherwise changing the self-hosted engine storage domain after deployment of the self-hosted engine is not supported. Any such change might prevent the self-hosted engine from booting. When using a block storage domain, either FCP or iSCSI, a single target LUN is the only supported setup for a self-hosted engine. If you use iSCSI storage, the self-hosted engine storage domain must use a dedicated iSCSI target. Any additional storage domains must use a different iSCSI target. It is strongly recommended to create additional data storage domains in the same data center as the self-hosted engine storage domain. If you deploy the self-hosted engine in a data center with only one active data storage domain, and that storage domain is corrupted, you cannot add new storage domains or remove the corrupted storage domain. You must redeploy the self-hosted engine. 3.1. Preparing NFS Storage Set up NFS shares on your file storage or remote server to serve as storage domains on Red Hat Enterprise Virtualization Host systems. After exporting the shares on the remote storage and configuring them in the Red Hat Virtualization Manager, the shares will be automatically imported on the Red Hat Virtualization hosts. For information on setting up, configuring, mounting and exporting NFS, see Managing file systems for Red Hat Enterprise Linux 8. Specific system user accounts and system user groups are required by Red Hat Virtualization so the Manager can store data in the storage domains represented by the exported directories. The following procedure sets the permissions for one directory. You must repeat the chown and chmod steps for all of the directories you intend to use as storage domains in Red Hat Virtualization. Prerequisites Install the NFS utils package. # dnf install nfs-utils -y To check the enabled versions: # cat /proc/fs/nfsd/versions Enable the following services: # systemctl enable nfs-server # systemctl enable rpcbind Procedure Create the group kvm : # groupadd kvm -g 36 Create the user vdsm in the group kvm : # useradd vdsm -u 36 -g kvm Create the storage directory and modify the access rights. Add the storage directory to /etc/exports with the relevant permissions. 
# vi /etc/exports # cat /etc/exports /storage *(rw) Restart the following services: # systemctl restart rpcbind # systemctl restart nfs-server To see which export are available for a specific IP address: # exportfs /nfs_server/srv 10.46.11.3/24 /nfs_server <world> Note If changes in /etc/exports have been made after starting the services, the exportfs -ra command can be used to reload the changes. After performing all the above stages, the exports directory should be ready and can be tested on a different host to check that it is usable. 3.2. Preparing iSCSI Storage Red Hat Virtualization supports iSCSI storage, which is a storage domain created from a volume group made up of LUNs. Volume groups and LUNs cannot be attached to more than one storage domain at a time. For information on setting up and configuring iSCSI storage, see Configuring an iSCSI target in Managing storage devices for Red Hat Enterprise Linux 8. Important If you are using block storage and intend to deploy virtual machines on raw devices or direct LUNs and manage them with the Logical Volume Manager (LVM), you must create a filter to hide guest logical volumes. This will prevent guest logical volumes from being activated when the host is booted, a situation that could lead to stale logical volumes and cause data corruption. Use the vdsm-tool config-lvm-filter command to create filters for the LVM. See Creating an LVM filter Important Red Hat Virtualization currently does not support block storage with a block size of 4K. You must configure block storage in legacy (512b block) mode. Important If your host is booting from SAN storage and loses connectivity to the storage, the storage file systems become read-only and remain in this state after connectivity is restored. To prevent this situation, add a drop-in multipath configuration file on the root file system of the SAN for the boot LUN to ensure that it is queued when there is a connection: # cat /etc/multipath/conf.d/host.conf multipaths { multipath { wwid boot_LUN_wwid no_path_retry queue } 3.3. Preparing FCP Storage Red Hat Virtualization supports SAN storage by creating a storage domain from a volume group made of pre-existing LUNs. Neither volume groups nor LUNs can be attached to more than one storage domain at a time. Red Hat Virtualization system administrators need a working knowledge of Storage Area Networks (SAN) concepts. SAN usually uses Fibre Channel Protocol (FCP) for traffic between hosts and shared external storage. For this reason, SAN may occasionally be referred to as FCP storage. For information on setting up and configuring FCP or multipathing on Red Hat Enterprise Linux, see the Storage Administration Guide and DM Multipath Guide . Important If you are using block storage and intend to deploy virtual machines on raw devices or direct LUNs and manage them with the Logical Volume Manager (LVM), you must create a filter to hide guest logical volumes. This will prevent guest logical volumes from being activated when the host is booted, a situation that could lead to stale logical volumes and cause data corruption. Use the vdsm-tool config-lvm-filter command to create filters for the LVM. See Creating an LVM filter Important Red Hat Virtualization currently does not support block storage with a block size of 4K. You must configure block storage in legacy (512b block) mode. 
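The guide defers iSCSI target setup to the Red Hat Enterprise Linux 8 storage documentation. As a rough sketch only, discovering and logging in to a target from a host could look like the following; the portal address and IQN are hypothetical placeholders, not values taken from this guide:

# Discover targets exposed by the storage server (portal IP is a placeholder)
iscsiadm -m discovery -t sendtargets -p 192.0.2.10

# Log in to the discovered target (IQN is a placeholder)
iscsiadm -m node -T iqn.2024-01.com.example:rhv-data -p 192.0.2.10 --login

# The LUN should now appear as a multipath device on the host
multipath -ll
lsblk

After the session is established, the LUN shows up as a multipath device and can be selected when the iSCSI storage domain is created.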
Important If your host is booting from SAN storage and loses connectivity to the storage, the storage file systems become read-only and remain in this state after connectivity is restored. To prevent this situation, add a drop-in multipath configuration file on the root file system of the SAN for the boot LUN to ensure that it is queued when there is a connection: # cat /etc/multipath/conf.d/host.conf multipaths { multipath { wwid boot_LUN_wwid no_path_retry queue } } 3.4. Preparing Red Hat Gluster Storage For information on setting up and configuring Red Hat Gluster Storage, see the Red Hat Gluster Storage Installation Guide . For the Red Hat Gluster Storage versions that are supported with Red Hat Virtualization, see Red Hat Gluster Storage Version Compatibility and Support . 3.5. Customizing Multipath Configurations for SAN Vendors If your RHV environment is configured to use multipath connections with SANs, you can customize the multipath configuration settings to meet requirements specified by your storage vendor. These customizations can override both the default settings and settings that are specified in /etc/multipath.conf . To override the multipath settings, do not customize /etc/multipath.conf . Because VDSM owns /etc/multipath.conf , installing or upgrading VDSM or Red Hat Virtualization can overwrite this file including any customizations it contains. This overwriting can cause severe storage failures. Instead, you create a file in the /etc/multipath/conf.d directory that contains the settings you want to customize or override. VDSM executes the files in /etc/multipath/conf.d in alphabetical order. So, to control the order of execution, you begin the filename with a number that makes it come last. For example, /etc/multipath/conf.d/90-myfile.conf . To avoid causing severe storage failures, follow these guidelines: Do not modify /etc/multipath.conf . If the file contains user modifications, and the file is overwritten, it can cause unexpected storage problems. Warning Not following these guidelines can cause catastrophic storage errors. Prerequisites VDSM is configured to use the multipath module. To verify this, enter: Procedure Create a new configuration file in the /etc/multipath/conf.d directory. Copy the individual setting you want to override from /etc/multipath.conf to the new configuration file in /etc/multipath/conf.d/<my_device>.conf . Remove any comment marks, edit the setting values, and save your changes. Apply the new configuration settings by entering: Note Do not restart the multipathd service. Doing so generates errors in the VDSM logs. Verification steps Test that the new configuration performs as expected on a non-production cluster in a variety of failure scenarios. For example, disable all of the storage connections. Enable one connection at a time and verify that doing so makes the storage domain reachable. Additional resources Recommended Settings for Multipath.conf Red Hat Enterprise Linux DM Multipath Configuring iSCSI Multipathing How do I customize /etc/multipath.conf on my RHVH hypervisors? What values must not change and why? 3.6. Recommended Settings for Multipath.conf Do not override the following settings: user_friendly_names no Device names must be consistent across all hypervisors. For example, /dev/mapper/{WWID} . The default value of this setting, no , prevents the assignment of arbitrary and inconsistent device names such as /dev/mapper/mpath{N} on various hypervisors, which can lead to unpredictable system behavior. 
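Before creating the FCP storage domain, it can help to confirm that the host actually sees the LUNs zoned to it. A minimal check, assuming the HBAs are cabled and multipathd is running; device names and WWIDs will differ per environment:

# Check that the Fibre Channel HBAs report an online link
cat /sys/class/fc_host/host*/port_state

# List the multipath devices (WWIDs) presented by the SAN
multipath -ll

# Cross-check device names, sizes, and WWNs
lsblk -o NAME,SIZE,TYPE,WWN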
Warning Do not change this setting to user_friendly_names yes . User-friendly names are likely to cause unpredictable system behavior or failures, and are not supported. find_multipaths no This setting controls whether RHVH tries to access devices through multipath only if more than one path is available. The current value, no , allows RHV to access devices through multipath even if only one path is available. Warning Do not override this setting. Avoid overriding the following settings unless required by the storage system vendor: no_path_retry 4 This setting controls the number of polling attempts to retry when no paths are available. Before RHV version 4.2, the value of no_path_retry was fail because QEMU had trouble with the I/O queuing when no paths were available. The fail value made it fail quickly and paused the virtual machine. RHV version 4.2 changed this value to 4 so when multipathd detects the last path has failed, it checks all of the paths four more times. Assuming the default 5-second polling interval, checking the paths takes 20 seconds. If no path is up, multipathd tells the kernel to stop queuing and fails all outstanding and future I/O until a path is restored. When a path is restored, the 20-second delay is reset for the time all paths fail. For more details, see the commit that changed this setting . polling_interval 5 This setting determines the number of seconds between polling attempts to detect whether a path is open or has failed. Unless the vendor provides a clear reason for increasing the value, keep the VDSM-generated default so the system responds to path failures sooner. Before backing up the Manager, ensure it is updated to the latest minor version. The Manager version in the backup file must match the version of the new Manager. | [
"dnf install nfs-utils -y",
"cat /proc/fs/nfsd/versions",
"systemctl enable nfs-server systemctl enable rpcbind",
"groupadd kvm -g 36",
"useradd vdsm -u 36 -g kvm",
"mkdir /storage chmod 0755 /storage chown 36:36 /storage/",
"vi /etc/exports cat /etc/exports /storage *(rw)",
"systemctl restart rpcbind systemctl restart nfs-server",
"exportfs /nfs_server/srv 10.46.11.3/24 /nfs_server <world>",
"cat /etc/multipath/conf.d/host.conf multipaths { multipath { wwid boot_LUN_wwid no_path_retry queue }",
"cat /etc/multipath/conf.d/host.conf multipaths { multipath { wwid boot_LUN_wwid no_path_retry queue } }",
"vdsm-tool is-configured --module multipath",
"systemctl reload multipathd"
]
| https://docs.redhat.com/en/documentation/red_hat_virtualization/4.4/html/migrating_from_a_standalone_manager_to_a_self-hosted_engine/Preparing_Storage_for_RHV_migrating_to_SHE |
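As an illustration of the drop-in approach described in section 3.5 above, the following sketch adds a vendor-specific override without editing /etc/multipath.conf. The vendor and product strings, the file name, and the no_path_retry value are placeholders; per section 3.6, only override no_path_retry if your storage vendor requires it.

# Create a vendor-specific override instead of editing /etc/multipath.conf
cat > /etc/multipath/conf.d/90-example-array.conf <<'EOF'
devices {
    device {
        vendor        "EXAMPLEVENDOR"
        product       "EXAMPLEARRAY"
        no_path_retry 12
    }
}
EOF

# Confirm VDSM manages multipath, then apply the change without restarting multipathd
vdsm-tool is-configured --module multipath
systemctl reload multipathd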
Chapter 29. domain | Chapter 29. domain This chapter describes the commands under the domain command. 29.1. domain create Create new domain Usage: Table 29.1. Positional Arguments Value Summary <domain-name> New domain name Table 29.2. Optional Arguments Value Summary -h, --help Show this help message and exit --description <description> New domain description --enable Enable domain (default) --disable Disable domain --or-show Return existing domain Table 29.3. Output Formatters Value Summary -f {json,shell,table,value,yaml}, --format {json,shell,table,value,yaml} The output format, defaults to table -c COLUMN, --column COLUMN Specify the column(s) to include, can be repeated Table 29.4. JSON Formatter Value Summary --noindent Whether to disable indenting the json Table 29.5. Shell Formatter Value Summary --prefix PREFIX Add a prefix to all variable names Table 29.6. Table Formatter Value Summary --max-width <integer> Maximum display width, <1 to disable. you can also use the CLIFF_MAX_TERM_WIDTH environment variable, but the parameter takes precedence. --fit-width Fit the table to the display width. implied if --max- width greater than 0. Set the environment variable CLIFF_FIT_WIDTH=1 to always enable --print-empty Print empty table if there is no data to show. 29.2. domain delete Delete domain(s) Usage: Table 29.7. Positional Arguments Value Summary <domain> Domain(s) to delete (name or id) Table 29.8. Optional Arguments Value Summary -h, --help Show this help message and exit 29.3. domain list List domains Usage: Table 29.9. Optional Arguments Value Summary -h, --help Show this help message and exit Table 29.10. Output Formatters Value Summary -f {csv,json,table,value,yaml}, --format {csv,json,table,value,yaml} The output format, defaults to table -c COLUMN, --column COLUMN Specify the column(s) to include, can be repeated --sort-column SORT_COLUMN Specify the column(s) to sort the data (columns specified first have a priority, non-existing columns are ignored), can be repeated Table 29.11. CSV Formatter Value Summary --quote {all,minimal,none,nonnumeric} When to include quotes, defaults to nonnumeric Table 29.12. JSON Formatter Value Summary --noindent Whether to disable indenting the json Table 29.13. Table Formatter Value Summary --max-width <integer> Maximum display width, <1 to disable. you can also use the CLIFF_MAX_TERM_WIDTH environment variable, but the parameter takes precedence. --fit-width Fit the table to the display width. implied if --max- width greater than 0. Set the environment variable CLIFF_FIT_WIDTH=1 to always enable --print-empty Print empty table if there is no data to show. 29.4. domain set Set domain properties Usage: Table 29.14. Positional Arguments Value Summary <domain> Domain to modify (name or id) Table 29.15. Optional Arguments Value Summary -h, --help Show this help message and exit --name <name> New domain name --description <description> New domain description --enable Enable domain --disable Disable domain 29.5. domain show Display domain details Usage: Table 29.16. Positional Arguments Value Summary <domain> Domain to display (name or id) Table 29.17. Optional Arguments Value Summary -h, --help Show this help message and exit Table 29.18. Output Formatters Value Summary -f {json,shell,table,value,yaml}, --format {json,shell,table,value,yaml} The output format, defaults to table -c COLUMN, --column COLUMN Specify the column(s) to include, can be repeated Table 29.19. 
JSON Formatter Value Summary --noindent Whether to disable indenting the json Table 29.20. Shell Formatter Value Summary --prefix PREFIX Add a prefix to all variable names Table 29.21. Table Formatter Value Summary --max-width <integer> Maximum display width, <1 to disable. you can also use the CLIFF_MAX_TERM_WIDTH environment variable, but the parameter takes precedence. --fit-width Fit the table to the display width. implied if --max- width greater than 0. Set the environment variable CLIFF_FIT_WIDTH=1 to always enable --print-empty Print empty table if there is no data to show. | [
"openstack domain create [-h] [-f {json,shell,table,value,yaml}] [-c COLUMN] [--noindent] [--prefix PREFIX] [--max-width <integer>] [--fit-width] [--print-empty] [--description <description>] [--enable | --disable] [--or-show] <domain-name>",
"openstack domain delete [-h] <domain> [<domain> ...]",
"openstack domain list [-h] [-f {csv,json,table,value,yaml}] [-c COLUMN] [--quote {all,minimal,none,nonnumeric}] [--noindent] [--max-width <integer>] [--fit-width] [--print-empty] [--sort-column SORT_COLUMN]",
"openstack domain set [-h] [--name <name>] [--description <description>] [--enable | --disable] <domain>",
"openstack domain show [-h] [-f {json,shell,table,value,yaml}] [-c COLUMN] [--noindent] [--prefix PREFIX] [--max-width <integer>] [--fit-width] [--print-empty] <domain>"
]
| https://docs.redhat.com/en/documentation/red_hat_openstack_platform/16.0/html/command_line_interface_reference/domain |
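A short usage sketch of the domain commands documented above; the domain name and descriptions are made up:

# Create a disabled domain with a description
openstack domain create --description "Finance department" --disable finance

# Review it, enable it, and update the description
openstack domain list
openstack domain set --enable --description "Finance department (production)" finance
openstack domain show finance

# Keystone only deletes disabled domains, so disable before deleting
openstack domain set --disable finance
openstack domain delete finance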
34.6. Configuring Maps | 34.6. Configuring Maps Configuring maps not only creates the maps, it associates mount points through the keys and it assigns mount options that should be used when the directory is accessed. IdM supports both direct and indirect maps. Note Different clients can use different map sets. Map sets use a tree structure, so maps cannot be shared between locations. Important Identity Management does not set up or configure autofs. That must be done separately. Identity Management works with an existing autofs deployment. 34.6.1. Configuring Direct Maps Direct maps define exact locations, meaning absolute paths, to the file mount point. In the location entry, a direct map is identified by the preceding forward slash: 34.6.1.1. Configuring Direct Maps from the Web UI Click the Policy tab. Click the Automount subtab. Click name of the automount location to which to add the map. In the Automount Maps tab, click the + Add link to create a new map. In pop-up window, select the Direct radio button and enter the name of the new map. In the Automount Keys tab, click the + Add link to create a new key for the map. Enter the mount point. The key defines the actual mount point in the key name. The Info field sets the network location of the directory, as well as any mount options to use. Click the Add button to save the new key. 34.6.1.2. Configuring Direct Maps from the Command Line The key defines the actual mount point (in the key name) and any options. A map is a direct or indirect map based on the format of its key. Each location is created with an auto.direct item. The simplest configuration is to define a direct mapping by adding an automount key to the existing direct map entry. It is also possible to create different direct map entries. Add the key for the direct map to the location's auto.direct file. The --key option identifies the mount point, and --info gives the network location of the directory, as well as any mount options to use. For example: Mount options are described in the mount manpage, http://linux.die.net/man/8/mount . On Solaris, add the direct map and key using the ldapclient command to add the LDAP entry directly: 34.6.2. Configuring Indirect Maps An indirect map essentially specifies a relative path for maps. A parent entry sets the base directory for all of the indirect maps. The indirect map key sets a sub directory; whenever the indirect map location is loaded, the key is appended to that base directory. For example, if the base directory is /docs and the key is man , then the map is /docs/man . 34.6.2.1. Configuring Indirect Maps from the Web UI Click the Policy tab. Click the Automount subtab. Click name of the automount location to which to add the map. In the Automount Maps tab, click the + Add link to create a new map. In pop-up window, select the Indirect radio button and enter the required information for the indirect map: The name of the new map The mount point. The Mount field sets the base directory to use for all the indirect map keys. Optionally, a parent map. The default parent is auto.master , but if another map exists which should be used, that can be specified in the Parent Map field. Click the Add button to save the new key. 34.6.2.2. Configuring Indirect Maps from the Command Line The primary difference between a direct map and an indirect map is that there is no forward slash in front of an indirect key. Create an indirect map to set the base entry using the automountmap-add-indirect command. 
The --mount option sets the base directory to use for all the indirect map keys. The default parent entry is auto.master , but if another map exists which should be used, that can be specified using the --parentmap option. For example: Add the indirect key for the mount location: To verify the configuration, check the location file list using automountlocation-tofiles : On Solaris, add the indirect map using the ldapclient command to add the LDAP entry directly: 34.6.3. Importing Automount Maps If there are existing automount maps, these can be imported into the IdM automount configuration. The only required information is the IdM automount location and the full path and name of the map file. The --continuous option tells the automountlocation-import command to continue through the map file, even if the command encounters errors. For example: | [
"--------------------------- /etc/auto.direct: /shared/man server.example.com:/shared/man",
"ipa automountkey-add raleigh auto.direct --key=/share --info=\"ro,soft,ipaserver.example.com:/home/share\" Key: /share Mount information: ro,soft,ipaserver.example.com:/home/share",
"ldapclient -a serviceSearchDescriptor=auto_direct:automountMapName=auto.direct,cn= location ,cn=automount,dc=example,dc=com?one",
"--------------------------- /etc/auto.share: man ipa.example.com:/docs/man ---------------------------",
"ipa automountmap-add-indirect location mapName --mount= directory [--parentmap= mapName ]",
"ipa automountmap-add-indirect raleigh auto.share --mount=/share -------------------------------- Added automount map \"auto.share\" --------------------------------",
"ipa automountkey-add raleigh auto.share --key=docs --info=\"ipa.example.com:/export/docs\" ------------------------- Added automount key \"docs\" ------------------------- Key: docs Mount information: ipa.example.com:/export/docs",
"ipa automountlocation-tofiles raleigh /etc/auto.master: /- /etc/auto.direct /share /etc/auto.share --------------------------- /etc/auto.direct: --------------------------- /etc/auto.share: man ipa.example.com:/export/docs",
"ldapclient -a serviceSearchDescriptor=auto_share:automountMapName=auto.share,cn= location ,cn=automount,dc=example,dc=com?one",
"ipa automountlocation-import location map_file [--continuous]",
"ipa automountlocation-import raleigh /etc/custom.map"
]
| https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/linux_domain_identity_authentication_and_policy_guide/configuring-maps |
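The chapter notes that IdM does not set up or configure autofs itself. On an enrolled client, one common way to point autofs and SSSD at an IdM automount location is the ipa-client-automount utility; a brief sketch, assuming the client is already enrolled and uses the raleigh location and the auto.share map from the examples above:

# Configure autofs and SSSD on the enrolled client to use the 'raleigh' location
ipa-client-automount --location=raleigh

# Accessing a key from the indirect map should trigger the mount
ls /share/docs
df -h /share/docs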
Chapter 11. Monitoring bare-metal events with the Bare Metal Event Relay | Chapter 11. Monitoring bare-metal events with the Bare Metal Event Relay Important Bare Metal Event Relay is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope . 11.1. About bare-metal events Use the Bare Metal Event Relay to subscribe applications that run in your OpenShift Container Platform cluster to events that are generated on the underlying bare-metal host. The Redfish service publishes events on a node and transmits them on an advanced message queue to subscribed applications. Bare-metal events are based on the open Redfish standard that is developed under the guidance of the Distributed Management Task Force (DMTF). Redfish provides a secure industry-standard protocol with a REST API. The protocol is used for the management of distributed, converged or software-defined resources and infrastructure. Hardware-related events published through Redfish includes: Breaches of temperature limits Server status Fan status Begin using bare-metal events by deploying the Bare Metal Event Relay Operator and subscribing your application to the service. The Bare Metal Event Relay Operator installs and manages the lifecycle of the Redfish bare-metal event service. Note The Bare Metal Event Relay works only with Redfish-capable devices on single-node clusters provisioned on bare-metal infrastructure. 11.2. How bare-metal events work The Bare Metal Event Relay enables applications running on bare-metal clusters to respond quickly to Redfish hardware changes and failures such as breaches of temperature thresholds, fan failure, disk loss, power outages, and memory failure. These hardware events are delivered over a reliable low-latency transport channel based on Advanced Message Queuing Protocol (AMQP). The latency of the messaging service is between 10 to 20 milliseconds. The Bare Metal Event Relay provides a publish-subscribe service for the hardware events, where multiple applications can use REST APIs to subscribe and consume the events. The Bare Metal Event Relay supports hardware that complies with Redfish OpenAPI v1.8 or higher. 11.2.1. Bare Metal Event Relay data flow The following figure illustrates an example of bare-metal events data flow: Figure 11.1. Bare Metal Event Relay data flow 11.2.1.1. Operator-managed pod The Operator uses custom resources to manage the pod containing the Bare Metal Event Relay and its components using the HardwareEvent CR. 11.2.1.2. Bare Metal Event Relay At startup, the Bare Metal Event Relay queries the Redfish API and downloads all the message registries, including custom registries. The Bare Metal Event Relay then begins to receive subscribed events from the Redfish hardware. The Bare Metal Event Relay enables applications running on bare-metal clusters to respond quickly to Redfish hardware changes and failures such as breaches of temperature thresholds, fan failure, disk loss, power outages, and memory failure. The events are reported using the HardwareEvent CR. 11.2.1.3. 
Cloud native event Cloud native events (CNE) is a REST API specification for defining the format of event data. 11.2.1.4. CNCF CloudEvents CloudEvents is a vendor-neutral specification developed by the Cloud Native Computing Foundation (CNCF) for defining the format of event data. 11.2.1.5. AMQP dispatch router The dispatch router is responsible for the message delivery service between publisher and subscriber. AMQP 1.0 qpid is an open standard that supports reliable, high-performance, fully-symmetrical messaging over the internet. 11.2.1.6. Cloud event proxy sidecar The cloud event proxy sidecar container image is based on the ORAN API specification and provides a publish-subscribe event framework for hardware events. 11.2.2. Redfish message parsing service In addition to handling Redfish events, the Bare Metal Event Relay provides message parsing for events without a Message property. The proxy downloads all the Redfish message registries including vendor specific registries from the hardware when it starts. If an event does not contain a Message property, the proxy uses the Redfish message registries to construct the Message and Resolution properties and add them to the event before passing the event to the cloud events framework. This service allows Redfish events to have smaller message size and lower transmission latency. 11.2.3. Installing the Bare Metal Event Relay using the CLI As a cluster administrator, you can install the Bare Metal Event Relay Operator by using the CLI. Prerequisites A cluster that is installed on bare-metal hardware with nodes that have a RedFish-enabled Baseboard Management Controller (BMC). Install the OpenShift CLI ( oc ). Log in as a user with cluster-admin privileges. Procedure Create a namespace for the Bare Metal Event Relay. Save the following YAML in the bare-metal-events-namespace.yaml file: apiVersion: v1 kind: Namespace metadata: name: openshift-bare-metal-events labels: name: openshift-bare-metal-events openshift.io/cluster-monitoring: "true" Create the Namespace CR: USD oc create -f bare-metal-events-namespace.yaml Create an Operator group for the Bare Metal Event Relay Operator. Save the following YAML in the bare-metal-events-operatorgroup.yaml file: apiVersion: operators.coreos.com/v1 kind: OperatorGroup metadata: name: bare-metal-event-relay-group namespace: openshift-bare-metal-events spec: targetNamespaces: - openshift-bare-metal-events Create the OperatorGroup CR: USD oc create -f bare-metal-events-operatorgroup.yaml Subscribe to the Bare Metal Event Relay. Save the following YAML in the bare-metal-events-sub.yaml file: apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: bare-metal-event-relay-subscription namespace: openshift-bare-metal-events spec: channel: "stable" name: bare-metal-event-relay source: redhat-operators sourceNamespace: openshift-marketplace Create the Subscription CR: USD oc create -f bare-metal-events-sub.yaml Verification To verify that the Bare Metal Event Relay Operator is installed, run the following command: USD oc get csv -n openshift-bare-metal-events -o custom-columns=Name:.metadata.name,Phase:.status.phase Example output Name Phase bare-metal-event-relay.4.10.0-202206301927 Succeeded 11.2.4. Installing the Bare Metal Event Relay using the web console As a cluster administrator, you can install the Bare Metal Event Relay Operator using the web console. 
Prerequisites A cluster that is installed on bare-metal hardware with nodes that have a RedFish-enabled Baseboard Management Controller (BMC). Log in as a user with cluster-admin privileges. Procedure Install the Bare Metal Event Relay using the OpenShift Container Platform web console: In the OpenShift Container Platform web console, click Operators OperatorHub . Choose Bare Metal Event Relay from the list of available Operators, and then click Install . On the Install Operator page, select or create a Namespace , select openshift-bare-metal-events , and then click Install . Verification Optional: You can verify that the Operator installed successfully by performing the following check: Switch to the Operators Installed Operators page. Ensure that Bare Metal Event Relay is listed in the project with a Status of InstallSucceeded . Note During installation an Operator might display a Failed status. If the installation later succeeds with an InstallSucceeded message, you can ignore the Failed message. If the Operator does not appear as installed, to troubleshoot further: Go to the Operators Installed Operators page and inspect the Operator Subscriptions and Install Plans tabs for any failure or errors under Status . Go to the Workloads Pods page and check the logs for pods in the project namespace. 11.3. Installing the AMQ messaging bus To pass Redfish bare-metal event notifications between publisher and subscriber on a node, you must install and configure an AMQ messaging bus to run locally on the node. You do this by installing the AMQ Interconnect Operator for use in the cluster. Prerequisites Install the OpenShift Container Platform CLI ( oc ). Log in as a user with cluster-admin privileges. Procedure Install the AMQ Interconnect Operator to its own amq-interconnect namespace. See Installing the AMQ Interconnect Operator . Verification Verify that the AMQ Interconnect Operator is available and the required pods are running: USD oc get pods -n amq-interconnect Example output NAME READY STATUS RESTARTS AGE amq-interconnect-645db76c76-k8ghs 1/1 Running 0 23h interconnect-operator-5cb5fc7cc-4v7qm 1/1 Running 0 23h Verify that the required bare-metal-event-relay bare-metal event producer pod is running in the openshift-bare-metal-events namespace: USD oc get pods -n openshift-bare-metal-events Example output NAME READY STATUS RESTARTS AGE hw-event-proxy-operator-controller-manager-74d5649b7c-dzgtl 2/2 Running 0 25s 11.4. Subscribing to Redfish BMC bare-metal events for a cluster node As a cluster administrator, you can subscribe to Redfish BMC events generated on a node in your cluster by creating a BMCEventSubscription custom resource (CR) for the node, creating a HardwareEvent CR for the event, and a Secret CR for the BMC. 11.4.1. Subscribing to bare-metal events You can configure the baseboard management controller (BMC) to send bare-metal events to subscribed applications running in an OpenShift Container Platform cluster. Example Redfish bare-metal events include an increase in device temperature, or removal of a device. You subscribe applications to bare-metal events using a REST API. Important You can only create a BMCEventSubscription custom resource (CR) for physical hardware that supports Redfish and has a vendor interface set to redfish or idrac-redfish . Note Use the BMCEventSubscription CR to subscribe to predefined Redfish events. The Redfish standard does not provide an option to create specific alerts and thresholds. 
For example, to receive an alert event when an enclosure's temperature exceeds 40deg Celsius, you must manually configure the event according to the vendor's recommendations. Perform the following procedure to subscribe to bare-metal events for the node using a BMCEventSubscription CR. Prerequisites Install the OpenShift CLI ( oc ). Log in as a user with cluster-admin privileges. Get the user name and password for the BMC. Deploy a bare-metal node with a Redfish-enabled Baseboard Management Controller (BMC) in your cluster, and enable Redfish events on the BMC. Note Enabling Redfish events on specific hardware is outside the scope of this information. For more information about enabling Redfish events for your specific hardware, consult the BMC manufacturer documentation. Procedure Confirm that the node hardware has the Redfish EventService enabled by running the following curl command: curl https://<bmc_ip_address>/redfish/v1/EventService --insecure -H 'Content-Type: application/json' -u "<bmc_username>:<password>" where: bmc_ip_address is the IP address of the BMC where the Redfish events are generated. Example output { "@odata.context": "/redfish/v1/USDmetadata#EventService.EventService", "@odata.id": "/redfish/v1/EventService", "@odata.type": "#EventService.v1_0_2.EventService", "Actions": { "#EventService.SubmitTestEvent": { "[email protected]": ["StatusChange", "ResourceUpdated", "ResourceAdded", "ResourceRemoved", "Alert"], "target": "/redfish/v1/EventService/Actions/EventService.SubmitTestEvent" } }, "DeliveryRetryAttempts": 3, "DeliveryRetryIntervalSeconds": 30, "Description": "Event Service represents the properties for the service", "EventTypesForSubscription": ["StatusChange", "ResourceUpdated", "ResourceAdded", "ResourceRemoved", "Alert"], "[email protected]": 5, "Id": "EventService", "Name": "Event Service", "ServiceEnabled": true, "Status": { "Health": "OK", "HealthRollup": "OK", "State": "Enabled" }, "Subscriptions": { "@odata.id": "/redfish/v1/EventService/Subscriptions" } } Get the Bare Metal Event Relay service route for the cluster by running the following command: USD oc get route -n openshift-bare-metal-events Example output NAME HOST/PORT PATH SERVICES PORT TERMINATION WILDCARD hw-event-proxy hw-event-proxy-openshift-bare-metal-events.apps.compute-1.example.com hw-event-proxy-service 9087 edge None Create a BMCEventSubscription resource to subscribe to the Redfish events: Save the following YAML in the bmc_sub.yaml file: apiVersion: metal3.io/v1alpha1 kind: BMCEventSubscription metadata: name: sub-01 namespace: openshift-machine-api spec: hostName: <hostname> 1 destination: <proxy_service_url> 2 context: '' 1 Specifies the name or UUID of the worker node where the Redfish events are generated. 2 Specifies the bare-metal event proxy service, for example, https://hw-event-proxy-openshift-bare-metal-events.apps.compute-1.example.com/webhook . Create the BMCEventSubscription CR: USD oc create -f bmc_sub.yaml Optional: To delete the BMC event subscription, run the following command: USD oc delete -f bmc_sub.yaml Optional: To manually create a Redfish event subscription without creating a BMCEventSubscription CR, run the following curl command, specifying the BMC username and password. 
USD curl -i -k -X POST -H "Content-Type: application/json" -d '{"Destination": "https://<proxy_service_url>", "Protocol" : "Redfish", "EventTypes": ["Alert"], "Context": "root"}' -u <bmc_username>:<password> 'https://<bmc_ip_address>/redfish/v1/EventService/Subscriptions' -v where: proxy_service_url is the bare-metal event proxy service, for example, https://hw-event-proxy-openshift-bare-metal-events.apps.compute-1.example.com/webhook . bmc_ip_address is the IP address of the BMC where the Redfish events are generated. Example output HTTP/1.1 201 Created Server: AMI MegaRAC Redfish Service Location: /redfish/v1/EventService/Subscriptions/1 Allow: GET, POST Access-Control-Allow-Origin: * Access-Control-Expose-Headers: X-Auth-Token Access-Control-Allow-Headers: X-Auth-Token Access-Control-Allow-Credentials: true Cache-Control: no-cache, must-revalidate Link: <http://redfish.dmtf.org/schemas/v1/EventDestination.v1_6_0.json>; rel=describedby Link: <http://redfish.dmtf.org/schemas/v1/EventDestination.v1_6_0.json> Link: </redfish/v1/EventService/Subscriptions>; path= ETag: "1651135676" Content-Type: application/json; charset=UTF-8 OData-Version: 4.0 Content-Length: 614 Date: Thu, 28 Apr 2022 08:47:57 GMT 11.4.2. Querying Redfish bare-metal event subscriptions with curl Some hardware vendors limit the amount of Redfish hardware event subscriptions. You can query the number of Redfish event subscriptions by using curl . Prerequisites Get the user name and password for the BMC. Deploy a bare-metal node with a Redfish-enabled Baseboard Management Controller (BMC) in your cluster, and enable Redfish hardware events on the BMC. Procedure Check the current subscriptions for the BMC by running the following curl command: USD curl --globoff -H "Content-Type: application/json" -k -X GET --user <bmc_username>:<password> https://<bmc_ip_address>/redfish/v1/EventService/Subscriptions where: bmc_ip_address is the IP address of the BMC where the Redfish events are generated. Example output % Total % Received % Xferd Average Speed Time Time Time Current Dload Upload Total Spent Left Speed 100 435 100 435 0 0 399 0 0:00:01 0:00:01 --:--:-- 399 { "@odata.context": "/redfish/v1/USDmetadata#EventDestinationCollection.EventDestinationCollection", "@odata.etag": "" 1651137375 "", "@odata.id": "/redfish/v1/EventService/Subscriptions", "@odata.type": "#EventDestinationCollection.EventDestinationCollection", "Description": "Collection for Event Subscriptions", "Members": [ { "@odata.id": "/redfish/v1/EventService/Subscriptions/1" }], "[email protected]": 1, "Name": "Event Subscriptions Collection" } In this example, a single subscription is configured: /redfish/v1/EventService/Subscriptions/1 . Optional: To remove the /redfish/v1/EventService/Subscriptions/1 subscription with curl , run the following command, specifying the BMC username and password: USD curl --globoff -L -w "%{http_code} %{url_effective}\n" -k -u <bmc_username>:<password >-H "Content-Type: application/json" -d '{}' -X DELETE https://<bmc_ip_address>/redfish/v1/EventService/Subscriptions/1 where: bmc_ip_address is the IP address of the BMC where the Redfish events are generated. 11.4.3. Creating the bare-metal event and Secret CRs To start using bare-metal events, create the HardwareEvent custom resource (CR) for the host where the Redfish hardware is present. Hardware events and faults are reported in the hw-event-proxy logs. Prerequisites Install the OpenShift CLI ( oc ). Log in as a user with cluster-admin privileges. 
Install the Bare Metal Event Relay. Create a BMCEventSubscription CR for the BMC Redfish hardware. Note Multiple HardwareEvent resources are not permitted. Procedure Create the HardwareEvent custom resource (CR): Save the following YAML in the hw-event.yaml file: apiVersion: "event.redhat-cne.org/v1alpha1" kind: "HardwareEvent" metadata: name: "hardware-event" spec: nodeSelector: node-role.kubernetes.io/hw-event: "" 1 transportHost: "amqp://amq-router-service-name.amq-namespace.svc.cluster.local" 2 logLevel: "debug" 3 msgParserTimeout: "10" 4 1 Required. Use the nodeSelector field to target nodes with the specified label, for example, node-role.kubernetes.io/hw-event: "" . 2 Required. AMQP host that delivers the events at the transport layer using the AMQP protocol. 3 Optional. The default value is debug . Sets the log level in hw-event-proxy logs. The following log levels are available: fatal , error , warning , info , debug , trace . 4 Optional. Sets the timeout value in milliseconds for the Message Parser. If a message parsing request is not responded to within the timeout duration, the original hardware event message is passed to the cloud native event framework. The default value is 10. Create the HardwareEvent CR: USD oc create -f hardware-event.yaml Create a BMC username and password Secret CR that enables the hardware events proxy to access the Redfish message registry for the bare-metal host. Save the following YAML in the hw-event-bmc-secret.yaml file: apiVersion: v1 kind: Secret metadata: name: redfish-basic-auth type: Opaque stringData: 1 username: <bmc_username> password: <bmc_password> # BMC host DNS or IP address hostaddr: <bmc_host_ip_address> 1 Enter plain text values for the various items under stringData . Create the Secret CR: USD oc create -f hw-event-bmc-secret.yaml 11.5. Subscribing applications to bare-metal events REST API reference Use the bare-metal events REST API to subscribe an application to the bare-metal events that are generated on the parent node. Subscribe applications to Redfish events by using the resource address /cluster/node/<node_name>/redfish/event , where <node_name> is the cluster node running the application. Deploy your cloud-event-consumer application container and cloud-event-proxy sidecar container in a separate application pod. The cloud-event-consumer application subscribes to the cloud-event-proxy container in the application pod. Use the following API endpoints to subscribe the cloud-event-consumer application to Redfish events posted by the cloud-event-proxy container at http://localhost:8089/api/ocloudNotifications/v1/ in the application pod: /api/ocloudNotifications/v1/subscriptions POST : Creates a new subscription GET : Retrieves a list of subscriptions /api/ocloudNotifications/v1/subscriptions/<subscription_id> GET : Returns details for the specified subscription ID api/ocloudNotifications/v1/subscriptions/status/<subscription_id> PUT : Creates a new status ping request for the specified subscription ID /api/ocloudNotifications/v1/health GET : Returns the health status of ocloudNotifications API Note 9089 is the default port for the cloud-event-consumer container deployed in the application pod. You can configure a different port for your application as required. api/ocloudNotifications/v1/subscriptions HTTP method GET api/ocloudNotifications/v1/subscriptions Description Returns a list of subscriptions. If subscriptions exist, a 200 OK status code is returned along with the list of subscriptions. 
Example API response [ { "id": "ca11ab76-86f9-428c-8d3a-666c24e34d32", "endpointUri": "http://localhost:9089/api/ocloudNotifications/v1/dummy", "uriLocation": "http://localhost:8089/api/ocloudNotifications/v1/subscriptions/ca11ab76-86f9-428c-8d3a-666c24e34d32", "resource": "/cluster/node/openshift-worker-0.openshift.example.com/redfish/event" } ] HTTP method POST api/ocloudNotifications/v1/subscriptions Description Creates a new subscription. If a subscription is successfully created, or if it already exists, a 201 Created status code is returned. Table 11.1. Query parameters Parameter Type subscription data Example payload { "uriLocation": "http://localhost:8089/api/ocloudNotifications/v1/subscriptions", "resource": "/cluster/node/openshift-worker-0.openshift.example.com/redfish/event" } api/ocloudNotifications/v1/subscriptions/<subscription_id> HTTP method GET api/ocloudNotifications/v1/subscriptions/<subscription_id> Description Returns details for the subscription with ID <subscription_id> Table 11.2. Query parameters Parameter Type <subscription_id> string Example API response { "id":"ca11ab76-86f9-428c-8d3a-666c24e34d32", "endpointUri":"http://localhost:9089/api/ocloudNotifications/v1/dummy", "uriLocation":"http://localhost:8089/api/ocloudNotifications/v1/subscriptions/ca11ab76-86f9-428c-8d3a-666c24e34d32", "resource":"/cluster/node/openshift-worker-0.openshift.example.com/redfish/event" } api/ocloudNotifications/v1/subscriptions/status/<subscription_id> HTTP method PUT api/ocloudNotifications/v1/subscriptions/status/<subscription_id> Description Creates a new status ping request for subscription with ID <subscription_id> . If a subscription is present, the status request is successful and a 202 Accepted status code is returned. Table 11.3. Query parameters Parameter Type <subscription_id> string Example API response {"status":"ping sent"} api/ocloudNotifications/v1/health/ HTTP method GET api/ocloudNotifications/v1/health/ Description Returns the health status for the ocloudNotifications REST API. Example API response OK | [
"apiVersion: v1 kind: Namespace metadata: name: openshift-bare-metal-events labels: name: openshift-bare-metal-events openshift.io/cluster-monitoring: \"true\"",
"oc create -f bare-metal-events-namespace.yaml",
"apiVersion: operators.coreos.com/v1 kind: OperatorGroup metadata: name: bare-metal-event-relay-group namespace: openshift-bare-metal-events spec: targetNamespaces: - openshift-bare-metal-events",
"oc create -f bare-metal-events-operatorgroup.yaml",
"apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: bare-metal-event-relay-subscription namespace: openshift-bare-metal-events spec: channel: \"stable\" name: bare-metal-event-relay source: redhat-operators sourceNamespace: openshift-marketplace",
"oc create -f bare-metal-events-sub.yaml",
"oc get csv -n openshift-bare-metal-events -o custom-columns=Name:.metadata.name,Phase:.status.phase",
"Name Phase bare-metal-event-relay.4.10.0-202206301927 Succeeded",
"oc get pods -n amq-interconnect",
"NAME READY STATUS RESTARTS AGE amq-interconnect-645db76c76-k8ghs 1/1 Running 0 23h interconnect-operator-5cb5fc7cc-4v7qm 1/1 Running 0 23h",
"oc get pods -n openshift-bare-metal-events",
"NAME READY STATUS RESTARTS AGE hw-event-proxy-operator-controller-manager-74d5649b7c-dzgtl 2/2 Running 0 25s",
"curl https://<bmc_ip_address>/redfish/v1/EventService --insecure -H 'Content-Type: application/json' -u \"<bmc_username>:<password>\"",
"{ \"@odata.context\": \"/redfish/v1/USDmetadata#EventService.EventService\", \"@odata.id\": \"/redfish/v1/EventService\", \"@odata.type\": \"#EventService.v1_0_2.EventService\", \"Actions\": { \"#EventService.SubmitTestEvent\": { \"[email protected]\": [\"StatusChange\", \"ResourceUpdated\", \"ResourceAdded\", \"ResourceRemoved\", \"Alert\"], \"target\": \"/redfish/v1/EventService/Actions/EventService.SubmitTestEvent\" } }, \"DeliveryRetryAttempts\": 3, \"DeliveryRetryIntervalSeconds\": 30, \"Description\": \"Event Service represents the properties for the service\", \"EventTypesForSubscription\": [\"StatusChange\", \"ResourceUpdated\", \"ResourceAdded\", \"ResourceRemoved\", \"Alert\"], \"[email protected]\": 5, \"Id\": \"EventService\", \"Name\": \"Event Service\", \"ServiceEnabled\": true, \"Status\": { \"Health\": \"OK\", \"HealthRollup\": \"OK\", \"State\": \"Enabled\" }, \"Subscriptions\": { \"@odata.id\": \"/redfish/v1/EventService/Subscriptions\" } }",
"oc get route -n openshift-bare-metal-events",
"NAME HOST/PORT PATH SERVICES PORT TERMINATION WILDCARD hw-event-proxy hw-event-proxy-openshift-bare-metal-events.apps.compute-1.example.com hw-event-proxy-service 9087 edge None",
"apiVersion: metal3.io/v1alpha1 kind: BMCEventSubscription metadata: name: sub-01 namespace: openshift-machine-api spec: hostName: <hostname> 1 destination: <proxy_service_url> 2 context: ''",
"oc create -f bmc_sub.yaml",
"oc delete -f bmc_sub.yaml",
"curl -i -k -X POST -H \"Content-Type: application/json\" -d '{\"Destination\": \"https://<proxy_service_url>\", \"Protocol\" : \"Redfish\", \"EventTypes\": [\"Alert\"], \"Context\": \"root\"}' -u <bmc_username>:<password> 'https://<bmc_ip_address>/redfish/v1/EventService/Subscriptions' -v",
"HTTP/1.1 201 Created Server: AMI MegaRAC Redfish Service Location: /redfish/v1/EventService/Subscriptions/1 Allow: GET, POST Access-Control-Allow-Origin: * Access-Control-Expose-Headers: X-Auth-Token Access-Control-Allow-Headers: X-Auth-Token Access-Control-Allow-Credentials: true Cache-Control: no-cache, must-revalidate Link: <http://redfish.dmtf.org/schemas/v1/EventDestination.v1_6_0.json>; rel=describedby Link: <http://redfish.dmtf.org/schemas/v1/EventDestination.v1_6_0.json> Link: </redfish/v1/EventService/Subscriptions>; path= ETag: \"1651135676\" Content-Type: application/json; charset=UTF-8 OData-Version: 4.0 Content-Length: 614 Date: Thu, 28 Apr 2022 08:47:57 GMT",
"curl --globoff -H \"Content-Type: application/json\" -k -X GET --user <bmc_username>:<password> https://<bmc_ip_address>/redfish/v1/EventService/Subscriptions",
"% Total % Received % Xferd Average Speed Time Time Time Current Dload Upload Total Spent Left Speed 100 435 100 435 0 0 399 0 0:00:01 0:00:01 --:--:-- 399 { \"@odata.context\": \"/redfish/v1/USDmetadata#EventDestinationCollection.EventDestinationCollection\", \"@odata.etag\": \"\" 1651137375 \"\", \"@odata.id\": \"/redfish/v1/EventService/Subscriptions\", \"@odata.type\": \"#EventDestinationCollection.EventDestinationCollection\", \"Description\": \"Collection for Event Subscriptions\", \"Members\": [ { \"@odata.id\": \"/redfish/v1/EventService/Subscriptions/1\" }], \"[email protected]\": 1, \"Name\": \"Event Subscriptions Collection\" }",
"curl --globoff -L -w \"%{http_code} %{url_effective}\\n\" -k -u <bmc_username>:<password >-H \"Content-Type: application/json\" -d '{}' -X DELETE https://<bmc_ip_address>/redfish/v1/EventService/Subscriptions/1",
"apiVersion: \"event.redhat-cne.org/v1alpha1\" kind: \"HardwareEvent\" metadata: name: \"hardware-event\" spec: nodeSelector: node-role.kubernetes.io/hw-event: \"\" 1 transportHost: \"amqp://amq-router-service-name.amq-namespace.svc.cluster.local\" 2 logLevel: \"debug\" 3 msgParserTimeout: \"10\" 4",
"oc create -f hardware-event.yaml",
"apiVersion: v1 kind: Secret metadata: name: redfish-basic-auth type: Opaque stringData: 1 username: <bmc_username> password: <bmc_password> # BMC host DNS or IP address hostaddr: <bmc_host_ip_address>",
"oc create -f hw-event-bmc-secret.yaml",
"[ { \"id\": \"ca11ab76-86f9-428c-8d3a-666c24e34d32\", \"endpointUri\": \"http://localhost:9089/api/ocloudNotifications/v1/dummy\", \"uriLocation\": \"http://localhost:8089/api/ocloudNotifications/v1/subscriptions/ca11ab76-86f9-428c-8d3a-666c24e34d32\", \"resource\": \"/cluster/node/openshift-worker-0.openshift.example.com/redfish/event\" } ]",
"{ \"uriLocation\": \"http://localhost:8089/api/ocloudNotifications/v1/subscriptions\", \"resource\": \"/cluster/node/openshift-worker-0.openshift.example.com/redfish/event\" }",
"{ \"id\":\"ca11ab76-86f9-428c-8d3a-666c24e34d32\", \"endpointUri\":\"http://localhost:9089/api/ocloudNotifications/v1/dummy\", \"uriLocation\":\"http://localhost:8089/api/ocloudNotifications/v1/subscriptions/ca11ab76-86f9-428c-8d3a-666c24e34d32\", \"resource\":\"/cluster/node/openshift-worker-0.openshift.example.com/redfish/event\" }",
"{\"status\":\"ping sent\"}",
"OK"
]
| https://docs.redhat.com/en/documentation/openshift_container_platform/4.10/html/monitoring/using-rfhe |
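Tying the REST reference above together, a consumer application can manage its subscription with plain HTTP calls from inside the application pod. A minimal sketch using the endpoints and example payload documented above; the node name is the same placeholder used in those examples:

# Create a subscription against the cloud-event-proxy sidecar on localhost:8089
curl -X POST http://localhost:8089/api/ocloudNotifications/v1/subscriptions \
  -H "Content-Type: application/json" \
  -d '{"uriLocation": "http://localhost:8089/api/ocloudNotifications/v1/subscriptions", "resource": "/cluster/node/openshift-worker-0.openshift.example.com/redfish/event"}'

# List current subscriptions and check the health endpoint
curl http://localhost:8089/api/ocloudNotifications/v1/subscriptions
curl http://localhost:8089/api/ocloudNotifications/v1/health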
Chapter 9. Upgrading RHACS Cloud Service | Chapter 9. Upgrading RHACS Cloud Service 9.1. Upgrading secured clusters in RHACS Cloud Service by using the Operator Red Hat provides regular service updates for the components that it manages, including Central services. These service updates include upgrades to new versions of Red Hat Advanced Cluster Security Cloud Service. You must regularly upgrade the version of RHACS on your secured clusters to ensure compatibility with RHACS Cloud Service. 9.1.1. Preparing to upgrade Before you upgrade the Red Hat Advanced Cluster Security for Kubernetes (RHACS) version, complete the following steps: If the cluster you are upgrading contains the SecuredCluster custom resource (CR), change the collection method to CORE_BPF . For more information, see "Changing the collection method". 9.1.1.1. Changing the collection method If the cluster that you are upgrading contains the SecuredCluster CR, you must ensure that the per node collection setting is set to CORE_BPF before you upgrade. Procedure In the OpenShift Container Platform web console, go to the RHACS Operator page. In the top navigation menu, select Secured Cluster . Click the instance name, for example, stackrox-secured-cluster-services . Use one of the following methods to change the setting: In the Form view , under Per Node Settings Collector Settings Collection , select CORE_BPF . Click YAML to open the YAML editor and locate the spec.perNode.collector.collection attribute. If the value is KernelModule or EBPF , then change it to CORE_BPF . Click Save. Additional resources Updating installed Operators 9.1.2. Rolling back an Operator upgrade for secured clusters To roll back an Operator upgrade, you can use either the CLI or the OpenShift Container Platform web console. Note On secured clusters, rolling back Operator upgrades is needed only in rare cases, for example, if an issue exists with the secured cluster. 9.1.2.1. Rolling back an Operator upgrade by using the CLI You can roll back the Operator version by using CLI commands. Procedure Delete the OLM subscription by running the following command: For OpenShift Container Platform, run the following command: USD oc -n rhacs-operator delete subscription rhacs-operator For Kubernetes, run the following command: USD kubectl -n rhacs-operator delete subscription rhacs-operator Delete the cluster service version (CSV) by running the following command: For OpenShift Container Platform, run the following command: USD oc -n rhacs-operator delete csv -l operators.coreos.com/rhacs-operator.rhacs-operator For Kubernetes, run the following command: USD kubectl -n rhacs-operator delete csv -l operators.coreos.com/rhacs-operator.rhacs-operator Install the latest version of the Operator on the rolled back channel. 9.1.2.2. Rolling back an Operator upgrade by using the web console You can roll back the Operator version by using the OpenShift Container Platform web console. Prerequisites You have access to an OpenShift Container Platform cluster web console using an account with cluster-admin permissions. Procedure Go to the Operators Installed Operators page. Click the RHACS Operator. On the Operator Details page, select Uninstall Operator from the Actions list. Following this action, the Operator stops running and no longer receives updates. Install the latest version of the Operator on the rolled back channel. Additional resources Operator Lifecycle Manager workflow Manually approving a pending Operator update 9.1.3. 
Troubleshooting Operator upgrade issues Follow these instructions to investigate and resolve upgrade-related issues for the RHACS Operator. 9.1.3.1. Central or Secured cluster fails to deploy When RHACS Operator has the following conditions, you must check the custom resource conditions to find the issue: If the Operator fails to deploy Secured Cluster If the Operator fails to apply CR changes to actual resources For Secured clusters, run the following command to check the conditions: USD oc -n rhacs-operator describe securedclusters.platform.stackrox.io 1 1 If you use Kubernetes, enter kubectl instead of oc . You can identify configuration errors from the conditions output: Example output Conditions: Last Transition Time: 2023-04-19T10:49:57Z Status: False Type: Deployed Last Transition Time: 2023-04-19T10:49:57Z Status: True Type: Initialized Last Transition Time: 2023-04-19T10:59:10Z Message: Deployment.apps "central" is invalid: spec.template.spec.containers[0].resources.requests: Invalid value: "50": must be less than or equal to cpu limit Reason: ReconcileError Status: True Type: Irreconcilable Last Transition Time: 2023-04-19T10:49:57Z Message: No proxy configuration is desired Reason: NoProxyConfig Status: False Type: ProxyConfigFailed Last Transition Time: 2023-04-19T10:49:57Z Message: Deployment.apps "central" is invalid: spec.template.spec.containers[0].resources.requests: Invalid value: "50": must be less than or equal to cpu limit Reason: InstallError Status: True Type: ReleaseFailed Additionally, you can view RHACS pod logs to find more information about the issue. Run the following command to view the logs: oc -n rhacs-operator logs deploy/rhacs-operator-controller-manager manager 1 1 If you use Kubernetes, enter kubectl instead of oc . 9.2. Upgrading secured clusters in RHACS Cloud Service by using Helm charts You can upgrade your secured clusters in RHACS Cloud Service by using Helm charts. If you installed RHACS secured clusters by using Helm charts, you can upgrade to the latest version of RHACS by updating the Helm chart and running the helm upgrade command. 9.2.1. Updating the Helm chart repository You must always update Helm charts before upgrading to a new version of Red Hat Advanced Cluster Security for Kubernetes. Prerequisites You must have already added the Red Hat Advanced Cluster Security for Kubernetes Helm chart repository. You must be using Helm version 3.8.3 or newer. Procedure Update Red Hat Advanced Cluster Security for Kubernetes charts repository. USD helm repo update Verification Run the following command to verify the added chart repository: USD helm search repo -l rhacs/ 9.2.2. Running the Helm upgrade command You can use the helm upgrade command to update Red Hat Advanced Cluster Security for Kubernetes (RHACS). Prerequisites You must have access to the values-private.yaml configuration file that you have used to install Red Hat Advanced Cluster Security for Kubernetes (RHACS). Otherwise, you must generate the values-private.yaml configuration file containing root certificates before proceeding with these commands. Procedure Run the helm upgrade command and specify the configuration files by using the -f option: USD helm upgrade -n stackrox stackrox-secured-cluster-services \ rhacs/secured-cluster-services --version <current-rhacs-version> \ 1 -f values-private.yaml 1 Use the -f option to specify the paths for your YAML configuration files. 9.2.3. Additional resources Installing RHACS Cloud Service on secured clusters by using Helm charts 9.3. 
Manually upgrading secured clusters in RHACS Cloud Service by using the roxctl CLI You can upgrade your secured clusters in RHACS Cloud Service by using the roxctl CLI. Important You need to manually upgrade secured clusters only if you used the roxctl CLI to install the secured clusters. 9.3.1. Upgrading the roxctl CLI To upgrade the roxctl CLI to the latest version, you must uninstall your current version of the roxctl CLI and then install the latest version of the roxctl CLI. 9.3.1.1. Uninstalling the roxctl CLI You can uninstall the roxctl CLI binary on Linux by using the following procedure. Procedure Find and delete the roxctl binary: USD ROXPATH=USD(which roxctl) && rm -f USDROXPATH 1 1 Depending on your environment, you might need administrator rights to delete the roxctl binary. 9.3.1.2. Installing the roxctl CLI on Linux You can install the roxctl CLI binary on Linux by using the following procedure. Note roxctl CLI for Linux is available for amd64 , arm64 , ppc64le , and s390x architectures. Procedure Determine the roxctl architecture for the target operating system: USD arch="USD(uname -m | sed "s/x86_64//")"; arch="USD{arch:+-USDarch}" Download the roxctl CLI: USD curl -L -f -o roxctl "https://mirror.openshift.com/pub/rhacs/assets/4.5.6/bin/Linux/roxctlUSD{arch}" Make the roxctl binary executable: USD chmod +x roxctl Place the roxctl binary in a directory that is on your PATH : To check your PATH , execute the following command: USD echo USDPATH Verification Verify the roxctl version you have installed: USD roxctl version 9.3.1.3. Installing the roxctl CLI on macOS You can install the roxctl CLI binary on macOS by using the following procedure. Note roxctl CLI for macOS is available for amd64 and arm64 architectures. Procedure Determine the roxctl architecture for the target operating system: USD arch="USD(uname -m | sed "s/x86_64//")"; arch="USD{arch:+-USDarch}" Download the roxctl CLI: USD curl -L -f -o roxctl "https://mirror.openshift.com/pub/rhacs/assets/4.5.6/bin/Darwin/roxctlUSD{arch}" Remove all extended attributes from the binary: USD xattr -c roxctl Make the roxctl binary executable: USD chmod +x roxctl Place the roxctl binary in a directory that is on your PATH : To check your PATH , execute the following command: USD echo USDPATH Verification Verify the roxctl version you have installed: USD roxctl version 9.3.1.4. Installing the roxctl CLI on Windows You can install the roxctl CLI binary on Windows by using the following procedure. Note roxctl CLI for Windows is available for the amd64 architecture. Procedure Download the roxctl CLI: USD curl -f -O https://mirror.openshift.com/pub/rhacs/assets/4.5.6/bin/Windows/roxctl.exe Verification Verify the roxctl version you have installed: USD roxctl version 9.3.2. Upgrading all secured clusters manually Important To ensure optimal functionality, use the same RHACS version for your secured clusters that RHACS Cloud Service is running. If you are using automatic upgrades, update all your secured clusters by using automatic upgrades. If you are not using automatic upgrades, complete the instructions in this section on all secured clusters. To complete manual upgrades of each secured cluster running Sensor, Collector, and Admission controller, follow these instructions. 9.3.2.1. Updating other images You must update the sensor, collector and compliance images on each secured cluster when not using automatic upgrades. Note If you are using Kubernetes, use kubectl instead of oc for the commands listed in this procedure. 
Procedure Update the Sensor image: USD oc -n stackrox set image deploy/sensor sensor=registry.redhat.io/advanced-cluster-security/rhacs-main-rhel8:4.5.6 1 1 If you use Kubernetes, enter kubectl instead of oc . Update the Compliance image: USD oc -n stackrox set image ds/collector compliance=registry.redhat.io/advanced-cluster-security/rhacs-main-rhel8:4.5.6 1 1 If you use Kubernetes, enter kubectl instead of oc . Update the Collector image: USD oc -n stackrox set image ds/collector collector=registry.redhat.io/advanced-cluster-security/rhacs-collector-rhel8:4.5.6 1 1 If you use Kubernetes, enter kubectl instead of oc . Note If you are using the collector slim image, run the following command instead: USD oc -n stackrox set image ds/collector collector=registry.redhat.io/advanced-cluster-security/rhacs-collector-slim-rhel8:4.5.6 Update the admission control image: USD oc -n stackrox set image deploy/admission-control admission-control=registry.redhat.io/advanced-cluster-security/rhacs-main-rhel8:4.5.6 Important If you have installed RHACS on Red Hat OpenShift by using the roxctl CLI, you need to migrate the security context constraints (SCCs). For more information, see "Migrating SCCs during the manual upgrade" in the "Additional resources" section. Additional resources Authenticating by using the roxctl CLI 9.3.2.2. Migrating SCCs during the manual upgrade By migrating the security context constraints (SCCs) during the manual upgrade by using the roxctl CLI, you can seamlessly transition the Red Hat Advanced Cluster Security for Kubernetes (RHACS) services to use the Red Hat OpenShift SCCs, ensuring compatibility and optimal security configurations across Central and all secured clusters. Procedure List all of the RHACS services that are deployed on all secured clusters: USD oc -n stackrox describe pods | grep 'openshift.io/scc\|^Name:' Example output Name: admission-control-6f4dcc6b4c-2phwd openshift.io/scc: stackrox-admission-control #... Name: central-575487bfcb-sjdx8 openshift.io/scc: stackrox-central Name: central-db-7c7885bb-6bgbd openshift.io/scc: stackrox-central-db Name: collector-56nkr openshift.io/scc: stackrox-collector #... Name: scanner-68fc55b599-f2wm6 openshift.io/scc: stackrox-scanner Name: scanner-68fc55b599-fztlh #... Name: sensor-84545f86b7-xgdwf openshift.io/scc: stackrox-sensor #... In this example, you can see that each pod has its own custom SCC, which is specified through the openshift.io/scc field. Add the required roles and role bindings so that the RHACS services use the Red Hat OpenShift SCCs instead of the RHACS custom SCCs. To add the required roles and role bindings for all secured clusters, complete the following steps: Create a file named upgrade-scs.yaml that defines the role and role binding resources by using the following content: Example 9.1.
Example YAML file apiVersion: rbac.authorization.k8s.io/v1 kind: Role 1 metadata: annotations: email: [email protected] owner: stackrox labels: app.kubernetes.io/component: collector app.kubernetes.io/instance: stackrox-secured-cluster-services app.kubernetes.io/name: stackrox app.kubernetes.io/part-of: stackrox-secured-cluster-services app.kubernetes.io/version: 4.4.0 auto-upgrade.stackrox.io/component: sensor name: use-privileged-scc 2 namespace: stackrox 3 rules: 4 - apiGroups: - security.openshift.io resourceNames: - privileged resources: - securitycontextconstraints verbs: - use --- apiVersion: rbac.authorization.k8s.io/v1 kind: RoleBinding 5 metadata: annotations: email: [email protected] owner: stackrox labels: app.kubernetes.io/component: collector app.kubernetes.io/instance: stackrox-secured-cluster-services app.kubernetes.io/name: stackrox app.kubernetes.io/part-of: stackrox-secured-cluster-services app.kubernetes.io/version: 4.4.0 auto-upgrade.stackrox.io/component: sensor name: collector-use-scc 6 namespace: stackrox roleRef: 7 apiGroup: rbac.authorization.k8s.io kind: Role name: use-privileged-scc subjects: 8 - kind: ServiceAccount name: collector namespace: stackrox --- 1 The type of Kubernetes resource, in this example, Role . 2 The name of the role resource. 3 The namespace in which the role is created. 4 Describes the permissions granted by the role resource. 5 The type of Kubernetes resource, in this example, RoleBinding . 6 The name of the role binding resource. 7 Specifies the role to bind in the same namespace. 8 Specifies the subjects that are bound to the role. Create the role and role binding resources specified in the upgrade-scs.yaml file by running the following command: USD oc -n stackrox create -f ./upgrade-scs.yaml Important You must run this command on each secured cluster to create the role and role bindings specified in the upgrade-scs.yaml file. Delete the SCCs that are specific to RHACS: To delete the SCCs that are specific to all secured clusters, run the following command: USD oc delete scc/stackrox-admission-control scc/stackrox-collector scc/stackrox-sensor Important You must run this command on each secured cluster to delete the SCCs that are specific to each secured cluster. Verification Ensure that all the pods are using the correct SCCs by running the following command: USD oc -n stackrox describe pods | grep 'openshift.io/scc\|^Name:' Compare the output with the following table: Component custom SCC New Red Hat OpenShift 4 SCC Central stackrox-central nonroot-v2 Central-db stackrox-central-db nonroot-v2 Scanner stackrox-scanner nonroot-v2 Scanner-db stackrox-scanner nonroot-v2 Admission Controller stackrox-admission-control restricted-v2 Collector stackrox-collector privileged Sensor stackrox-sensor restricted-v2 9.3.2.2.1. Editing the GOMEMLIMIT environment variable for the Sensor deployment Upgrading to version 4.4 requires that you manually replace the GOMEMLIMIT environment variable with the ROX_MEMLIMIT environment variable. You must edit this variable for each deployment. Procedure Run the following command to edit the variable for the Sensor deployment: USD oc -n stackrox edit deploy/sensor 1 1 If you use Kubernetes, enter kubectl instead of oc . Replace the GOMEMLIMIT variable with ROX_MEMLIMIT . Save the file. 9.3.2.2.2. Editing the GOMEMLIMIT environment variable for the Collector deployment Upgrading to version 4.4 requires that you manually replace the GOMEMLIMIT environment variable with the ROX_MEMLIMIT environment variable.
You must edit this variable for each deployment. Procedure Run the following command to edit the variable for the Collector deployment: USD oc -n stackrox edit deploy/collector 1 1 If you use Kubernetes, enter kubectl instead of oc . Replace the GOMEMLIMIT variable with ROX_MEMLIMIT . Save the file. 9.3.2.2.3. Editing the GOMEMLIMIT environment variable for the Admission Controller deployment Upgrading to version 4.4 requires that you manually replace the GOMEMLIMIT environment variable with the ROX_MEMLIMIT environment variable. You must edit this variable for each deployment. Procedure Run the following command to edit the variable for the Admission Controller deployment: USD oc -n stackrox edit deploy/admission-control 1 1 If you use Kubernetes, enter kubectl instead of oc . Replace the GOMEMLIMIT variable with ROX_MEMLIMIT . Save the file. 9.3.2.2.4. Verifying secured cluster upgrade After you have upgraded secured clusters, verify that the updated pods are working. Procedure Check that the new pods have deployed: USD oc get deploy,ds -n stackrox -o wide 1 1 If you use Kubernetes, enter kubectl instead of oc . USD oc get pod -n stackrox --watch 1 1 If you use Kubernetes, enter kubectl instead of oc . 9.3.3. Enabling RHCOS node scanning If you use OpenShift Container Platform, you can enable scanning of Red Hat Enterprise Linux CoreOS (RHCOS) nodes for vulnerabilities by using Red Hat Advanced Cluster Security for Kubernetes (RHACS). Prerequisites For scanning RHCOS node hosts of the Secured cluster, you must have installed Secured cluster on OpenShift Container Platform 4.11 or later. For information about supported platforms and architecture, see the Red Hat Advanced Cluster Security for Kubernetes Support Matrix . For life cycle support information for RHACS, see the Red Hat Advanced Cluster Security for Kubernetes Support Policy . Procedure Run one of the following commands to update the compliance container. 
For a default compliance container with metrics disabled, run the following command: USD oc -n stackrox patch daemonset/collector -p '{"spec":{"template":{"spec":{"containers":[{"name":"compliance","env":[{"name":"ROX_METRICS_PORT","value":"disabled"},{"name":"ROX_NODE_SCANNING_ENDPOINT","value":"127.0.0.1:8444"},{"name":"ROX_NODE_SCANNING_INTERVAL","value":"4h"},{"name":"ROX_NODE_SCANNING_INTERVAL_DEVIATION","value":"24m"},{"name":"ROX_NODE_SCANNING_MAX_INITIAL_WAIT","value":"5m"},{"name":"ROX_RHCOS_NODE_SCANNING","value":"true"},{"name":"ROX_CALL_NODE_INVENTORY_ENABLED","value":"true"}]}]}}}}' For a compliance container with Prometheus metrics enabled, run the following command: USD oc -n stackrox patch daemonset/collector -p '{"spec":{"template":{"spec":{"containers":[{"name":"compliance","env":[{"name":"ROX_METRICS_PORT","value":":9091"},{"name":"ROX_NODE_SCANNING_ENDPOINT","value":"127.0.0.1:8444"},{"name":"ROX_NODE_SCANNING_INTERVAL","value":"4h"},{"name":"ROX_NODE_SCANNING_INTERVAL_DEVIATION","value":"24m"},{"name":"ROX_NODE_SCANNING_MAX_INITIAL_WAIT","value":"5m"},{"name":"ROX_RHCOS_NODE_SCANNING","value":"true"},{"name":"ROX_CALL_NODE_INVENTORY_ENABLED","value":"true"}]}]}}}}' Update the Collector DaemonSet (DS) by taking the following steps: Add new volume mounts to Collector DS by running the following command: USD oc -n stackrox patch daemonset/collector -p '{"spec":{"template":{"spec":{"volumes":[{"name":"tmp-volume","emptyDir":{}},{"name":"cache-volume","emptyDir":{"sizeLimit":"200Mi"}}]}}}}' Add the new NodeScanner container by running the following command: USD oc -n stackrox patch daemonset/collector -p '{"spec":{"template":{"spec":{"containers":[{"command":["/scanner","--nodeinventory","--config=",""],"env":[{"name":"ROX_NODE_NAME","valueFrom":{"fieldRef":{"apiVersion":"v1","fieldPath":"spec.nodeName"}}},{"name":"ROX_CLAIR_V4_SCANNING","value":"true"},{"name":"ROX_COMPLIANCE_OPERATOR_INTEGRATION","value":"true"},{"name":"ROX_CSV_EXPORT","value":"false"},{"name":"ROX_DECLARATIVE_CONFIGURATION","value":"false"},{"name":"ROX_INTEGRATIONS_AS_CONFIG","value":"false"},{"name":"ROX_NETPOL_FIELDS","value":"true"},{"name":"ROX_NETWORK_DETECTION_BASELINE_SIMULATION","value":"true"},{"name":"ROX_NETWORK_GRAPH_PATTERNFLY","value":"true"},{"name":"ROX_NODE_SCANNING_CACHE_TIME","value":"3h36m"},{"name":"ROX_NODE_SCANNING_INITIAL_BACKOFF","value":"30s"},{"name":"ROX_NODE_SCANNING_MAX_BACKOFF","value":"5m"},{"name":"ROX_PROCESSES_LISTENING_ON_PORT","value":"false"},{"name":"ROX_QUAY_ROBOT_ACCOUNTS","value":"true"},{"name":"ROX_ROXCTL_NETPOL_GENERATE","value":"true"},{"name":"ROX_SOURCED_AUTOGENERATED_INTEGRATIONS","value":"false"},{"name":"ROX_SYSLOG_EXTRA_FIELDS","value":"true"},{"name":"ROX_SYSTEM_HEALTH_PF","value":"false"},{"name":"ROX_VULN_MGMT_WORKLOAD_CVES","value":"false"}],"image":"registry.redhat.io/advanced-cluster-security/rhacs-scanner-slim-rhel8:4.5.6","imagePullPolicy":"IfNotPresent","name":"node-inventory","ports":[{"containerPort":8444,"name":"grpc","protocol":"TCP"}],"volumeMounts":[{"mountPath":"/host","name":"host-root-ro","readOnly":true},{"mountPath":"/tmp/","name":"tmp-volume"},{"mountPath":"/cache","name":"cache-volume"}]}]}}}}' Additional resources Scanning RHCOS node hosts | [
"oc -n rhacs-operator delete subscription rhacs-operator",
"kubectl -n rhacs-operator delete subscription rhacs-operator",
"oc -n rhacs-operator delete csv -l operators.coreos.com/rhacs-operator.rhacs-operator",
"kubectl -n rhacs-operator delete csv -l operators.coreos.com/rhacs-operator.rhacs-operator",
"oc -n rhacs-operator describe securedclusters.platform.stackrox.io 1",
"Conditions: Last Transition Time: 2023-04-19T10:49:57Z Status: False Type: Deployed Last Transition Time: 2023-04-19T10:49:57Z Status: True Type: Initialized Last Transition Time: 2023-04-19T10:59:10Z Message: Deployment.apps \"central\" is invalid: spec.template.spec.containers[0].resources.requests: Invalid value: \"50\": must be less than or equal to cpu limit Reason: ReconcileError Status: True Type: Irreconcilable Last Transition Time: 2023-04-19T10:49:57Z Message: No proxy configuration is desired Reason: NoProxyConfig Status: False Type: ProxyConfigFailed Last Transition Time: 2023-04-19T10:49:57Z Message: Deployment.apps \"central\" is invalid: spec.template.spec.containers[0].resources.requests: Invalid value: \"50\": must be less than or equal to cpu limit Reason: InstallError Status: True Type: ReleaseFailed",
"-n rhacs-operator logs deploy/rhacs-operator-controller-manager manager 1",
"helm repo update",
"helm search repo -l rhacs/",
"helm upgrade -n stackrox stackrox-secured-cluster-services rhacs/secured-cluster-services --version <current-rhacs-version> \\ 1 -f values-private.yaml",
"ROXPATH=USD(which roxctl) && rm -f USDROXPATH 1",
"arch=\"USD(uname -m | sed \"s/x86_64//\")\"; arch=\"USD{arch:+-USDarch}\"",
"curl -L -f -o roxctl \"https://mirror.openshift.com/pub/rhacs/assets/4.5.6/bin/Linux/roxctlUSD{arch}\"",
"chmod +x roxctl",
"echo USDPATH",
"roxctl version",
"arch=\"USD(uname -m | sed \"s/x86_64//\")\"; arch=\"USD{arch:+-USDarch}\"",
"curl -L -f -o roxctl \"https://mirror.openshift.com/pub/rhacs/assets/4.5.6/bin/Darwin/roxctlUSD{arch}\"",
"xattr -c roxctl",
"chmod +x roxctl",
"echo USDPATH",
"roxctl version",
"curl -f -O https://mirror.openshift.com/pub/rhacs/assets/4.5.6/bin/Windows/roxctl.exe",
"roxctl version",
"oc -n stackrox set image deploy/sensor sensor=registry.redhat.io/advanced-cluster-security/rhacs-main-rhel8:4.5.6 1",
"oc -n stackrox set image ds/collector compliance=registry.redhat.io/advanced-cluster-security/rhacs-main-rhel8:4.5.6 1",
"oc -n stackrox set image ds/collector collector=registry.redhat.io/advanced-cluster-security/rhacs-collector-rhel8:4.5.6 1",
"oc -n stackrox set image ds/collector collector=registry.redhat.io/advanced-cluster-security/rhacs-collector-slim-rhel8:{rhacs-version}",
"oc -n stackrox set image deploy/admission-control admission-control=registry.redhat.io/advanced-cluster-security/rhacs-main-rhel8:4.5.6",
"oc -n stackrox describe pods | grep 'openshift.io/scc\\|^Name:'",
"Name: admission-control-6f4dcc6b4c-2phwd openshift.io/scc: stackrox-admission-control # Name: central-575487bfcb-sjdx8 openshift.io/scc: stackrox-central Name: central-db-7c7885bb-6bgbd openshift.io/scc: stackrox-central-db Name: collector-56nkr openshift.io/scc: stackrox-collector # Name: scanner-68fc55b599-f2wm6 openshift.io/scc: stackrox-scanner Name: scanner-68fc55b599-fztlh # Name: sensor-84545f86b7-xgdwf openshift.io/scc: stackrox-sensor #",
"apiVersion: rbac.authorization.k8s.io/v1 kind: Role 1 metadata: annotations: email: [email protected] owner: stackrox labels: app.kubernetes.io/component: collector app.kubernetes.io/instance: stackrox-secured-cluster-services app.kubernetes.io/name: stackrox app.kubernetes.io/part-of: stackrox-secured-cluster-services app.kubernetes.io/version: 4.4.0 auto-upgrade.stackrox.io/component: sensor name: use-privileged-scc 2 namespace: stackrox 3 rules: 4 - apiGroups: - security.openshift.io resourceNames: - privileged resources: - securitycontextconstraints verbs: - use - - - apiVersion: rbac.authorization.k8s.io/v1 kind: RoleBinding 5 metadata: annotations: email: [email protected] owner: stackrox labels: app.kubernetes.io/component: collector app.kubernetes.io/instance: stackrox-secured-cluster-services app.kubernetes.io/name: stackrox app.kubernetes.io/part-of: stackrox-secured-cluster-services app.kubernetes.io/version: 4.4.0 auto-upgrade.stackrox.io/component: sensor name: collector-use-scc 6 namespace: stackrox roleRef: 7 apiGroup: rbac.authorization.k8s.io kind: Role name: use-privileged-scc subjects: 8 - kind: ServiceAccount name: collector namespace: stackrox - - -",
"oc -n stackrox create -f ./update-scs.yaml",
"oc delete scc/stackrox-admission-control scc/stackrox-collector scc/stackrox-sensor",
"oc -n stackrox describe pods | grep 'openshift.io/scc\\|^Name:'",
"oc -n stackrox edit deploy/sensor 1",
"oc -n stackrox edit deploy/collector 1",
"oc -n stackrox edit deploy/admission-control 1",
"oc get deploy,ds -n stackrox -o wide 1",
"oc get pod -n stackrox --watch 1",
"oc -n stackrox patch daemonset/collector -p '{\"spec\":{\"template\":{\"spec\":{\"containers\":[{\"name\":\"compliance\",\"env\":[{\"name\":\"ROX_METRICS_PORT\",\"value\":\"disabled\"},{\"name\":\"ROX_NODE_SCANNING_ENDPOINT\",\"value\":\"127.0.0.1:8444\"},{\"name\":\"ROX_NODE_SCANNING_INTERVAL\",\"value\":\"4h\"},{\"name\":\"ROX_NODE_SCANNING_INTERVAL_DEVIATION\",\"value\":\"24m\"},{\"name\":\"ROX_NODE_SCANNING_MAX_INITIAL_WAIT\",\"value\":\"5m\"},{\"name\":\"ROX_RHCOS_NODE_SCANNING\",\"value\":\"true\"},{\"name\":\"ROX_CALL_NODE_INVENTORY_ENABLED\",\"value\":\"true\"}]}]}}}}'",
"oc -n stackrox patch daemonset/collector -p '{\"spec\":{\"template\":{\"spec\":{\"containers\":[{\"name\":\"compliance\",\"env\":[{\"name\":\"ROX_METRICS_PORT\",\"value\":\":9091\"},{\"name\":\"ROX_NODE_SCANNING_ENDPOINT\",\"value\":\"127.0.0.1:8444\"},{\"name\":\"ROX_NODE_SCANNING_INTERVAL\",\"value\":\"4h\"},{\"name\":\"ROX_NODE_SCANNING_INTERVAL_DEVIATION\",\"value\":\"24m\"},{\"name\":\"ROX_NODE_SCANNING_MAX_INITIAL_WAIT\",\"value\":\"5m\"},{\"name\":\"ROX_RHCOS_NODE_SCANNING\",\"value\":\"true\"},{\"name\":\"ROX_CALL_NODE_INVENTORY_ENABLED\",\"value\":\"true\"}]}]}}}}'",
"oc -n stackrox patch daemonset/collector -p '{\"spec\":{\"template\":{\"spec\":{\"volumes\":[{\"name\":\"tmp-volume\",\"emptyDir\":{}},{\"name\":\"cache-volume\",\"emptyDir\":{\"sizeLimit\":\"200Mi\"}}]}}}}'",
"oc -n stackrox patch daemonset/collector -p '{\"spec\":{\"template\":{\"spec\":{\"containers\":[{\"command\":[\"/scanner\",\"--nodeinventory\",\"--config=\",\"\"],\"env\":[{\"name\":\"ROX_NODE_NAME\",\"valueFrom\":{\"fieldRef\":{\"apiVersion\":\"v1\",\"fieldPath\":\"spec.nodeName\"}}},{\"name\":\"ROX_CLAIR_V4_SCANNING\",\"value\":\"true\"},{\"name\":\"ROX_COMPLIANCE_OPERATOR_INTEGRATION\",\"value\":\"true\"},{\"name\":\"ROX_CSV_EXPORT\",\"value\":\"false\"},{\"name\":\"ROX_DECLARATIVE_CONFIGURATION\",\"value\":\"false\"},{\"name\":\"ROX_INTEGRATIONS_AS_CONFIG\",\"value\":\"false\"},{\"name\":\"ROX_NETPOL_FIELDS\",\"value\":\"true\"},{\"name\":\"ROX_NETWORK_DETECTION_BASELINE_SIMULATION\",\"value\":\"true\"},{\"name\":\"ROX_NETWORK_GRAPH_PATTERNFLY\",\"value\":\"true\"},{\"name\":\"ROX_NODE_SCANNING_CACHE_TIME\",\"value\":\"3h36m\"},{\"name\":\"ROX_NODE_SCANNING_INITIAL_BACKOFF\",\"value\":\"30s\"},{\"name\":\"ROX_NODE_SCANNING_MAX_BACKOFF\",\"value\":\"5m\"},{\"name\":\"ROX_PROCESSES_LISTENING_ON_PORT\",\"value\":\"false\"},{\"name\":\"ROX_QUAY_ROBOT_ACCOUNTS\",\"value\":\"true\"},{\"name\":\"ROX_ROXCTL_NETPOL_GENERATE\",\"value\":\"true\"},{\"name\":\"ROX_SOURCED_AUTOGENERATED_INTEGRATIONS\",\"value\":\"false\"},{\"name\":\"ROX_SYSLOG_EXTRA_FIELDS\",\"value\":\"true\"},{\"name\":\"ROX_SYSTEM_HEALTH_PF\",\"value\":\"false\"},{\"name\":\"ROX_VULN_MGMT_WORKLOAD_CVES\",\"value\":\"false\"}],\"image\":\"registry.redhat.io/advanced-cluster-security/rhacs-scanner-slim-rhel8:4.5.6\",\"imagePullPolicy\":\"IfNotPresent\",\"name\":\"node-inventory\",\"ports\":[{\"containerPort\":8444,\"name\":\"grpc\",\"protocol\":\"TCP\"}],\"volumeMounts\":[{\"mountPath\":\"/host\",\"name\":\"host-root-ro\",\"readOnly\":true},{\"mountPath\":\"/tmp/\",\"name\":\"tmp-volume\"},{\"mountPath\":\"/cache\",\"name\":\"cache-volume\"}]}]}}}}'"
]
| https://docs.redhat.com/en/documentation/red_hat_advanced_cluster_security_for_kubernetes/4.5/html/rhacs_cloud_service/upgrading-rhacs-cloud-service |
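The manual image updates in the preceding chapter are run one command at a time. The following sketch consolidates them into a single pass and waits for the rollouts to finish; it assumes the default stackrox namespace, the non-slim collector image, and a target version of 4.5.6, and the CLI variable is a hypothetical convenience so the same script can run with oc or kubectl.

#!/usr/bin/env bash
# Sketch: apply the per-image updates from the manual secured cluster upgrade
# procedure in one pass, then wait for the rollouts to finish.
set -euo pipefail

VERSION="4.5.6"       # target RHACS version (assumption: matches RHACS Cloud Service)
CLI="oc"              # set to "kubectl" on plain Kubernetes clusters
REGISTRY="registry.redhat.io/advanced-cluster-security"

"$CLI" -n stackrox set image deploy/sensor \
    sensor="${REGISTRY}/rhacs-main-rhel8:${VERSION}"
"$CLI" -n stackrox set image ds/collector \
    compliance="${REGISTRY}/rhacs-main-rhel8:${VERSION}" \
    collector="${REGISTRY}/rhacs-collector-rhel8:${VERSION}"
"$CLI" -n stackrox set image deploy/admission-control \
    admission-control="${REGISTRY}/rhacs-main-rhel8:${VERSION}"

"$CLI" -n stackrox rollout status deploy/sensor --timeout=300s
"$CLI" -n stackrox rollout status deploy/admission-control --timeout=300s
"$CLI" -n stackrox rollout status ds/collector --timeout=300s

Run the sketch once per secured cluster; it does not replace the SCC migration or GOMEMLIMIT steps that follow in the procedure.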
Chapter 3. Rebooting Compute nodes | Chapter 3. Rebooting Compute nodes You can reboot your Compute nodes any time after you complete the minor update. First, check which updated nodes require a reboot, and then specify them in an OpenStackDataPlaneDeployment custom resource (CR) to start the reboot. Until the reboot, your environment still uses the old kernel and the old Open vSwitch (OVS) for Data Plane Development Kit (DPDK) implementations. To ensure minimal downtime of instances in your Red Hat OpenStack Services on OpenShift (RHOSO) environment, you should migrate the instances from the Compute node that you need to reboot. Prerequisites You have decided whether to migrate instances to another Compute node before you start the reboot. Note If you have a Multi-RHEL environment, and you want to migrate virtual machines from a Compute node that is running RHEL 9.4 to a Compute node that is running RHEL 9.2, only cold migration is supported. If you cannot migrate the instances, you can set the shutdown_timeout configuration option to control the state of the instances after the Compute node reboots. This option determines the number of seconds to wait for an instance to perform a clean shutdown. The default value is 60 . Procedure Confirm which updated nodes need a reboot: Replace <deployment_name> with the name of the deployment that includes your Compute nodes. Replace <nodeSet_name> with the names of the node sets that you need to check. The command shows the following output if a reboot is required: Reboot is required but was not started. Edpm_reboot_strategy is set to never or this is already deployed machine. Reboot has to be planned. To start reboot set edpm_reboot_strategy to force . Open a remote shell connection to the OpenStackClient pod: Retrieve a list of your Compute nodes to identify the host name of the nodes that require a reboot: Disable the Compute service on the Compute node that you need to reboot: Replace <hostname> with the host name of the Compute node on which you are disabling the service. List all instances on the Compute node: USD openstack server list --host <hostname> --all-projects Optional: If you decide to migrate the instances to another Compute node, for example, if you plan to reboot nodes that include running workloads, run the following command: USD openstack server migrate --live-migration --host <target_host> --wait <instance_id> Replace <instance_id> with your instance ID. Replace <target_host> with the host that you are migrating the instance to. Wait until migration completes. Confirm that the migration was successful: USD openstack server list --host <hostname> --all-projects Continue to migrate instances until none remain on the Compute node. Exit the OpenStackClient pod: Create an OpenStackDataPlaneDeployment CR to reboot the nodes: apiVersion: dataplane.openstack.org/v1beta1 kind: OpenStackDataPlaneDeployment metadata: name: openstack-edpm-ipam-reboot namespace: openstack spec: nodeSets: 1 - <nodeSet_name> servicesOverride: 2 - reboot-os ansibleExtraVars: 3 edpm_reboot_strategy: force ansibleLimit: <node_hostname>,...,<node_hostname> 4 1 Lists the OpenStackDataPlaneNodeSet CRs that contain the nodes that you are rebooting. 2 Specifies reboot-os as the only service to execute. 3 Reboots all the nodes in the node set at the same time. 4 Optional: Lists the individual nodes in the node set to reboot. If not set, all the nodes in the node set are rebooted at the same time.
Verify that the openstack-edpm-ipam-reboot deployment completed: If the deployment fails, see Troubleshooting data plane creation and deployment in the Deploying Red Hat OpenStack Services on OpenShift guide. Re-enable the Compute node: USD oc rsh openstackclient -n openstack USD openstack compute service set <hostname> nova-compute --enable Check that the Compute node is enabled: USD openstack compute service list | [
"oc logs jobs/reboot-os-<deployment_name>-<nodeSet_name>",
"oc rsh -n openstack openstackclient",
"openstack compute service list",
"openstack compute service set <hostname> nova-compute --disable",
"openstack server list --host <hostname> --all-projects",
"openstack server migrate --live-migration --host <target_host> --wait <instance_id>",
"openstack server list --host <hostname> --all-projects",
"exit",
"apiVersion: dataplane.openstack.org/v1beta1 kind: OpenStackDataPlaneDeployment metadata: name: openstack-edpm-ipam-reboot namespace: openstack spec: nodeSets: 1 - <nodeSet_name> servicesOverride: 2 - reboot-os ansibleExtraVars: 3 edpm_reboot_strategy: force ansibleLimit: <node_hostname>,...,<node_hostname> 4",
"oc get openstackdataplanedeployment NAME STATUS MESSAGE openstack-edpm-deployment-ipam-reboot True Setup complete",
"oc rsh openstackclient -n openstack openstack compute service set <hostname> nova-compute --enable",
"openstack compute service list"
]
| https://docs.redhat.com/en/documentation/red_hat_openstack_services_on_openshift/18.0/html/updating_your_environment_to_the_latest_maintenance_release/proc_rebooting-compute-nodes_perform-update |
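The migration steps in the preceding chapter are shown one instance at a time. The following sketch drains a whole Compute node before the reboot; the host name is a hypothetical placeholder, admin credentials are assumed to be sourced inside the openstackclient pod, and the scheduler is left to choose a target host instead of passing --host.

# Sketch: disable a Compute node, live-migrate every instance it hosts,
# and confirm that the node is empty before you create the reboot
# OpenStackDataPlaneDeployment CR.
HOST="compute-0.example.com"    # hypothetical Compute node host name

openstack compute service set "$HOST" nova-compute --disable

for ID in $(openstack server list --host "$HOST" --all-projects -f value -c ID); do
    openstack server migrate --live-migration --wait "$ID"
done

openstack server list --host "$HOST" --all-projects   # should list no servers

Remember to re-enable the nova-compute service on the node after the reboot completes, as described in the verification steps of the procedure.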
Chapter 42. Kernel | Chapter 42. Kernel Heterogeneous memory management included as a Technology Preview Red Hat Enterprise Linux 7.3 offers the heterogeneous memory management (HMM) feature as a Technology Preview. This feature has been added to the kernel as a helper layer for devices that want to mirror a process address space into their own memory management unit (MMU). Thus a non-CPU device processor is able to read system memory using the unified system address space. To enable this feature, add experimental_hmm=enable to the kernel command line. (BZ#1230959) User namespace This feature provides additional security to servers running Linux containers by providing better isolation between the host and the containers. Administrators of a container are no longer able to perform administrative operations on the host, which increases security. (BZ#1138782) libocrdma RoCE support on Oce141xx cards As a Technology Preview, the ocrdma module and the libocrdma package support the Remote Direct Memory Access over Converged Ethernet (RoCE) functionality on all network adapters in the Oce141xx family. (BZ#1334675) No-IOMMU mode for VFIO drivers As a Technology Preview, this update adds No-IOMMU mode for virtual function I/O (VFIO) drivers. The No-IOMMU mode provides the user with full user-space I/O (UIO) access to a direct memory access (DMA)-capable device without an I/O memory management unit (IOMMU). Note that in addition to not being supported, using this mode is not secure due to the lack of I/O management provided by IOMMU. (BZ#1299662) criu rebased to version 2.3 Red Hat Enterprise Linux 7.2 introduced the criu tool as a Technology Preview. This tool implements Checkpoint/Restore in User-space (CRIU) , which can be used to freeze a running application and store it as a collection of files. Later, the application can be restored from its frozen state. Note that the criu tool depends on Protocol Buffers , a language-neutral, platform-neutral extensible mechanism for serializing structured data. The protobuf and protobuf-c packages, which provide this dependency, were also introduced in Red Hat Enterprise Linux 7.2 as a Technology Preview. With Red Hat Enterprise Linux 7.3, the criu packages have been upgraded to upstream version 2.3, which provides a number of bug fixes and enhancements over the previous version. Notably, criu is now available also on Red Hat Enterprise Linux for POWER, little endian. Additionally, criu can now be used for the following applications running in a Red Hat Enterprise Linux 7 runc container: vsftpd apache httpd sendmail postgresql mongodb mariadb mysql tomcat dnsmasq (BZ#1296578) The ibmvnic Device Driver has been added The ibmvnic Device Driver has been introduced as a Technology Preview in Red Hat Enterprise Linux 7.3 for IBM POWER architectures. vNIC (Virtual Network Interface Controller) is a new PowerVM virtual networking technology that delivers enterprise capabilities and simplifies network management. It is a high-performance, efficient technology that, when combined with SR-IOV NIC, provides bandwidth control Quality of Service (QoS) capabilities at the virtual NIC level. vNIC significantly reduces virtualization overhead, resulting in lower latencies and fewer server resources, including CPU and memory, required for network virtualization. (BZ#947163) Kexec as a Technology Preview The kexec system call has been provided as a Technology Preview.
This system call enables loading and booting into another kernel from the currently running kernel, thus performing the function of the boot loader from within the kernel. Hardware initialization, which is normally done during a standard system boot, is not performed during a kexec boot, which significantly reduces the time required for a reboot. (BZ#1460849) | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/7.3_release_notes/technology_previews_kernel |
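As one hedged illustration of the HMM note above, the following commands show how the experimental_hmm=enable parameter could be added persistently on a RHEL 7 system; the use of grubby here is an assumption, and any equivalent boot loader configuration method works.

# Sketch: enable the HMM Technology Preview on the next boot (RHEL 7).
grubby --update-kernel=ALL --args="experimental_hmm=enable"
grubby --info=DEFAULT    # verify that "experimental_hmm=enable" appears in args
reboot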
Chapter 7. Managing Cluster Resources | Chapter 7. Managing Cluster Resources This chapter describes various commands you can use to manage cluster resources. It provides information on the following procedures. Section 7.1, "Manually Moving Resources Around the Cluster" Section 7.2, "Moving Resources Due to Failure" Section 7.4, "Enabling, Disabling, and Banning Cluster Resources" Section 7.5, "Disabling a Monitor Operation" 7.1. Manually Moving Resources Around the Cluster You can override the cluster and force resources to move from their current location. There are two occasions when you would want to do this: When a node is under maintenance, and you need to move all resources running on that node to a different node When a single resource needs to be moved To move all resources running on a node to a different node, you put the node in standby mode. For information on putting a cluster node in standby mode, see Section 3.2.4, "Standby Mode" . You can move individually specified resources in either of the following ways. You can use the pcs resource move command to move a resource off a node on which it is currently running, as described in Section 7.1.1, "Moving a Resource from its Current Node" . You can use the pcs resource relocate run command to move a resource to its preferred node, as determined by current cluster status, constraints, location of resources, and other settings. For information on this command, see Section 7.1.2, "Moving a Resource to its Preferred Node" . 7.1.1. Moving a Resource from its Current Node To move a resource off the node on which it is currently running, use the following command, specifying the resource_id of the resource as defined. Specify the destination_node if you want to indicate on which node to run the resource that you are moving. Note When you execute the pcs resource move command, this adds a constraint to the resource to prevent it from running on the node on which it is currently running. You can execute the pcs resource clear or the pcs constraint delete command to remove the constraint. This does not necessarily move the resources back to the original node; where the resources can run at that point depends on how you have configured your resources initially. If you specify the --master parameter of the pcs resource move command, the scope of the constraint is limited to the master role and you must specify master_id rather than resource_id . You can optionally configure a lifetime parameter for the pcs resource move command to indicate a period of time the constraint should remain. You specify the units of a lifetime parameter according to the format defined in ISO 8601, which requires that you specify the unit as a capital letter such as Y (for years), M (for months), W (for weeks), D (for days), H (for hours), M (for minutes), and S (for seconds). To distinguish a unit of minutes (M) from a unit of months (M), you must specify PT before indicating the value in minutes. For example, a lifetime parameter of 5M indicates an interval of five months, while a lifetime parameter of PT5M indicates an interval of five minutes. The lifetime parameter is checked at intervals defined by the cluster-recheck-interval cluster property. By default this value is 15 minutes. If your configuration requires that you check this parameter more frequently, you can reset this value with the following command.
You can optionally configure a --wait[= n ] parameter for the pcs resource move command to indicate the number of seconds to wait for the resource to start on the destination node before returning 0 if the resource is started or 1 if the resource has not yet started. If you do not specify n, the default resource timeout will be used. The following command moves the resource resource1 to node example-node2 and prevents it from moving back to the node on which it was originally running for one hour and thirty minutes. The following command moves the resource resource1 to node example-node2 and prevents it from moving back to the node on which it was originally running for thirty minutes. For information on resource constraints, see Chapter 6, Resource Constraints . 7.1.2. Moving a Resource to its Preferred Node After a resource has moved, either due to a failover or to an administrator manually moving the resource, it will not necessarily move back to its original node even after the circumstances that caused the failover have been corrected. To relocate resources to their preferred node, use the following command. A preferred node is determined by the current cluster status, constraints, resource location, and other settings and may change over time. If you do not specify any resources, all resources are relocated to their preferred nodes. This command calculates the preferred node for each resource while ignoring resource stickiness. After calculating the preferred node, it creates location constraints which will cause the resources to move to their preferred nodes. Once the resources have been moved, the constraints are deleted automatically. To remove all constraints created by the pcs resource relocate run command, you can run the pcs resource relocate clear command. To display the current status of resources and their optimal node ignoring resource stickiness, run the pcs resource relocate show command. | [
"pcs resource move resource_id [ destination_node ] [--master] [lifetime= lifetime ]",
"pcs property set cluster-recheck-interval= value",
"pcs resource move resource1 example-node2 lifetime=PT1H30M",
"pcs resource move resource1 example-node2 lifetime=PT30M",
"pcs resource relocate run [ resource1 ] [ resource2 ]"
]
| https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/configuring_the_red_hat_high_availability_add-on_with_pacemaker/ch-manageresource-haar |
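As a short illustration of the move-and-clear workflow described in the preceding chapter, the following sketch moves a resource for thirty minutes, shows the constraint that pcs creates, and then removes it; the resource and node names are illustrative.

# Sketch: temporarily move resource1 to example-node2, inspect the
# constraint that the move creates, then clear it.
pcs resource move resource1 example-node2 lifetime=PT30M
pcs constraint --full          # lists the location constraint added by the move
pcs resource clear resource1   # removes that constraint
pcs status resources           # confirm where the resource is now running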
Chapter 152. KafkaMirrorMaker2MirrorSpec schema reference | Chapter 152. KafkaMirrorMaker2MirrorSpec schema reference Used in: KafkaMirrorMaker2Spec Property Property type Description sourceCluster string The alias of the source cluster used by the Kafka MirrorMaker 2 connectors. The alias must match a cluster in the list at spec.clusters . targetCluster string The alias of the target cluster used by the Kafka MirrorMaker 2 connectors. The alias must match a cluster in the list at spec.clusters . sourceConnector KafkaMirrorMaker2ConnectorSpec The specification of the Kafka MirrorMaker 2 source connector. heartbeatConnector KafkaMirrorMaker2ConnectorSpec The specification of the Kafka MirrorMaker 2 heartbeat connector. checkpointConnector KafkaMirrorMaker2ConnectorSpec The specification of the Kafka MirrorMaker 2 checkpoint connector. topicsPattern string A regular expression matching the topics to be mirrored, for example, "topic1|topic2|topic3". Comma-separated lists are also supported. topicsBlacklistPattern string The topicsBlacklistPattern property has been deprecated, and should now be configured using .spec.mirrors.topicsExcludePattern . A regular expression matching the topics to exclude from mirroring. Comma-separated lists are also supported. topicsExcludePattern string A regular expression matching the topics to exclude from mirroring. Comma-separated lists are also supported. groupsPattern string A regular expression matching the consumer groups to be mirrored. Comma-separated lists are also supported. groupsBlacklistPattern string The groupsBlacklistPattern property has been deprecated, and should now be configured using .spec.mirrors.groupsExcludePattern . A regular expression matching the consumer groups to exclude from mirroring. Comma-separated lists are also supported. groupsExcludePattern string A regular expression matching the consumer groups to exclude from mirroring. Comma-separated lists are also supported. | null | https://docs.redhat.com/en/documentation/red_hat_streams_for_apache_kafka/2.9/html/streams_for_apache_kafka_api_reference/type-KafkaMirrorMaker2MirrorSpec-reference |
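To make the schema reference above more concrete, here is a hedged example of a single entry under spec.mirrors in a KafkaMirrorMaker2 resource; the cluster aliases, patterns, and connector configuration values are illustrative, and the aliases must match entries defined in spec.clusters.

# Sketch: one spec.mirrors entry using the properties listed above.
mirrors:
  - sourceCluster: my-source-cluster
    targetCluster: my-target-cluster
    sourceConnector:
      tasksMax: 2
      config:
        replication.factor: 3
    checkpointConnector:
      config:
        checkpoints.topic.replication.factor: 3
    topicsPattern: "topic1|topic2|topic3"
    topicsExcludePattern: "internal-.*"
    groupsPattern: ".*"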
Chapter 29. Graphics Driver Updates | Chapter 29. Graphics Driver Updates The vmwgfx driver has been upgraded to version 2.6.0.0. | null | https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/7.1_release_notes/ch29 |
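If you want to confirm which vmwgfx version a system is actually running after the update, one possible check is shown below; the exact modinfo output format can vary by release.

# Sketch: report the version of the vmwgfx kernel module on the running system.
modinfo vmwgfx | grep -i '^version'
lsmod | grep vmwgfx    # confirm that the module is loaded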
Chapter 24. Managing Certificates for Users, Hosts, and Services | Chapter 24. Managing Certificates for Users, Hosts, and Services Identity Management (IdM) supports two types of certificate authorities (CAs): Integrated IdM CA Integrated CAs can create, revoke, and issue certificates for users, hosts, and services. For more details, see Section 24.1, "Managing Certificates with the Integrated IdM CAs" . IdM supports creating lightweight sub-CAs. For more details, see Section 26.1, "Lightweight Sub-CAs" External CA An external CA is a CA other than the integrated IdM CA. Using IdM tools, you add certificates issued by these CAs to users, services, or hosts as well as remove them. For more details, see Section 24.2, "Managing Certificates Issued by External CAs" . Each user, host, or service can have multiple certificates assigned. Note For more details on the supported CA configurations of the IdM server, see Section 2.3.2, "Determining What CA Configuration to Use" . 24.1. Managing Certificates with the Integrated IdM CAs 24.1.1. Requesting New Certificates for a User, Host, or Service To request a certificate using: the IdM web UI, see the section called "Web UI: Requesting New Certificates" . the command line, see the section called "Command Line: Requesting New Certificates" . Note that you must generate the certificate request itself with a third-party tool. The following procedures use the certutil and openSSL utilities. Important Services typically run on dedicated service nodes on which the private keys are stored. Copying a service's private key to the IdM server is considered insecure. Therefore, when requesting a certificate for a service, create the CSR on the service node. Web UI: Requesting New Certificates Under the Identity tab, select the Users , Hosts , or Services subtab. Click the name of the user, host, or service to open its configuration page. Figure 24.1. List of Hosts Click Actions New Certificate . Optional: Select the issuing CA and profile ID. Follow the instructions on the screen for using certutil . Click Issue . Command Line: Requesting New Certificates Request a new certificate using certutil in standard situations - see Section 24.1.1.1, "Requesting New Certificates Using certutil" . Request a new certificate using openSSL to enable a Kerberos alias to use a host or service certificate - see Section 24.1.1.2, "Preparing a Certificate Request With Multiple SAN Fields Using OpenSSL" . 24.1.1.1. Requesting New Certificates Using certutil Create a temporary directory for the certificate database: Create a new temporary certificate database, for instance: Create the certificate signing request (CSR) and redirect the output to a file. For example, to create a CSR for a 4096 bit certificate and to set the subject to CN=server.example.com,O=EXAMPLE.COM : Submit the certificate request to the CA. For details, see Section 24.1.1.4, "Submitting a Certificate Request to the IdM CA" . 24.1.1.2. Preparing a Certificate Request With Multiple SAN Fields Using OpenSSL Create one or more aliases, for example test1/server.example.com , test2/server.example.com , for your Kerberos principal test/server.example.com . See Section 20.2.1, "Kerberos Principal Alias" for more details. In the CSR, add a subjectAltName for dnsName ( server.example.com ) and otherName ( test2/server.example.com ). 
To do this, configure the openssl.conf file so that it includes the following line specifying the UPN otherName and subjectAltName: Create a certificate request using openssl : Submit the certificate request to the CA. For details, see Section 24.1.1.4, "Submitting a Certificate Request to the IdM CA" . 24.1.1.3. Requesting New Certificates Using Certmonger You can use the certmonger service to request a certificate from an IdM CA. For details, see the Requesting a CA-signed Certificate Through SCEP section in the System-level Authentication Guide . 24.1.1.4. Submitting a Certificate Request to the IdM CA Submit the certificate request file to the CA running on the IdM server. Be sure to specify the Kerberos principal to associate with the newly-issued certificate: The ipa cert-request command in IdM uses the following defaults: Certificate profile: caIPAserviceCert To select a custom profile, use the --profile-id option with the ipa cert-request command. For further details about creating a custom certificate profile, see Section 24.4.1, "Creating a Certificate Profile" . Integrated CA: ipa (IdM root CA) To select a sub-CA, use the --ca option with the ipa cert-request command. For further details, see the output of the ipa cert-request --help command. 24.1.2. Revoking Certificates with the Integrated IdM CAs If you need to invalidate the certificate before its expiration date, you can revoke it. To revoke a certificate using: the IdM web UI, see the section called "Web UI: Revoking Certificates" the command line, see the section called "Command Line: Revoking Certificates" A revoked certificate is invalid and cannot be used for authentication. All revocations are permanent, except for reason 6: Certificate Hold. Table 24.1. Revocation Reasons ID Reason Explanation 0 Unspecified 1 Key Compromised The key that issued the certificate is no longer trusted. Possible causes: lost token, improperly accessed file. 2 CA Compromised The CA that issued the certificate is no longer trusted. 3 Affiliation Changed Possible causes: A person has left the company or moved to another department. A host or service is being retired. 4 Superseded A newer certificate has replaced the current certificate. 5 Cessation of Operation The host or service is being decommissioned. 6 Certificate Hold The certificate is temporarily revoked. You can restore the certificate later. 8 Remove from CRL The certificate is not included in the certificate revocation list (CRL). 9 Privilege Withdrawn The user, host, or service is no longer permitted to use the certificate. 10 Attribute Authority (AA) Compromise The AA certificate is no longer trusted. Web UI: Revoking Certificates To revoke a certificate: Open the Authentication tab, and select the Certificates subtab. Click the serial number of the certificate to open the certificate information page. Figure 24.2. List of Certificates Click Actions Revoke Certificate . Select the reason for revoking, and click Revoke . See Table 24.1, "Revocation Reasons" for details. Command Line: Revoking Certificates Use the ipa cert-revoke command, and specify: the certificate serial number a number that identifies the reason for the revocation; see Table 24.1, "Revocation Reasons" for details For example, to revoke the certificate with serial number 1032 because of reason 1: Key Compromised: 24.1.3. Restoring Certificates with the Integrated IdM CAs If you have revoked a certificate because of reason 6: Certificate Hold, you can restore it again. 
To restore a certificate using: the IdM web UI, see the section called "Web UI: Restoring Certificates" the command line, see the section called "Command Line: Restoring Certificates" Web UI: Restoring Certificates Open the Authentication tab, and select the Certificates subtab. Click the serial number of the certificate to open the certificate information page. Figure 24.3. List of Certificates Click Actions Restore Certificate . Command Line: Restoring Certificates Use the ipa cert-remove-hold command and specify the certificate serial number. For example: | [
"mkdir ~/certdb/",
"certutil -N -d ~/certdb/",
"certutil -R -d ~/certdb/ -a -g 4096 -s \" CN=server.example.com,O=EXAMPLE.COM \" -8 server.example.com > certificate_request.csr",
"otherName= 1.3.6.1.4.1.311.20.2.3 ;UTF8: test2/[email protected] DNS.1 = server.example.com",
"openssl req -new -newkey rsa: 2048 -keyout test2service.key -sha256 -nodes -out certificate_request.csr -config openssl.conf",
"ipa cert-request certificate_request.csr --principal= host/server.example.com",
"ipa cert-revoke 1032 --revocation-reason=1",
"ipa cert-remove-hold 1032"
]
| https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/linux_domain_identity_authentication_and_policy_guide/certificates |
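The otherName line shown for section 24.1.1.2 is only a fragment of the OpenSSL configuration. The following sketch shows one possible minimal layout for the openssl.conf file that places the dnsName and the Kerberos principal otherName entries in an alt_names section; the section names and the test2/server.example.com principal are illustrative assumptions.

# Sketch: minimal openssl.conf for a CSR with both a DNS SAN and a
# Kerberos principal otherName SAN (section 24.1.1.2).
[ req ]
prompt             = no
distinguished_name = req_distinguished_name
req_extensions     = v3_req

[ req_distinguished_name ]
O  = EXAMPLE.COM
CN = server.example.com

[ v3_req ]
subjectAltName = @alt_names

[ alt_names ]
otherName = 1.3.6.1.4.1.311.20.2.3;UTF8:test2/[email protected]
DNS.1     = server.example.com

Generate the request with openssl req -new -config openssl.conf as shown in the procedure, then submit it with ipa cert-request.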
Chapter 6. Troubleshooting networks | Chapter 6. Troubleshooting networks The diagnostic process of troubleshooting network connectivity in Red Hat OpenStack Platform is similar to the diagnostic process for physical networks. If you use VLANs, you can consider the virtual infrastructure as a trunked extension of the physical network, rather than a wholly separate environment. There are some differences between troubleshooting an ML2/OVS network and the default, ML2/OVN network. 6.1. Basic ping testing The ping command is a useful tool for analyzing network connectivity problems. The results serve as a basic indicator of network connectivity, but might not entirely exclude all connectivity issues, such as a firewall blocking the actual application traffic. The ping command sends traffic to specific destinations, and then reports back whether the attempts were successful. Note The ping command is an ICMP operation. To use ping , you must allow ICMP traffic to traverse any intermediary firewalls. Ping tests are most useful when run from the machine experiencing network issues, so it may be necessary to connect to the command line via the VNC management console if the machine seems to be completely offline. For example, the following ping test command validates multiple layers of network infrastructure in order to succeed; name resolution, IP routing, and network switching must all function correctly: You can terminate the ping command with Ctrl-c, after which a summary of the results is presented. Zero percent packet loss indicates that the connection was stable and did not time out. The results of a ping test can be very revealing, depending on which destination you test. For example, in the following diagram VM1 is experiencing some form of connectivity issue. The possible destinations are numbered in blue, and the conclusions drawn from a successful or failed result are presented: The internet - a common first step is to send a ping test to an internet location, such as www.example.com. Success : This test indicates that all the various network points in between the machine and the Internet are functioning correctly. This includes the virtual and physical network infrastructure. Failure : There are various ways in which a ping test to a distant internet location can fail. If other machines on your network are able to successfully ping the internet, that proves the internet connection is working, and the issue is likely within the configuration of the local machine. Physical router - This is the router interface that the network administrator designates to direct traffic onward to external destinations. Success : Ping tests to the physical router can determine whether the local network and underlying switches are functioning. These packets do not traverse the router, so they do not prove whether there is a routing issue present on the default gateway. Failure : This indicates that the problem lies between VM1 and the default gateway. The router/switches might be down, or you may be using an incorrect default gateway. Compare the configuration with that on another server that you know is functioning correctly. Try pinging another server on the local network. Neutron router - This is the virtual SDN (Software-defined Networking) router that Red Hat OpenStack Platform uses to direct the traffic of virtual machines. Success : Firewall is allowing ICMP traffic, the Networking node is online. Failure : Confirm whether ICMP traffic is permitted in the security group of the instance. 
Check that the Networking node is online, confirm that all the required services are running, and review the L3 agent log ( /var/log/neutron/l3-agent.log ). Physical switch - The physical switch manages traffic between nodes on the same physical network. Success : Traffic sent by a VM to the physical switch must pass through the virtual network infrastructure, indicating that this segment is functioning correctly. Failure : Check that the physical switch port is configured to trunk the required VLANs. VM2 - Attempt to ping a VM on the same subnet, on the same Compute node. Success : The NIC driver and basic IP configuration on VM1 are functional. Failure : Validate the network configuration on VM1. Or, firewall on VM2 might simply be blocking ping traffic. In addition, verify the virtual switching configuration and review the Open vSwitch log files. 6.2. Viewing current port status A basic troubleshooting task is to create an inventory of all of the ports attached to a router and determine the port status ( DOWN or ACTIVE ). Procedure To view all the ports that attach to the router named r1 , run the following command: Sample output To view the details of each port, run the following command. Include the port ID of the port that you want to view. The result includes the port status, indicated in the following example as having an ACTIVE state: Sample output Perform step 2 for each port to determine its status. 6.3. Troubleshooting connectivity to VLAN provider networks OpenStack Networking can trunk VLAN networks through to the SDN switches. Support for VLAN-tagged provider networks means that virtual instances can integrate with server subnets in the physical network. Procedure Ping the gateway with ping <gateway-IP-address> . Consider this example, in which a network is created with these commands: In this example, the gateway IP address is 192.168.120.254 . If the ping fails, do the following: Confirm that you have network flow for the associated VLAN. It is possible that the VLAN ID has not been set. In this example, OpenStack Networking is configured to trunk VLAN 120 to the provider network. (See --provider:segmentation_id=120 in the example in step 1.) Confirm the VLAN flow on the bridge interface using the command, ovs-ofctl dump-flows <bridge-name> . In this example the bridge is named br-ex : 6.4. Reviewing the VLAN configuration and log files To help validate or troubleshoot a deployment, you can: verify the registration and status of Red Hat Openstack Platform (RHOSP) Networking service (neutron) agents. validate network configuration values such as VLAN ranges. Procedure Use the openstack network agent list command to verify that the RHOSP Networking service agents are up and registered with the correct host names. Review /var/log/containers/neutron/openvswitch-agent.log . Look for confirmation that the creation process used the ovs-ofctl command to configure VLAN trunking. Validate external_network_bridge in the /etc/neutron/l3_agent.ini file. If there is a hardcoded value in the external_network_bridge parameter, you cannot use a provider network with the L3-agent, and you cannot create the necessary flows. The external_network_bridge value must be in the format `external_network_bridge = "" `. Check the network_vlan_ranges value in the /etc/neutron/plugin.ini file. For provider networks, do not specify the numeric VLAN ID. Specify IDs only when using VLAN isolated project networks. 
Validate the OVS agent configuration file bridge mappings , to confirm that the bridge mapped to phy-eno1 exists and is properly connected to eno1 . 6.5. Performing basic ICMP testing within the ML2/OVN namespace As a basic troubleshooting step, you can attempt to ping an instance from an OVN metadata interface that is on the same layer 2 network. Prerequisites RHOSP deployment, with ML2/OVN as the Networking service (neutron) default mechanism driver. Procedure Log in to the overcloud using your Red Hat OpenStack Platform credentials. Run the openstack server list command to obtain the name of a VM instance. Run the openstack server show command to determine the Compute node on which the instance is running. Example Sample output Log in to the Compute node host. Example Run the ip netns list command to see the OVN metadata namespaces. Sample output Using the metadata namespace run an ip netns exec command to ping the associated network. Example Sample output Additional resources server show in the Command Line Interface Reference 6.6. Troubleshooting from within project networks (ML2/OVS) In Red Hat Openstack Platform (RHOSP) ML2/OVS networks, all project traffic is contained within network namespaces so that projects can configure networks without interfering with each other. For example, network namespaces allow different projects to have the same subnet range of 192.168.1.1/24 without interference between them. Prerequisites RHOSP deployment, with ML2/OVS as the Networking service (neutron) default mechanism driver. Procedure Determine which network namespace contains the network, by listing all of the project networks using the openstack network list command: In this output, note that the ID for the web-servers network ( 9cb32fe0-d7fb-432c-b116-f483c6497b08 ). The command appends the network ID to the network namespace, which enables you to identify the namespace in the step. Sample output List all the network namespaces using the ip netns list command: The output contains a namespace that matches the web-servers network ID. In this output, the namespace is qdhcp-9cb32fe0-d7fb-432c-b116-f483c6497b08 . Sample output Examine the configuration of the web-servers network by running commands within the namespace, prefixing the troubleshooting commands with ip netns exec <namespace> . In this example, the route -n command is used. Example Sample output 6.7. Performing advanced ICMP testing within the namespace (ML2/OVS) You can troubleshoot Red Hat Openstack Platform (RHOSP) ML2/OVS networks, using a combination of tcpdump and ping commands. Prerequisites RHOSP deployment, with ML2/OVS as the Networking service (neutron) default mechanism driver. Procedure Capture ICMP traffic using the tcpdump command: Example In a separate command line window, perform a ping test to an external network: Example In the terminal running the tcpdump session, observe detailed results of the ping test. Sample output Note When you perform a tcpdump analysis of traffic, you see the responding packets heading to the router interface rather than to the VM instance. This is expected behavior, as the qrouter performs Destination Network Address Translation (DNAT) on the return packets. 6.8. Creating aliases for OVN troubleshooting commands You run OVN commands, such as ovn-nbctl show , in the ovn_controller container. The container runs on the Controller node and Compute nodes. To simplify your access to the commands, create and source a script that defines aliases. 
Prerequisites Red Hat OpenStack Platform deployment with ML2/OVN as the default mechanism driver. Procedure Log in to the Controller host as a user that has the necessary privileges to access the OVN containers. Example Create a shell script file that contains the ovn commands that you want to run. Example Add the ovn commands, and save the script file. Example In this example, the ovn-sbctl , ovn-nbctl , and ovn-trace commands have been added to an alias file: Repeat the steps in this procedure on the Compute host. Validation Source the script file. Example Run a command to confirm that your script file works properly. Example Sample output Additional resources ovn-nbctl --help command ovn-sbctl --help command ovn-trace --help command 6.9. Monitoring OVN logical flows OVN uses logical flows that are tables of flows with a priority, match, and actions. These logical flows are distributed to the ovn-controller running on each Red Hat Openstack Platform (RHOSP) Compute node. Use the ovn-sbctl lflow-list command on the Controller node to view the full set of logical flows. Prerequisites RHOSP deployment with ML2/OVN as the Networking service (neutron) default mechanism driver. Create an alias file for the OVN database commands. See, Section 6.8, "Creating aliases for OVN troubleshooting commands" . Procedure Log in to the Controller host as a user that has the necessary privileges to access the OVN containers. Example Source the alias file for the OVN database commands. For more information, see Section 6.8, "Creating aliases for OVN troubleshooting commands" . Example View the logical flows: Inspect the output. Sample output Key differences between OVN and OpenFlow include: OVN ports are logical entities that reside somewhere on a network, not physical ports on a single switch. OVN gives each table in the pipeline a name in addition to its number. The name describes the purpose of that stage in the pipeline. The OVN match syntax supports complex Boolean expressions. The actions supported in OVN logical flows extend beyond those of OpenFlow. You can implement higher level features, such as DHCP, in the OVN logical flow syntax. Run an OVN trace. The ovn-trace command can simulate how a packet travels through the OVN logical flows, or help you determine why a packet is dropped. Provide the ovn-trace command with the following parameters: DATAPATH The logical switch or logical router where the simulated packet starts. MICROFLOW The simulated packet, in the syntax used by the ovn-sb database. Example This example displays the --minimal output option on a simulated packet and shows that the packet reaches its destination: Sample output Example In more detail, the --summary output for this same simulated packet shows the full execution pipeline: Sample output The sample output shows: The packet enters the sw0 network from the sw0-port1 port and runs the ingress pipeline. The outport variable is set to sw0-port2 indicating that the intended destination for this packet is sw0-port2 . The packet is output from the ingress pipeline, which brings it to the egress pipeline for sw0 with the outport variable set to sw0-port2 . The output action is executed in the egress pipeline, which outputs the packet to the current value of the outport variable, which is sw0-port2 . Additional resources Section 6.8, "Creating aliases for OVN troubleshooting commands" ovn-sbctl --help command ovn-trace --help command 6.10. 
Monitoring OpenFlows You can use the ovs-ofctl dump-flows command to monitor the OpenFlow flows on a logical switch in your Red Hat OpenStack Platform (RHOSP) network. Prerequisites RHOSP deployment with ML2/OVN as the Networking service (neutron) default mechanism driver. Procedure Log in to the Controller host as a user that has the necessary privileges to access the OVN containers. Example Run the ovs-ofctl dump-flows command. Example Inspect the output, which resembles the following sample output. Sample output Additional resources ovs-ofctl --help command 6.11. Validating your ML2/OVN deployment Validating the ML2/OVN networks on your Red Hat OpenStack Platform (RHOSP) deployment consists of creating a test network and subnet and performing diagnostic tasks such as verifying that specific containers are running. Prerequisites New deployment of RHOSP, with ML2/OVN as the Networking service (neutron) default mechanism driver. Create an alias file for the OVN database commands. See Section 6.8, "Creating aliases for OVN troubleshooting commands". Procedure Create a test network and subnet. If you encounter errors, perform the steps that follow. Verify that the relevant containers are running on the Controller host: Log in to the Controller host as a user that has the necessary privileges to access the OVN containers. Example Enter the following command: As shown in the following sample, the output should list the OVN containers: Sample output Verify that the relevant containers are running on the Compute host: Log in to the Compute host as a user that has the necessary privileges to access the OVN containers. Example Enter the following command: As shown in the following sample, the output should list the OVN containers: Sample output Inspect log files for error messages. Source an alias file to run the OVN database commands. For more information, see Section 6.8, "Creating aliases for OVN troubleshooting commands". Example Query the northbound and southbound databases to check for responsiveness. Attempt to ping an instance from an OVN metadata interface that is on the same layer 2 network. For more information, see Section 6.5, "Performing basic ICMP testing within the ML2/OVN namespace". If you need to contact Red Hat for support, perform the steps described in this Red Hat Solution, How to collect all required logs for Red Hat Support to investigate an OpenStack issue. Additional resources network create in the Command Line Interface Reference subnet create in the Command Line Interface Reference Section 6.8, "Creating aliases for OVN troubleshooting commands" ovn-nbctl --help command ovn-sbctl --help command 6.12. Setting the logging mode for ML2/OVN Set ML2/OVN logging to debug mode for additional troubleshooting information. Set logging back to info mode to use less disk space when you do not need additional debugging information. Prerequisites Red Hat OpenStack Platform deployment with ML2/OVN as the default mechanism driver. Procedure Log in to the Controller or Compute node where you want to set the logging mode as a user that has the necessary privileges to access the OVN containers. Example Set the ML2/OVN logging mode. Debug logging mode Info logging mode Verification Confirm that the ovn-controller container log now contains debug messages: Sample output You should see recent log messages that contain the string |DBG|: Confirm that the ovn-controller container log contains a string similar to the following: Additional resources Section 6.14, "ML2/OVN log files" 6.13.
Fixing OVN controllers that fail to register on edge sites Issue OVN controllers on Red Hat OpenStack Platform (RHOSP) edge sites fail to register. Note This error can occur on RHOSP 17.0 ML2/OVN deployments that were updated from an earlier RHOSP version (RHOSP 16.1.7 and earlier, or RHOSP 16.2.0). Sample error The error encountered is similar to the following: Cause If the ovn-controller process replaces the hostname, it registers another chassis entry, which includes another encap entry. For more information, see BZ#1948472. Resolution Follow these steps to resolve the problem: If you have not already done so, create aliases for the necessary OVN database commands that you will use later in this procedure. For more information, see Creating aliases for OVN troubleshooting commands. Log in to the Controller host as a user that has the necessary privileges to access the OVN containers. Example Obtain the IP address from the /var/log/containers/openvswitch/ovn-controller.log file. Confirm that the IP address is correct: Delete the chassis that contains the IP address: Check the Chassis_Private table to confirm that the chassis has been removed: If any entries are reported, remove them with the following command: Restart the following containers: tripleo_ovn_controller tripleo_ovn_metadata_agent Verification Confirm that OVN agents are running: Sample output 6.14. ML2/OVN log files Log files track events related to the deployment and operation of the ML2/OVN mechanism driver. Table 6.1. ML2/OVN log files per node (all paths are under /var/log/containers/openvswitch): Controller, Compute, and Networking nodes - OVN controller: .../ovn-controller.log; Controller nodes - OVS northbound database server: .../ovsdb-server-nb.log; Controller nodes - OVS southbound database server: .../ovsdb-server-sb.log; Controller nodes - OVN northbound database server: .../ovn-northd.log | [
"ping www.example.com PING e1890.b.akamaiedge.net (125.56.247.214) 56(84) bytes of data. 64 bytes from a125-56.247-214.deploy.akamaitechnologies.com (125.56.247.214): icmp_seq=1 ttl=54 time=13.4 ms 64 bytes from a125-56.247-214.deploy.akamaitechnologies.com (125.56.247.214): icmp_seq=2 ttl=54 time=13.5 ms 64 bytes from a125-56.247-214.deploy.akamaitechnologies.com (125.56.247.214): icmp_seq=3 ttl=54 time=13.4 ms ^C",
"--- e1890.b.akamaiedge.net ping statistics --- 3 packets transmitted, 3 received, 0% packet loss, time 2003ms rtt min/avg/max/mdev = 13.461/13.498/13.541/0.100 ms",
"openstack port list --router r1",
"+--------------------------------------+------+-------------------+--------------------------------------------------------------------------------------+ | id | name | mac_address | fixed_ips | +--------------------------------------+------+-------------------+--------------------------------------------------------------------------------------+ | b58d26f0-cc03-43c1-ab23-ccdb1018252a | | fa:16:3e:94:a7:df | {\"subnet_id\": \"a592fdba-babd-48e0-96e8-2dd9117614d3\", \"ip_address\": \"192.168.200.1\"} | | c45e998d-98a1-4b23-bb41-5d24797a12a4 | | fa:16:3e:ee:6a:f7 | {\"subnet_id\": \"43f8f625-c773-4f18-a691-fd4ebfb3be54\", \"ip_address\": \"172.24.4.225\"} | +--------------------------------------+------+-------------------+--------------------------------------------------------------------------------------+",
"openstack port show b58d26f0-cc03-43c1-ab23-ccdb1018252a",
"+-----------------------+--------------------------------------------------------------------------------------+ | Field | Value | +-----------------------+--------------------------------------------------------------------------------------+ | admin_state_up | True | | allowed_address_pairs | | | binding:host_id | node.example.com | | binding:profile | {} | | binding:vif_details | {\"port_filter\": true, \"ovs_hybrid_plug\": true} | | binding:vif_type | ovs | | binding:vnic_type | normal | | device_id | 49c6ebdc-0e62-49ad-a9ca-58cea464472f | | device_owner | network:router_interface | | extra_dhcp_opts | | | fixed_ips | {\"subnet_id\": \"a592fdba-babd-48e0-96e8-2dd9117614d3\", \"ip_address\": \"192.168.200.1\"} | | id | b58d26f0-cc03-43c1-ab23-ccdb1018252a | | mac_address | fa:16:3e:94:a7:df | | name | | | network_id | 63c24160-47ac-4140-903d-8f9a670b0ca4 | | security_groups | | | status | ACTIVE | | tenant_id | d588d1112e0f496fb6cac22f9be45d49 | +-----------------------+--------------------------------------------------------------------------------------+",
"openstack network create --provider-network-type vlan --provider-physical-network phy-eno1 --provider-segment 120 provider openstack subnet create --no-dhcp --allocation-pool start=192.168.120.1,end=192.168.120.153 --gateway 192.168.120.254 --network provider public_subnet",
"ping 192.168.120.254",
"ovs-ofctl dump-flows br-ex NXST_FLOW reply (xid=0x4): cookie=0x0, duration=987.521s, table=0, n_packets=67897, n_bytes=14065247, idle_age=0, priority=1 actions=NORMAL cookie=0x0, duration=986.979s, table=0, n_packets=8, n_bytes=648, idle_age=977, priority=2,in_port=12 actions=drop",
"(overcloud)[stack@undercloud~]USD openstack network agent list +--------------------------------------+--------------------+-----------------------+-------+----------------+ | id | agent_type | host | alive | admin_state_up | +--------------------------------------+--------------------+-----------------------+-------+----------------+ | a08397a8-6600-437d-9013-b2c5b3730c0c | Metadata agent | rhelosp.example.com | :-) | True | | a5153cd2-5881-4fc8-b0ad-be0c97734e6a | L3 agent | rhelosp.example.com | :-) | True | | b54f0be7-c555-43da-ad19-5593a075ddf0 | DHCP agent | rhelosp.example.com | :-) | True | | d2be3cb0-4010-4458-b459-c5eb0d4d354b | Open vSwitch agent | rhelosp.example.com | :-) | True | +--------------------------------------+--------------------+-----------------------+-------+----------------+",
"openstack server show my_instance -c OS-EXT-SRV-ATTR:host -c addresses",
"+----------------------+-------------------------------------------------+ | Field | Value | +----------------------+-------------------------------------------------+ | OS-EXT-SRV-ATTR:host | compute0.overcloud.example.com | | addresses | finance-network1=192.0.2.2; provider- | | | storage=198.51.100.13 | +----------------------+-------------------------------------------------+",
"ssh [email protected]",
"ovnmeta-07384836-6ab1-4539-b23a-c581cf072011 (id: 1) ovnmeta-df9c28ea-c93a-4a60-b913-1e611d6f15aa (id: 0)",
"sudo ip netns exec ovnmeta-df9c28ea-c93a-4a60-b913-1e611d6f15aa ping 192.0.2.2",
"PING 192.0.2.2 (192.0.2.2) 56(84) bytes of data. 64 bytes from 192.0.2.2: icmp_seq=1 ttl=64 time=0.470 ms 64 bytes from 192.0.2.2: icmp_seq=2 ttl=64 time=0.483 ms 64 bytes from 192.0.2.2: icmp_seq=3 ttl=64 time=0.183 ms 64 bytes from 192.0.2.2: icmp_seq=4 ttl=64 time=0.296 ms 64 bytes from 192.0.2.2: icmp_seq=5 ttl=64 time=0.307 ms ^C --- 192.0.2.2 ping statistics --- 5 packets transmitted, 5 received, 0% packet loss, time 122ms rtt min/avg/max/mdev = 0.183/0.347/0.483/0.116 ms",
"openstack network list",
"+--------------------------------------+-------------+-------------------------------------------------------+ | id | name | subnets | +--------------------------------------+-------------+-------------------------------------------------------+ | 9cb32fe0-d7fb-432c-b116-f483c6497b08 | web-servers | 453d6769-fcde-4796-a205-66ee01680bba 192.168.212.0/24 | | a0cc8cdd-575f-4788-a3e3-5df8c6d0dd81 | private | c1e58160-707f-44a7-bf94-8694f29e74d3 10.0.0.0/24 | | baadd774-87e9-4e97-a055-326bb422b29b | private | 340c58e1-7fe7-4cf2-96a7-96a0a4ff3231 192.168.200.0/24 | | 24ba3a36-5645-4f46-be47-f6af2a7d8af2 | public | 35f3d2cb-6e4b-4527-a932-952a395c4bb3 172.24.4.224/28 | +--------------------------------------+-------------+-------------------------------------------------------+",
"ip netns list",
"qdhcp-9cb32fe0-d7fb-432c-b116-f483c6497b08 qrouter-31680a1c-9b3e-4906-bd69-cb39ed5faa01 qrouter-62ed467e-abae-4ab4-87f4-13a9937fbd6b qdhcp-a0cc8cdd-575f-4788-a3e3-5df8c6d0dd81 qrouter-e9281608-52a6-4576-86a6-92955df46f56",
"ip netns exec qrouter-62ed467e-abae-4ab4-87f4-13a9937fbd6b route -n",
"Kernel IP routing table Destination Gateway Genmask Flags Metric Ref Use Iface 0.0.0.0 172.24.4.225 0.0.0.0 UG 0 0 0 qg-8d128f89-87 172.24.4.224 0.0.0.0 255.255.255.240 U 0 0 0 qg-8d128f89-87 192.168.200.0 0.0.0.0 255.255.255.0 U 0 0 0 qr-8efd6357-96",
"ip netns exec qrouter-62ed467e-abae-4ab4-87f4-13a9937fbd6b tcpdump -qnntpi any icmp",
"ip netns exec qrouter-62ed467e-abae-4ab4-87f4-13a9937fbd6b ping www.example.com",
"tcpdump: listening on any, link-type LINUX_SLL (Linux cooked), capture size 65535 bytes IP (tos 0xc0, ttl 64, id 55447, offset 0, flags [none], proto ICMP (1), length 88) 172.24.4.228 > 172.24.4.228: ICMP host 192.168.200.20 unreachable, length 68 IP (tos 0x0, ttl 64, id 22976, offset 0, flags [DF], proto UDP (17), length 60) 172.24.4.228.40278 > 192.168.200.21: [bad udp cksum 0xfa7b -> 0xe235!] UDP, length 32",
"ssh [email protected]",
"vi ~/bin/ovn-alias.sh",
"REMOTE_IP=USD(sudo ovs-vsctl get open . external_ids:ovn-remote) NBDB=USD(echo USDREMOTE_IP | sed 's/6642/6641/g') SBDB=USDREMOTE_IP alias ovn-sbctl=\"sudo podman exec ovn_controller ovn-sbctl --db=USDSBDB\" alias ovn-nbctl=\"sudo podman exec ovn_controller ovn-nbctl --db=USDNBDB\" alias ovn-trace=\"sudo podman exec ovn_controller ovn-trace --db=USDSBDB\"",
"source ovn-alias.sh",
"ovn-nbctl show",
"switch 26ce22db-1795-41bd-b561-9827cbd81778 (neutron-f8e79863-6c58-43d0-8f7d-8ec4a423e13b) (aka internal_network) port 1913c3ae-8475-4b60-a479-df7bcce8d9c8 addresses: [\"fa:16:3e:33:c1:fc 192.168.254.76\"] port 1aabaee3-b944-4da2-bf0a-573215d3f3d9 addresses: [\"fa:16:3e:16:cb:ce 192.168.254.74\"] port 7e000980-59f9-4a0f-b76a-4fdf4e86f27b type: localport addresses: [\"fa:16:3e:c9:30:ed 192.168.254.2\"]",
"ssh [email protected]",
"source ~/ovn-alias.sh",
"ovn-sbctl lflow-list",
"Datapath: \"sw0\" (d7bf4a7b-e915-4502-8f9d-5995d33f5d10) Pipeline: ingress table=0 (ls_in_port_sec_l2 ), priority=100 , match=(eth.src[40]), action=(drop;) table=0 (ls_in_port_sec_l2 ), priority=100 , match=(vlan.present), action=(drop;) table=0 (ls_in_port_sec_l2 ), priority=50 , match=(inport == \"sw0-port1\" && eth.src == {00:00:00:00:00:01}), action=(next;) table=0 (ls_in_port_sec_l2 ), priority=50 , match=(inport == \"sw0-port2\" && eth.src == {00:00:00:00:00:02}), action=(next;) table=1 (ls_in_port_sec_ip ), priority=0 , match=(1), action=(next;) table=2 (ls_in_port_sec_nd ), priority=90 , match=(inport == \"sw0-port1\" && eth.src == 00:00:00:00:00:01 && arp.sha == 00:00:00:00:00:01), action=(next;) table=2 (ls_in_port_sec_nd ), priority=90 , match=(inport == \"sw0-port1\" && eth.src == 00:00:00:00:00:01 && ip6 && nd && ((nd.sll == 00:00:00:00:00:00 || nd.sll == 00:00:00:00:00:01) || ((nd.tll == 00:00:00:00:00:00 || nd.tll == 00:00:00:00:00:01)))), action=(next;) table=2 (ls_in_port_sec_nd ), priority=90 , match=(inport == \"sw0-port2\" && eth.src == 00:00:00:00:00:02 && arp.sha == 00:00:00:00:00:02), action=(next;) table=2 (ls_in_port_sec_nd ), priority=90 , match=(inport == \"sw0-port2\" && eth.src == 00:00:00:00:00:02 && ip6 && nd && ((nd.sll == 00:00:00:00:00:00 || nd.sll == 00:00:00:00:00:02) || ((nd.tll == 00:00:00:00:00:00 || nd.tll == 00:00:00:00:00:02)))), action=(next;) table=2 (ls_in_port_sec_nd ), priority=80 , match=(inport == \"sw0-port1\" && (arp || nd)), action=(drop;) table=2 (ls_in_port_sec_nd ), priority=80 , match=(inport == \"sw0-port2\" && (arp || nd)), action=(drop;) table=2 (ls_in_port_sec_nd ), priority=0 , match=(1), action=(next;) table=3 (ls_in_pre_acl ), priority=0, match=(1), action=(next;) table=4 (ls_in_pre_lb ), priority=0 , match=(1), action=(next;) table=5 (ls_in_pre_stateful ), priority=100 , match=(reg0[0] == 1), action=(ct_next;) table=5 (ls_in_pre_stateful ), priority=0 , match=(1), action=(next;) table=6 (ls_in_acl ), priority=0 , match=(1), action=(next;) table=7 (ls_in_qos_mark ), priority=0 , match=(1), action=(next;) table=8 (ls_in_lb ), priority=0 , match=(1), action=(next;) table=9 (ls_in_stateful ), priority=100 , match=(reg0[1] == 1), action=(ct_commit(ct_label=0/1); next;) table=9 (ls_in_stateful ), priority=100 , match=(reg0[2] == 1), action=(ct_lb;) table=9 (ls_in_stateful ), priority=0 , match=(1), action=(next;) table=10(ls_in_arp_rsp ), priority=0 , match=(1), action=(next;) table=11(ls_in_dhcp_options ), priority=0 , match=(1), action=(next;) table=12(ls_in_dhcp_response), priority=0 , match=(1), action=(next;) table=13(ls_in_l2_lkup ), priority=100 , match=(eth.mcast), action=(outport = \"_MC_flood\"; output;) table=13(ls_in_l2_lkup ), priority=50 , match=(eth.dst == 00:00:00:00:00:01), action=(outport = \"sw0-port1\"; output;) table=13(ls_in_l2_lkup ), priority=50 , match=(eth.dst == 00:00:00:00:00:02), action=(outport = \"sw0-port2\"; output;) Datapath: \"sw0\" (d7bf4a7b-e915-4502-8f9d-5995d33f5d10) Pipeline: egress table=0 (ls_out_pre_lb ), priority=0 , match=(1), action=(next;) table=1 (ls_out_pre_acl ), priority=0 , match=(1), action=(next;) table=2 (ls_out_pre_stateful), priority=100 , match=(reg0[0] == 1), action=(ct_next;) table=2 (ls_out_pre_stateful), priority=0 , match=(1), action=(next;) table=3 (ls_out_lb ), priority=0 , match=(1), action=(next;) table=4 (ls_out_acl ), priority=0 , match=(1), action=(next;) table=5 (ls_out_qos_mark ), priority=0 , match=(1), action=(next;) table=6 (ls_out_stateful ), priority=100 , 
match=(reg0[1] == 1), action=(ct_commit(ct_label=0/1); next;) table=6 (ls_out_stateful ), priority=100 , match=(reg0[2] == 1), action=(ct_lb;) table=6 (ls_out_stateful ), priority=0 , match=(1), action=(next;) table=7 (ls_out_port_sec_ip ), priority=0 , match=(1), action=(next;) table=8 (ls_out_port_sec_l2 ), priority=100 , match=(eth.mcast), action=(output;) table=8 (ls_out_port_sec_l2 ), priority=50 , match=(outport == \"sw0-port1\" && eth.dst == {00:00:00:00:00:01}), action=(output;) table=8 (ls_out_port_sec_l2 ), priority=50 , match=(outport == \"sw0-port2\" && eth.dst == {00:00:00:00:00:02}), action=(output;)",
"ovn-trace --minimal sw0 'inport == \"sw0-port1\" && eth.src == 00:00:00:00:00:01 && eth.dst == 00:00:00:00:00:02'",
"reg14=0x1,vlan_tci=0x0000,dl_src=00:00:00:00:00:01,dl_dst=00:00:00:00:00:02,dl_type=0x0000 output(\"sw0-port2\");",
"ovn-trace --summary sw0 'inport == \"sw0-port1\" && eth.src == 00:00:00:00:00:01 && eth.dst == 00:00:00:00:00:02'",
"reg14=0x1,vlan_tci=0x0000,dl_src=00:00:00:00:00:01,dl_dst=00:00:00:00:00:02,dl_type=0x0000 ingress(dp=\"sw0\", inport=\"sw0-port1\") { outport = \"sw0-port2\"; output; egress(dp=\"sw0\", inport=\"sw0-port1\", outport=\"sw0-port2\") { output; /* output to \"sw0-port2\", type \"\" */; }; };",
"ssh [email protected]",
"sudo ovs-ofctl dump-flows br-int",
"ovs-ofctl dump-flows br-int NXST_FLOW reply (xid=0x4): cookie=0x0, duration=72.132s, table=0, n_packets=0, n_bytes=0, idle_age=72, priority=10,in_port=1,dl_src=00:00:00:00:00:01 actions=resubmit(,1) cookie=0x0, duration=60.565s, table=0, n_packets=0, n_bytes=0, idle_age=60, priority=10,in_port=2,dl_src=00:00:00:00:00:02 actions=resubmit(,1) cookie=0x0, duration=28.127s, table=0, n_packets=0, n_bytes=0, idle_age=28, priority=0 actions=drop cookie=0x0, duration=13.887s, table=1, n_packets=0, n_bytes=0, idle_age=13, priority=0,in_port=1 actions=output:2 cookie=0x0, duration=4.023s, table=1, n_packets=0, n_bytes=0, idle_age=4, priority=0,in_port=2 actions=output:1",
"NETWORK_ID= USD(openstack network create internal_network | awk '/\\| id/ {print USD4}') openstack subnet create internal_subnet --network USDNETWORK_ID --dns-nameserver 8.8.8.8 --subnet-range 192.168.254.0/24",
"ssh [email protected]",
"sudo podman ps -a --format=\"{{.Names}}\"|grep ovn",
"container-puppet-ovn_controller ovn_cluster_north_db_server ovn_cluster_south_db_server ovn_cluster_northd ovn_controller",
"ssh [email protected]",
"sudo podman ps -a --format=\"{{.Names}}\"|grep ovn",
"container-puppet-ovn_controller ovn_metadata_agent ovn_controller",
"grep -r ERR /var/log/containers/openvswitch/ /var/log/containers/neutron/",
"source ~/ovn-alias.sh",
"ovn-nbctl show ovn-sbctl show",
"ssh [email protected]",
"sudo podman exec -it ovn_controller ovn-appctl -t ovn-controller vlog/set dbg",
"sudo podman exec -it ovn_controller ovn-appctl -t ovn-controller vlog/set info",
"sudo grep DBG /var/log/containers/openvswitch/ovn-controller.log",
"2022-09-29T20:52:54.638Z|00170|vconn(ovn_pinctrl0)|DBG|unix:/var/run/openvswitch/br-int.mgmt: received: OFPT_ECHO_REQUEST (OF1.5) (xid=0x0): 0 bytes of payload 2022-09-29T20:52:54.638Z|00171|vconn(ovn_pinctrl0)|DBG|unix:/var/run/openvswitch/br-int.mgmt: sent (Success): OFPT_ECHO_REPLY (OF1.5) (xid=0x0): 0 bytes of payload",
"...received request vlog/set[\"info\"], id=0",
"2021-04-12T09:14:48.994Z|04754|ovsdb_idl|WARN|transaction error: {\"details\":\"Transaction causes multiple rows in \\\"Encap\\\" table to have identical values (geneve and \\\"10.14.2.7\\\") for index on columns \\\"type\\\" and \\\"ip\\\". First row, with UUID 3973cad5-eb8a-4f29-85c3-c105d861c0e0, was inserted by this transaction. Second row, with UUID f06b71a8-4162-475b-8542-d27db3a9097a, existed in the database before this transaction and was not modified by the transaction.\",\"error\":\"constraint violation\"}",
"ssh [email protected]",
"ovn-sbctl list encap |grep -a3 <IP address from ovn-controller.log>",
"ovn-sbctl chassis-del <chassis-id>",
"ovn-sbctl find Chassis_private chassis=\"[]\"",
"ovn-sbctl destroy Chassis_Private <listed_id>",
"sudo systemctl restart tripleo_ovn_controller sudo systemctl restart tripleo_ovn_metadata_agent",
"openstack network agent list -c \"Agent Type\" -c State -c Binary",
"+------------------------------+-------+----------------------------+ | Agent Type | State | Binary | +------------------------------+-------+----------------------------+ | OVN Controller Gateway agent | UP | ovn-controller | | OVN Controller Gateway agent | UP | ovn-controller | | OVN Controller agent | UP | ovn-controller | | OVN Metadata agent | UP | neutron-ovn-metadata-agent | | OVN Controller Gateway agent | UP | ovn-controller | +------------------------------+-------+----------------------------+"
]
| https://docs.redhat.com/en/documentation/red_hat_openstack_platform/17.0/html/networking_guide/neutron-troubleshoot_rhosp-network |
Chapter 21. File Systems | Chapter 21. File Systems Btrfs file system, see the section called "Support of Btrfs File System". OverlayFS, see the section called "OverlayFS" | null | https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/7.1_release_notes/chap-tp-file_systems
11.2. Types | 11.2. Types The main permission control method used in SELinux targeted policy to provide advanced process isolation is Type Enforcement. All files and processes are labeled with a type: types define a SELinux domain for processes and a SELinux type for files. SELinux policy rules define how types access each other, whether it be a domain accessing a type, or a domain accessing another domain. Access is only allowed if a specific SELinux policy rule exists that allows it. The following types are used with rsync. Different types allow you to configure flexible access: public_content_t This is a generic type used for the location of files (and the actual files) to be shared via rsync. If a special directory is created to house files to be shared with rsync, the directory and its contents need to have this label applied to them. rsync_exec_t This type is used for the /usr/bin/rsync system binary. rsync_log_t This type is used for the rsync log file, located at /var/log/rsync.log by default. To change the location of the file rsync logs to, use the --log-file=FILE option to the rsync command at run-time. rsync_var_run_t This type is used for the rsyncd lock file, located at /var/run/rsyncd.lock. This lock file is used by the rsync server to manage connection limits. rsync_data_t This type is used for files and directories which you want to use as rsync domains and isolate them from the access scope of other services. Also, public_content_t is a general SELinux context type, which can be used when a file or a directory interacts with multiple services (for example, an FTP and NFS directory used as an rsync domain). rsync_etc_t This type is used for rsync-related files in the /etc/ directory. | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/managing_confined_services/sect-managing_confined_services-rsync-types
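For the public_content_t type described above, a typical way to label a dedicated rsync share directory and keep the label persistent is shown in the following sketch; the /var/rsync path is only an example and is not taken from the original text:

semanage fcontext -a -t public_content_t "/var/rsync(/.*)?"
restorecon -R -v /var/rsync

The semanage rule records the mapping in the local SELinux policy so the label survives a full relabel, and restorecon applies the label to the files that already exist in the directory.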
Chapter 39. File Systems | Chapter 39. File Systems The CephFS kernel client is now available Starting with Red Hat Enterprise Linux 7.3, the Ceph File System (CephFS) kernel module enables, as a Technology Preview, Red Hat Enterprise Linux nodes to mount Ceph File Systems from Red Hat Ceph Storage clusters. The kernel client in Red Hat Enterprise Linux is a more efficient alternative to the Filesystem in Userspace (FUSE) client included with Red hat Ceph Storage. Note that the kernel client currently lacks support for CephFS quotas. For more information, see the Ceph File System Guide for Red Hat Ceph Storage 2: https://access.redhat.com/documentation/en-us/red_hat_ceph_storage/2/html/ceph_file_system_guide_technology_preview/index (BZ#1205497) File system DAX is now available for ext4 and XFS as a Technology Preview Starting with Red Hat Enterprise Linux 7.3, Direct Access (DAX) provides, as a Technology Preview, a means for an application to directly map persistent memory into its address space. To use DAX, a system must have some form of persistent memory available, usually in the form of one or more Non-Volatile Dual In-line Memory Modules (NVDIMMs), and a file system that supports DAX must be created on the NVDIMM(s). Also, the file system must be mounted with the dax mount option. Then, an mmap of a file on the dax-mounted file system results in a direct mapping of storage into the application's address space. (BZ#1274459) pNFS Block Layout Support As a Technology Preview, the upstream code has been backported to the Red Hat Enterprise Linux client to provide pNFS block layout support. In addition, Red Hat Enterprise Linux 7.3 includes the Technology Preview of the pNFS SCSI layout. This feature is similar to pNFS block layout support, but limited only to SCSI devices, so it is easier to use. Therefore, Red Hat recommends the evaluation of the pNFS SCSI layout rather than the pNFS block layout for most use cases. (BZ#1111712) OverlayFS OverlayFS is a type of union file system. It allows the user to overlay one file system on top of another. Changes are recorded in the upper file system, while the lower file system remains unmodified. This allows multiple users to share a file-system image, such as a container or a DVD-ROM, where the base image is on read-only media. Refer to the kernel file Documentation/filesystems/overlayfs.txt for additional information. OverlayFS remains a Technology Preview in Red Hat Enterprise Linux 7.3 under most circumstances. As such, the kernel will log warnings when this technology is activated. Full support is available for OverlayFS when used with Docker under the following restrictions: OverlayFS is only supported for use as a Docker graph driver. Its use can only be supported for container COW content, not for persistent storage. Any persistent storage must be placed on non-OverlayFS volumes to be supported. Only default Docker configuration can be used; that is, one level of overlay, one lowerdir, and both lower and upper levels are on the same file system. Only XFS is currently supported for use as a lower layer file system. SELinux must be enabled and in enforcing mode on the physical machine, but must be disabled in the container when performing container separation; that is, /etc/sysconfig/docker must not contain --selinux-enabled. SELinux support for OverlayFS is being worked on upstream, and is expected in a future release. The OverlayFS kernel ABI and userspace behavior are not considered stable, and may see changes in future updates. 
In order to make the yum and rpm utilities work properly inside the container, the user should be using the yum-plugin-ovl packages. Note that OverlayFS provides a restricted set of the POSIX standards. Test your application thoroughly before deploying it with OverlayFS. Note that XFS file systems must be created with the -n ftype=1 option enabled for use as an overlay. With the rootfs and any file systems created during system installation, set the --mkfsoptions=-n ftype=1 parameters in the Anaconda kickstart. When creating a new file system after the installation, run the # mkfs -t xfs -n ftype=1 /PATH/TO/DEVICE command. To determine whether an existing file system is eligible for use as an overlay, run the # xfs_info /PATH/TO/DEVICE | grep ftype command to see if the ftype=1 option is enabled. There are also several known issues associated with OverlayFS as of Red Hat Enterprise Linux 7.3 release. For details, see Non-standard behavior in the Documentation/filesystems/overlayfs.txt file. (BZ#1206277) Support for NFSv4 clients with flexible file layout Support for flexible file layout on NFSv4 clients was first introduced in Red Hat Enterprise Linux 7.2 as a Technology Preview. This technology enables advanced features such as non-disruptive file mobility and client-side mirroring, which provides enhanced usability in areas such as databases, big data and virtualization. This feature has been updated in Red Hat Enterprise Linux 7.3, and it is still offered as a Technology Preview. See https://datatracker.ietf.org/doc/draft-ietf-nfsv4-flex-files/ for detailed information about NFS flexible file layout. (BZ#1217590) Btrfs file system The Btrfs (B-Tree) file system is supported as a Technology Preview in Red Hat Enterprise Linux 7.3. This file system offers advanced management, reliability, and scalability features. It enables users to create snapshots, it enables compression and integrated device management. (BZ#1205873) pNFS SCSI layouts client and server support is now provided Client and server support for parallel NFS (pNFS) SCSI layouts is provided as a Technology Preview starting with Red Hat Enterprise Linux 7.3. Building on the work of block layouts, the pNFS layout is defined across SCSI devices and contains sequential series of fixed-size blocks as logical units that must be capable of supporting SCSI persistent reservations. The Logical Unit (LU) devices are identified by their SCSI device identification, and fencing is handled through the assignment of reservations. (BZ#1305092) | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/7.3_release_notes/technology_previews_file_systems |
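As a concrete illustration of the ftype requirement described above, creating and checking an XFS file system for use as an OverlayFS lower layer might look like the following; the device name /dev/vdb1 is hypothetical:

# mkfs -t xfs -n ftype=1 /dev/vdb1
# xfs_info /dev/vdb1 | grep ftype

If the xfs_info output shows ftype=1, the file system is eligible for use as an overlay.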
3.4. Multi-port Services and Load Balancer | 3.4. Multi-port Services and Load Balancer LVS routers under any topology require extra configuration when creating multi-port Load Balancer services. Multi-port services can be created artificially by using firewall marks to bundle together different, but related protocols, such as HTTP (port 80) and HTTPS (port 443), or when Load Balancer is used with true multi-port protocols, such as FTP. In either case, the LVS router uses firewall marks to recognize that packets destined for different ports, but bearing the same firewall mark, should be handled identically. Also, when combined with persistence, firewall marks ensure connections from the client machine are routed to the same host, as long as the connections occur within the length of time specified by the persistence parameter. Although the mechanism used to balance the loads on the real servers, IPVS, can recognize the firewall marks assigned to a packet, it cannot itself assign firewall marks. The job of assigning firewall marks must be performed by the network packet filter, iptables . The default firewall administration tool in Red Hat Enterprise Linux 7 is firewalld , which can be used to configure iptables . If preferred, iptables can be used directly. See Red Hat Enterprise Linux 7 Security Guide for information on working with iptables in Red Hat Enterprise Linux 7. 3.4.1. Assigning Firewall Marks Using firewalld To assign firewall marks to a packet destined for a particular port, the administrator can use firewalld 's firewall-cmd utility. If required, confirm that firewalld is running: To start firewalld , enter: To ensure firewalld is enabled to start at system start: This section illustrates how to bundle HTTP and HTTPS as an example; however, FTP is another commonly clustered multi-port protocol. The basic rule to remember when using firewall marks is that for every protocol using a firewall mark in Keepalived there must be a commensurate firewall rule to assign marks to the network packets. Before creating network packet filter rules, make sure there are no rules already in place. To do this, open a shell prompt, login as root , and enter the following command: If no rich rules are present the prompt will instantly reappear. If firewalld is active and rich rules are present, it displays a set of rules. If the rules already in place are important, check the contents of /etc/firewalld/zones/ and copy any rules worth keeping to a safe place before proceeding. Delete unwanted rich rules using a command in the following format: firewall-cmd --zone= zone --remove-rich-rule=' rule ' --permanent The --permanent option makes the setting persistent, but the command will only take effect at system start. If required to make the setting take effect immediately, repeat the command omitting the --permanent option. The first load balancer related firewall rule to be configured is to allow VRRP traffic for the Keepalived service to function. Enter the following command: If the zone is omitted the default zone will be used. Below are rules which assign the same firewall mark, 80 , to incoming traffic destined for the floating IP address, n.n.n.n , on ports 80 and 443. If the zone is omitted the default zone will be used. See the Red Hat Enterprise Linux 7 Security Guide for more information on the use of firewalld 's rich language commands. 3.4.2. Assigning Firewall Marks Using iptables To assign firewall marks to a packet destined for a particular port, the administrator can use iptables . 
This section illustrates how to bundle HTTP and HTTPS as an example; however, FTP is another commonly clustered multi-port protocol. The basic rule to remember when using firewall marks is that for every protocol using a firewall mark in Keepalived there must be a commensurate firewall rule to assign marks to the network packets. Before creating network packet filter rules, make sure there are no rules already in place. To do this, open a shell prompt, log in as root, and enter the following command: /usr/sbin/service iptables status If iptables is not running, the prompt will instantly reappear. If iptables is active, it displays a set of rules. If rules are present, enter the following command: /sbin/service iptables stop If the rules already in place are important, check the contents of /etc/sysconfig/iptables and copy any rules worth keeping to a safe place before proceeding. The first step in configuring load-balancer-related firewall rules is to allow VRRP traffic for the Keepalived service to function. Below are rules which assign the same firewall mark, 80, to incoming traffic destined for the floating IP address, n.n.n.n, on ports 80 and 443. Note that you must log in as root and load the module for iptables before issuing rules for the first time. In the above iptables commands, n.n.n.n should be replaced with the floating IP for your HTTP and HTTPS virtual servers. These commands have the net effect of assigning any traffic addressed to the VIP on the appropriate ports a firewall mark of 80, which in turn is recognized by IPVS and forwarded appropriately. Warning The commands above will take effect immediately, but do not persist through a reboot of the system. | [
"systemctl status firewalld firewalld.service - firewalld - dynamic firewall daemon Loaded: loaded (/usr/lib/systemd/system/firewalld.service; enabled) Active: active (running) since Tue 2016-01-26 05:23:53 EST; 7h ago",
"systemctl start firewalld",
"systemctl enable firewalld",
"firewall-cmd --list-rich-rules",
"firewall-cmd --add-rich-rule='rule protocol value=\"vrrp\" accept' --permanent",
"firewall-cmd --add-rich-rule='rule family=\"ipv4\" destination address=\"n.n.n.n/32\" port port=\"80\" protocol=\"tcp\" mark set=\"80\"' --permanent firewall-cmd --add-rich-rule='rule family=\"ipv4\" destination address=\"n.n.n.n/32\" port port=\"443\" protocol=\"tcp\" mark set=\"80\"' --permanent firewall-cmd --reload success firewall-cmd --list-rich-rules rule protocol value=\"vrrp\" accept rule family=\"ipv4\" destination address=\"n.n.n.n/32\" port port=\"80\" protocol=\"tcp\" mark set=80 rule family=\"ipv4\" destination address=\"n.n.n.n/32\" port port=\"443\" protocol=\"tcp\" mark set=80",
"/usr/sbin/iptables -I INPUT -p vrrp -j ACCEPT",
"/usr/sbin/iptables -t mangle -A PREROUTING -p tcp -d n.n.n.n/32 -m multiport --dports 80,443 -j MARK --set-mark 80"
]
| https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/load_balancer_administration/s1-lvs-multi-vsa |
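Regarding the warning above that the iptables commands do not persist through a reboot: one common way to keep them on Red Hat Enterprise Linux 7 (a sketch, assuming the iptables-services package is installed) is to save the verified rule set and enable the iptables service:

iptables-save > /etc/sysconfig/iptables
systemctl enable iptables

Alternatively, define the marks with firewalld rich rules and the --permanent option, as shown earlier in this section.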
5.10. Configuring IP Address Masquerading | 5.10. Configuring IP Address Masquerading IP masquerading is a process where one computer acts as an IP gateway for a network. For masquerading, the gateway dynamically looks up the IP of the outgoing interface all the time and replaces the source address in the packets with this address. You use masquerading if the IP of the outgoing interface can change. A typical use case for masquerading is if a router replaces the private IP addresses, which are not routed on the internet, with the public dynamic IP address of the outgoing interface on the router. To check if IP masquerading is enabled (for example, for the external zone), enter the following command as root : The command prints yes with exit status 0 if enabled. It prints no with exit status 1 otherwise. If zone is omitted, the default zone will be used. To enable IP masquerading, enter the following command as root : To make this setting persistent, repeat the command adding the --permanent option. To disable IP masquerading, enter the following command as root : To make this setting persistent, repeat the command adding the --permanent option. For more information, see: Section 6.3.1, "The different NAT types: masquerading, source NAT, destination NAT, and redirect" Section 6.3.2, "Configuring masquerading using nftables" | [
"~]# firewall-cmd --zone=external --query-masquerade",
"~]# firewall-cmd --zone=external --add-masquerade",
"~]# firewall-cmd --zone=external --remove-masquerade"
]
| https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/security_guide/sec-Configuring_IP_Address_Masquerading |
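For the nftables-based masquerading referenced above, a minimal rule set might look like the following sketch; the outgoing interface name ens3 is an assumption:

nft add table ip nat
nft add chain ip nat postrouting '{ type nat hook postrouting priority 100 ; }'
nft add rule ip nat postrouting oifname "ens3" masquerade

This mirrors what firewall-cmd --zone=external --add-masquerade sets up: the source address of packets leaving ens3 is rewritten to the address of that interface.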
Support | Support OpenShift Container Platform 4.17 Getting support for OpenShift Container Platform Red Hat OpenShift Documentation Team | null | https://docs.redhat.com/en/documentation/openshift_container_platform/4.17/html/support/index |
Installing on GCP | Installing on GCP OpenShift Container Platform 4.16 Installing OpenShift Container Platform on Google Cloud Platform Red Hat OpenShift Documentation Team | [
"ssh-keygen -t ed25519 -N '' -f <path>/<file_name> 1",
"cat <path>/<file_name>.pub",
"cat ~/.ssh/id_ed25519.pub",
"eval \"USD(ssh-agent -s)\"",
"Agent pid 31874",
"ssh-add <path>/<file_name> 1",
"Identity added: /home/<you>/<path>/<file_name> (<computer_name>)",
"tar -xvf openshift-install-linux.tar.gz",
"./openshift-install create cluster --dir <installation_directory> \\ 1 --log-level=info 2",
"INFO Install complete! INFO To access the cluster as the system:admin user when using 'oc', run 'export KUBECONFIG=/home/myuser/install_dir/auth/kubeconfig' INFO Access the OpenShift web-console here: https://console-openshift-console.apps.mycluster.example.com INFO Login to the console with user: \"kubeadmin\", and password: \"password\" INFO Time elapsed: 36m22s",
"tar xvf <file>",
"echo USDPATH",
"oc <command>",
"C:\\> path",
"C:\\> oc <command>",
"echo USDPATH",
"oc <command>",
"export KUBECONFIG=<installation_directory>/auth/kubeconfig 1",
"oc whoami",
"system:admin",
"ssh-keygen -t ed25519 -N '' -f <path>/<file_name> 1",
"cat <path>/<file_name>.pub",
"cat ~/.ssh/id_ed25519.pub",
"eval \"USD(ssh-agent -s)\"",
"Agent pid 31874",
"ssh-add <path>/<file_name> 1",
"Identity added: /home/<you>/<path>/<file_name> (<computer_name>)",
"tar -xvf openshift-install-linux.tar.gz",
"./openshift-install create install-config --dir <installation_directory> 1",
"compute: - architecture: amd64 hyperthreading: Enabled name: worker platform: gcp: type: custom-6-20480 replicas: 2 controlPlane: architecture: amd64 hyperthreading: Enabled name: master platform: gcp: type: custom-6-20480 replicas: 3",
"controlPlane: platform: gcp: secureBoot: Enabled",
"compute: - platform: gcp: secureBoot: Enabled",
"platform: gcp: defaultMachinePlatform: secureBoot: Enabled",
"controlPlane: platform: gcp: confidentialCompute: Enabled 1 type: n2d-standard-8 2 onHostMaintenance: Terminate 3",
"compute: - platform: gcp: confidentialCompute: Enabled type: n2d-standard-8 onHostMaintenance: Terminate",
"platform: gcp: defaultMachinePlatform: confidentialCompute: Enabled type: n2d-standard-8 onHostMaintenance: Terminate",
"apiVersion: v1 baseDomain: example.com 1 credentialsMode: Mint 2 controlPlane: 3 4 hyperthreading: Enabled 5 name: master platform: gcp: type: n2-standard-4 zones: - us-central1-a - us-central1-c osDisk: diskType: pd-ssd diskSizeGB: 1024 encryptionKey: 6 kmsKey: name: worker-key keyRing: test-machine-keys location: global projectID: project-id tags: 7 - control-plane-tag1 - control-plane-tag2 osImage: 8 project: example-project-name name: example-image-name replicas: 3 compute: 9 10 - hyperthreading: Enabled 11 name: worker platform: gcp: type: n2-standard-4 zones: - us-central1-a - us-central1-c osDisk: diskType: pd-standard diskSizeGB: 128 encryptionKey: 12 kmsKey: name: worker-key keyRing: test-machine-keys location: global projectID: project-id tags: 13 - compute-tag1 - compute-tag2 osImage: 14 project: example-project-name name: example-image-name replicas: 3 metadata: name: test-cluster 15 networking: clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23 machineNetwork: - cidr: 10.0.0.0/16 networkType: OVNKubernetes 16 serviceNetwork: - 172.30.0.0/16 platform: gcp: projectID: openshift-production 17 region: us-central1 18 defaultMachinePlatform: tags: 19 - global-tag1 - global-tag2 osImage: 20 project: example-project-name name: example-image-name pullSecret: '{\"auths\": ...}' 21 fips: false 22 sshKey: ssh-ed25519 AAAA... 23",
"apiVersion: v1 baseDomain: my.domain.com proxy: httpProxy: http://<username>:<pswd>@<ip>:<port> 1 httpsProxy: https://<username>:<pswd>@<ip>:<port> 2 noProxy: example.com 3 additionalTrustBundle: | 4 -----BEGIN CERTIFICATE----- <MY_TRUSTED_CA_CERT> -----END CERTIFICATE----- additionalTrustBundlePolicy: <policy_to_add_additionalTrustBundle> 5",
"./openshift-install wait-for install-complete --log-level debug",
"apiVersion: v1 featureSet: TechPreviewNoUpgrade platform: gcp: userLabels: 1 - key: <label_key> 2 value: <label_value> 3 userTags: 4 - parentID: <OrganizationID/ProjectID> 5 key: <tag_key_short_name> value: <tag_value_short_name>",
"apiVersion: config.openshift.io/v1 kind: Infrastructure metadata: name: cluster spec: platformSpec: type: GCP status: infrastructureName: <cluster_id> 1 platform: GCP platformStatus: gcp: resourceLabels: - key: <label_key> value: <label_value> resourceTags: - key: <tag_key_short_name> parentID: <OrganizationID/ProjectID> value: <tag_value_short_name> type: GCP",
"tar xvf <file>",
"echo USDPATH",
"oc <command>",
"C:\\> path",
"C:\\> oc <command>",
"echo USDPATH",
"oc <command>",
"apiVersion: v1 baseDomain: example.com credentialsMode: Manual",
"openshift-install create manifests --dir <installation_directory>",
"RELEASE_IMAGE=USD(./openshift-install version | awk '/release image/ {print USD3}')",
"oc adm release extract --from=USDRELEASE_IMAGE --credentials-requests --included \\ 1 --install-config=<path_to_directory_with_installation_configuration>/install-config.yaml \\ 2 --to=<path_to_directory_for_credentials_requests> 3",
"apiVersion: cloudcredential.openshift.io/v1 kind: CredentialsRequest metadata: name: <component_credentials_request> namespace: openshift-cloud-credential-operator spec: providerSpec: apiVersion: cloudcredential.openshift.io/v1 kind: GCPProviderSpec predefinedRoles: - roles/storage.admin - roles/iam.serviceAccountUser skipServiceCheck: true",
"apiVersion: cloudcredential.openshift.io/v1 kind: CredentialsRequest metadata: name: <component_credentials_request> namespace: openshift-cloud-credential-operator spec: providerSpec: apiVersion: cloudcredential.openshift.io/v1 secretRef: name: <component_secret> namespace: <component_namespace>",
"apiVersion: v1 kind: Secret metadata: name: <component_secret> namespace: <component_namespace> data: service_account.json: <base64_encoded_gcp_service_account_file>",
"RELEASE_IMAGE=USD(./openshift-install version | awk '/release image/ {print USD3}')",
"CCO_IMAGE=USD(oc adm release info --image-for='cloud-credential-operator' USDRELEASE_IMAGE -a ~/.pull-secret)",
"oc image extract USDCCO_IMAGE --file=\"/usr/bin/ccoctl.<rhel_version>\" \\ 1 -a ~/.pull-secret",
"chmod 775 ccoctl.<rhel_version>",
"./ccoctl.rhel9",
"OpenShift credentials provisioning tool Usage: ccoctl [command] Available Commands: aws Manage credentials objects for AWS cloud azure Manage credentials objects for Azure gcp Manage credentials objects for Google cloud help Help about any command ibmcloud Manage credentials objects for IBM Cloud nutanix Manage credentials objects for Nutanix Flags: -h, --help help for ccoctl Use \"ccoctl [command] --help\" for more information about a command.",
"RELEASE_IMAGE=USD(./openshift-install version | awk '/release image/ {print USD3}')",
"oc adm release extract --from=USDRELEASE_IMAGE --credentials-requests --included \\ 1 --install-config=<path_to_directory_with_installation_configuration>/install-config.yaml \\ 2 --to=<path_to_directory_for_credentials_requests> 3",
"ccoctl gcp create-all --name=<name> \\ 1 --region=<gcp_region> \\ 2 --project=<gcp_project_id> \\ 3 --credentials-requests-dir=<path_to_credentials_requests_directory> 4",
"ls <path_to_ccoctl_output_dir>/manifests",
"cluster-authentication-02-config.yaml openshift-cloud-controller-manager-gcp-ccm-cloud-credentials-credentials.yaml openshift-cloud-credential-operator-cloud-credential-operator-gcp-ro-creds-credentials.yaml openshift-cloud-network-config-controller-cloud-credentials-credentials.yaml openshift-cluster-api-capg-manager-bootstrap-credentials-credentials.yaml openshift-cluster-csi-drivers-gcp-pd-cloud-credentials-credentials.yaml openshift-image-registry-installer-cloud-credentials-credentials.yaml openshift-ingress-operator-cloud-credentials-credentials.yaml openshift-machine-api-gcp-cloud-credentials-credentials.yaml",
"apiVersion: v1 baseDomain: example.com credentialsMode: Manual",
"openshift-install create manifests --dir <installation_directory>",
"cp /<path_to_ccoctl_output_dir>/manifests/* ./manifests/",
"cp -a /<path_to_ccoctl_output_dir>/tls .",
"apiVersion: v1 baseDomain: example.com controlPlane: compute: platform: gcp: osImage: project: redhat-marketplace-public name: redhat-coreos-ocp-413-x86-64-202305021736",
"./openshift-install create cluster --dir <installation_directory> \\ 1 --log-level=info 2",
"INFO Install complete! INFO To access the cluster as the system:admin user when using 'oc', run 'export KUBECONFIG=/home/myuser/install_dir/auth/kubeconfig' INFO Access the OpenShift web-console here: https://console-openshift-console.apps.mycluster.example.com INFO Login to the console with user: \"kubeadmin\", and password: \"password\" INFO Time elapsed: 36m22s",
"export KUBECONFIG=<installation_directory>/auth/kubeconfig 1",
"oc whoami",
"system:admin",
"ssh-keygen -t ed25519 -N '' -f <path>/<file_name> 1",
"cat <path>/<file_name>.pub",
"cat ~/.ssh/id_ed25519.pub",
"eval \"USD(ssh-agent -s)\"",
"Agent pid 31874",
"ssh-add <path>/<file_name> 1",
"Identity added: /home/<you>/<path>/<file_name> (<computer_name>)",
"tar -xvf openshift-install-linux.tar.gz",
"./openshift-install create install-config --dir <installation_directory> 1",
"compute: - architecture: amd64 hyperthreading: Enabled name: worker platform: gcp: type: custom-6-20480 replicas: 2 controlPlane: architecture: amd64 hyperthreading: Enabled name: master platform: gcp: type: custom-6-20480 replicas: 3",
"controlPlane: platform: gcp: secureBoot: Enabled",
"compute: - platform: gcp: secureBoot: Enabled",
"platform: gcp: defaultMachinePlatform: secureBoot: Enabled",
"controlPlane: platform: gcp: confidentialCompute: Enabled 1 type: n2d-standard-8 2 onHostMaintenance: Terminate 3",
"compute: - platform: gcp: confidentialCompute: Enabled type: n2d-standard-8 onHostMaintenance: Terminate",
"platform: gcp: defaultMachinePlatform: confidentialCompute: Enabled type: n2d-standard-8 onHostMaintenance: Terminate",
"apiVersion: v1 baseDomain: example.com 1 credentialsMode: Mint 2 controlPlane: 3 4 hyperthreading: Enabled 5 name: master platform: gcp: type: n2-standard-4 zones: - us-central1-a - us-central1-c osDisk: diskType: pd-ssd diskSizeGB: 1024 encryptionKey: 6 kmsKey: name: worker-key keyRing: test-machine-keys location: global projectID: project-id tags: 7 - control-plane-tag1 - control-plane-tag2 osImage: 8 project: example-project-name name: example-image-name replicas: 3 compute: 9 10 - hyperthreading: Enabled 11 name: worker platform: gcp: type: n2-standard-4 zones: - us-central1-a - us-central1-c osDisk: diskType: pd-standard diskSizeGB: 128 encryptionKey: 12 kmsKey: name: worker-key keyRing: test-machine-keys location: global projectID: project-id tags: 13 - compute-tag1 - compute-tag2 osImage: 14 project: example-project-name name: example-image-name replicas: 3 metadata: name: test-cluster 15 networking: 16 clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23 machineNetwork: - cidr: 10.0.0.0/16 networkType: OVNKubernetes 17 serviceNetwork: - 172.30.0.0/16 platform: gcp: projectID: openshift-production 18 region: us-central1 19 defaultMachinePlatform: tags: 20 - global-tag1 - global-tag2 osImage: 21 project: example-project-name name: example-image-name pullSecret: '{\"auths\": ...}' 22 fips: false 23 sshKey: ssh-ed25519 AAAA... 24",
"apiVersion: v1 baseDomain: my.domain.com proxy: httpProxy: http://<username>:<pswd>@<ip>:<port> 1 httpsProxy: https://<username>:<pswd>@<ip>:<port> 2 noProxy: example.com 3 additionalTrustBundle: | 4 -----BEGIN CERTIFICATE----- <MY_TRUSTED_CA_CERT> -----END CERTIFICATE----- additionalTrustBundlePolicy: <policy_to_add_additionalTrustBundle> 5",
"./openshift-install wait-for install-complete --log-level debug",
"tar xvf <file>",
"echo USDPATH",
"oc <command>",
"C:\\> path",
"C:\\> oc <command>",
"echo USDPATH",
"oc <command>",
"apiVersion: v1 baseDomain: example.com credentialsMode: Manual",
"openshift-install create manifests --dir <installation_directory>",
"RELEASE_IMAGE=USD(./openshift-install version | awk '/release image/ {print USD3}')",
"oc adm release extract --from=USDRELEASE_IMAGE --credentials-requests --included \\ 1 --install-config=<path_to_directory_with_installation_configuration>/install-config.yaml \\ 2 --to=<path_to_directory_for_credentials_requests> 3",
"apiVersion: cloudcredential.openshift.io/v1 kind: CredentialsRequest metadata: name: <component_credentials_request> namespace: openshift-cloud-credential-operator spec: providerSpec: apiVersion: cloudcredential.openshift.io/v1 kind: GCPProviderSpec predefinedRoles: - roles/storage.admin - roles/iam.serviceAccountUser skipServiceCheck: true",
"apiVersion: cloudcredential.openshift.io/v1 kind: CredentialsRequest metadata: name: <component_credentials_request> namespace: openshift-cloud-credential-operator spec: providerSpec: apiVersion: cloudcredential.openshift.io/v1 secretRef: name: <component_secret> namespace: <component_namespace>",
"apiVersion: v1 kind: Secret metadata: name: <component_secret> namespace: <component_namespace> data: service_account.json: <base64_encoded_gcp_service_account_file>",
"RELEASE_IMAGE=USD(./openshift-install version | awk '/release image/ {print USD3}')",
"CCO_IMAGE=USD(oc adm release info --image-for='cloud-credential-operator' USDRELEASE_IMAGE -a ~/.pull-secret)",
"oc image extract USDCCO_IMAGE --file=\"/usr/bin/ccoctl.<rhel_version>\" \\ 1 -a ~/.pull-secret",
"chmod 775 ccoctl.<rhel_version>",
"./ccoctl.rhel9",
"OpenShift credentials provisioning tool Usage: ccoctl [command] Available Commands: aws Manage credentials objects for AWS cloud azure Manage credentials objects for Azure gcp Manage credentials objects for Google cloud help Help about any command ibmcloud Manage credentials objects for IBM Cloud nutanix Manage credentials objects for Nutanix Flags: -h, --help help for ccoctl Use \"ccoctl [command] --help\" for more information about a command.",
"RELEASE_IMAGE=USD(./openshift-install version | awk '/release image/ {print USD3}')",
"oc adm release extract --from=USDRELEASE_IMAGE --credentials-requests --included \\ 1 --install-config=<path_to_directory_with_installation_configuration>/install-config.yaml \\ 2 --to=<path_to_directory_for_credentials_requests> 3",
"ccoctl gcp create-all --name=<name> \\ 1 --region=<gcp_region> \\ 2 --project=<gcp_project_id> \\ 3 --credentials-requests-dir=<path_to_credentials_requests_directory> 4",
"ls <path_to_ccoctl_output_dir>/manifests",
"cluster-authentication-02-config.yaml openshift-cloud-controller-manager-gcp-ccm-cloud-credentials-credentials.yaml openshift-cloud-credential-operator-cloud-credential-operator-gcp-ro-creds-credentials.yaml openshift-cloud-network-config-controller-cloud-credentials-credentials.yaml openshift-cluster-api-capg-manager-bootstrap-credentials-credentials.yaml openshift-cluster-csi-drivers-gcp-pd-cloud-credentials-credentials.yaml openshift-image-registry-installer-cloud-credentials-credentials.yaml openshift-ingress-operator-cloud-credentials-credentials.yaml openshift-machine-api-gcp-cloud-credentials-credentials.yaml",
"apiVersion: v1 baseDomain: example.com credentialsMode: Manual",
"openshift-install create manifests --dir <installation_directory>",
"cp /<path_to_ccoctl_output_dir>/manifests/* ./manifests/",
"cp -a /<path_to_ccoctl_output_dir>/tls .",
"./openshift-install create manifests --dir <installation_directory> 1",
"apiVersion: operator.openshift.io/v1 kind: Network metadata: name: cluster spec:",
"apiVersion: operator.openshift.io/v1 kind: Network metadata: name: cluster spec: defaultNetwork: ovnKubernetesConfig: ipsecConfig: mode: Full",
"rm -f openshift/99_openshift-cluster-api_master-machines-*.yaml openshift/99_openshift-cluster-api_worker-machineset-*.yaml",
"spec: clusterNetwork: - cidr: 10.128.0.0/19 hostPrefix: 23 - cidr: 10.128.32.0/19 hostPrefix: 23",
"spec: serviceNetwork: - 172.30.0.0/14",
"defaultNetwork: type: OVNKubernetes ovnKubernetesConfig: mtu: 1400 genevePort: 6081 ipsecConfig: mode: Full",
"kubeProxyConfig: proxyArguments: iptables-min-sync-period: - 0s",
"./openshift-install create cluster --dir <installation_directory> \\ 1 --log-level=info 2",
"INFO Install complete! INFO To access the cluster as the system:admin user when using 'oc', run 'export KUBECONFIG=/home/myuser/install_dir/auth/kubeconfig' INFO Access the OpenShift web-console here: https://console-openshift-console.apps.mycluster.example.com INFO Login to the console with user: \"kubeadmin\", and password: \"password\" INFO Time elapsed: 36m22s",
"export KUBECONFIG=<installation_directory>/auth/kubeconfig 1",
"oc whoami",
"system:admin",
"ssh-keygen -t ed25519 -N '' -f <path>/<file_name> 1",
"cat <path>/<file_name>.pub",
"cat ~/.ssh/id_ed25519.pub",
"eval \"USD(ssh-agent -s)\"",
"Agent pid 31874",
"ssh-add <path>/<file_name> 1",
"Identity added: /home/<you>/<path>/<file_name> (<computer_name>)",
"./openshift-install create install-config --dir <installation_directory> 1",
"pullSecret: '{\"auths\":{\"<mirror_host_name>:5000\": {\"auth\": \"<credentials>\",\"email\": \"[email protected]\"}}}'",
"additionalTrustBundle: | -----BEGIN CERTIFICATE----- ZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZ -----END CERTIFICATE-----",
"network: <existing_vpc> controlPlaneSubnet: <control_plane_subnet> computeSubnet: <compute_subnet>",
"imageContentSources: - mirrors: - <mirror_host_name>:5000/<repo_name>/release source: quay.io/openshift-release-dev/ocp-release - mirrors: - <mirror_host_name>:5000/<repo_name>/release source: registry.redhat.io/ocp/release",
"publish: Internal",
"compute: - architecture: amd64 hyperthreading: Enabled name: worker platform: gcp: type: custom-6-20480 replicas: 2 controlPlane: architecture: amd64 hyperthreading: Enabled name: master platform: gcp: type: custom-6-20480 replicas: 3",
"controlPlane: platform: gcp: secureBoot: Enabled",
"compute: - platform: gcp: secureBoot: Enabled",
"platform: gcp: defaultMachinePlatform: secureBoot: Enabled",
"controlPlane: platform: gcp: confidentialCompute: Enabled 1 type: n2d-standard-8 2 onHostMaintenance: Terminate 3",
"compute: - platform: gcp: confidentialCompute: Enabled type: n2d-standard-8 onHostMaintenance: Terminate",
"platform: gcp: defaultMachinePlatform: confidentialCompute: Enabled type: n2d-standard-8 onHostMaintenance: Terminate",
"apiVersion: v1 baseDomain: example.com 1 credentialsMode: Mint 2 controlPlane: 3 4 hyperthreading: Enabled 5 name: master platform: gcp: type: n2-standard-4 zones: - us-central1-a - us-central1-c osDisk: diskType: pd-ssd diskSizeGB: 1024 encryptionKey: 6 kmsKey: name: worker-key keyRing: test-machine-keys location: global projectID: project-id tags: 7 - control-plane-tag1 - control-plane-tag2 osImage: 8 project: example-project-name name: example-image-name replicas: 3 compute: 9 10 - hyperthreading: Enabled 11 name: worker platform: gcp: type: n2-standard-4 zones: - us-central1-a - us-central1-c osDisk: diskType: pd-standard diskSizeGB: 128 encryptionKey: 12 kmsKey: name: worker-key keyRing: test-machine-keys location: global projectID: project-id tags: 13 - compute-tag1 - compute-tag2 osImage: 14 project: example-project-name name: example-image-name replicas: 3 metadata: name: test-cluster 15 networking: clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23 machineNetwork: - cidr: 10.0.0.0/16 networkType: OVNKubernetes 16 serviceNetwork: - 172.30.0.0/16 platform: gcp: projectID: openshift-production 17 region: us-central1 18 defaultMachinePlatform: tags: 19 - global-tag1 - global-tag2 osImage: 20 project: example-project-name name: example-image-name network: existing_vpc 21 controlPlaneSubnet: control_plane_subnet 22 computeSubnet: compute_subnet 23 pullSecret: '{\"auths\":{\"<local_registry>\": {\"auth\": \"<credentials>\",\"email\": \"[email protected]\"}}}' 24 fips: false 25 sshKey: ssh-ed25519 AAAA... 26 additionalTrustBundle: | 27 -----BEGIN CERTIFICATE----- <MY_TRUSTED_CA_CERT> -----END CERTIFICATE----- imageContentSources: 28 - mirrors: - <local_registry>/<local_repository_name>/release source: quay.io/openshift-release-dev/ocp-release - mirrors: - <local_registry>/<local_repository_name>/release source: quay.io/openshift-release-dev/ocp-v4.0-art-dev",
"./openshift-install create manifests --dir <installation_directory> 1",
"touch <installation_directory>/manifests/cluster-ingress-default-ingresscontroller.yaml 1",
"ls <installation_directory>/manifests/cluster-ingress-default-ingresscontroller.yaml",
"cluster-ingress-default-ingresscontroller.yaml",
"apiVersion: operator.openshift.io/v1 kind: IngressController metadata: name: default namespace: openshift-ingress-operator spec: endpointPublishingStrategy: loadBalancer: providerParameters: gcp: clientAccess: Global 1 type: GCP scope: Internal 2 type: LoadBalancerService",
"apiVersion: v1 baseDomain: my.domain.com proxy: httpProxy: http://<username>:<pswd>@<ip>:<port> 1 httpsProxy: https://<username>:<pswd>@<ip>:<port> 2 noProxy: example.com 3 additionalTrustBundle: | 4 -----BEGIN CERTIFICATE----- <MY_TRUSTED_CA_CERT> -----END CERTIFICATE----- additionalTrustBundlePolicy: <policy_to_add_additionalTrustBundle> 5",
"./openshift-install wait-for install-complete --log-level debug",
"tar xvf <file>",
"echo USDPATH",
"oc <command>",
"C:\\> path",
"C:\\> oc <command>",
"echo USDPATH",
"oc <command>",
"apiVersion: v1 baseDomain: example.com credentialsMode: Manual",
"openshift-install create manifests --dir <installation_directory>",
"RELEASE_IMAGE=USD(./openshift-install version | awk '/release image/ {print USD3}')",
"oc adm release extract --from=USDRELEASE_IMAGE --credentials-requests --included \\ 1 --install-config=<path_to_directory_with_installation_configuration>/install-config.yaml \\ 2 --to=<path_to_directory_for_credentials_requests> 3",
"apiVersion: cloudcredential.openshift.io/v1 kind: CredentialsRequest metadata: name: <component_credentials_request> namespace: openshift-cloud-credential-operator spec: providerSpec: apiVersion: cloudcredential.openshift.io/v1 kind: GCPProviderSpec predefinedRoles: - roles/storage.admin - roles/iam.serviceAccountUser skipServiceCheck: true",
"apiVersion: cloudcredential.openshift.io/v1 kind: CredentialsRequest metadata: name: <component_credentials_request> namespace: openshift-cloud-credential-operator spec: providerSpec: apiVersion: cloudcredential.openshift.io/v1 secretRef: name: <component_secret> namespace: <component_namespace>",
"apiVersion: v1 kind: Secret metadata: name: <component_secret> namespace: <component_namespace> data: service_account.json: <base64_encoded_gcp_service_account_file>",
"RELEASE_IMAGE=USD(./openshift-install version | awk '/release image/ {print USD3}')",
"CCO_IMAGE=USD(oc adm release info --image-for='cloud-credential-operator' USDRELEASE_IMAGE -a ~/.pull-secret)",
"oc image extract USDCCO_IMAGE --file=\"/usr/bin/ccoctl.<rhel_version>\" \\ 1 -a ~/.pull-secret",
"chmod 775 ccoctl.<rhel_version>",
"./ccoctl.rhel9",
"OpenShift credentials provisioning tool Usage: ccoctl [command] Available Commands: aws Manage credentials objects for AWS cloud azure Manage credentials objects for Azure gcp Manage credentials objects for Google cloud help Help about any command ibmcloud Manage credentials objects for IBM Cloud nutanix Manage credentials objects for Nutanix Flags: -h, --help help for ccoctl Use \"ccoctl [command] --help\" for more information about a command.",
"RELEASE_IMAGE=USD(./openshift-install version | awk '/release image/ {print USD3}')",
"oc adm release extract --from=USDRELEASE_IMAGE --credentials-requests --included \\ 1 --install-config=<path_to_directory_with_installation_configuration>/install-config.yaml \\ 2 --to=<path_to_directory_for_credentials_requests> 3",
"ccoctl gcp create-all --name=<name> \\ 1 --region=<gcp_region> \\ 2 --project=<gcp_project_id> \\ 3 --credentials-requests-dir=<path_to_credentials_requests_directory> 4",
"ls <path_to_ccoctl_output_dir>/manifests",
"cluster-authentication-02-config.yaml openshift-cloud-controller-manager-gcp-ccm-cloud-credentials-credentials.yaml openshift-cloud-credential-operator-cloud-credential-operator-gcp-ro-creds-credentials.yaml openshift-cloud-network-config-controller-cloud-credentials-credentials.yaml openshift-cluster-api-capg-manager-bootstrap-credentials-credentials.yaml openshift-cluster-csi-drivers-gcp-pd-cloud-credentials-credentials.yaml openshift-image-registry-installer-cloud-credentials-credentials.yaml openshift-ingress-operator-cloud-credentials-credentials.yaml openshift-machine-api-gcp-cloud-credentials-credentials.yaml",
"apiVersion: v1 baseDomain: example.com credentialsMode: Manual",
"openshift-install create manifests --dir <installation_directory>",
"cp /<path_to_ccoctl_output_dir>/manifests/* ./manifests/",
"cp -a /<path_to_ccoctl_output_dir>/tls .",
"./openshift-install create cluster --dir <installation_directory> \\ 1 --log-level=info 2",
"INFO Install complete! INFO To access the cluster as the system:admin user when using 'oc', run 'export KUBECONFIG=/home/myuser/install_dir/auth/kubeconfig' INFO Access the OpenShift web-console here: https://console-openshift-console.apps.mycluster.example.com INFO Login to the console with user: \"kubeadmin\", and password: \"password\" INFO Time elapsed: 36m22s",
"export KUBECONFIG=<installation_directory>/auth/kubeconfig 1",
"oc whoami",
"system:admin",
"oc patch OperatorHub cluster --type json -p '[{\"op\": \"add\", \"path\": \"/spec/disableAllDefaultSources\", \"value\": true}]'",
"ssh-keygen -t ed25519 -N '' -f <path>/<file_name> 1",
"cat <path>/<file_name>.pub",
"cat ~/.ssh/id_ed25519.pub",
"eval \"USD(ssh-agent -s)\"",
"Agent pid 31874",
"ssh-add <path>/<file_name> 1",
"Identity added: /home/<you>/<path>/<file_name> (<computer_name>)",
"tar -xvf openshift-install-linux.tar.gz",
"./openshift-install create install-config --dir <installation_directory> 1",
"compute: - architecture: amd64 hyperthreading: Enabled name: worker platform: gcp: type: custom-6-20480 replicas: 2 controlPlane: architecture: amd64 hyperthreading: Enabled name: master platform: gcp: type: custom-6-20480 replicas: 3",
"controlPlane: platform: gcp: secureBoot: Enabled",
"compute: - platform: gcp: secureBoot: Enabled",
"platform: gcp: defaultMachinePlatform: secureBoot: Enabled",
"controlPlane: platform: gcp: confidentialCompute: Enabled 1 type: n2d-standard-8 2 onHostMaintenance: Terminate 3",
"compute: - platform: gcp: confidentialCompute: Enabled type: n2d-standard-8 onHostMaintenance: Terminate",
"platform: gcp: defaultMachinePlatform: confidentialCompute: Enabled type: n2d-standard-8 onHostMaintenance: Terminate",
"apiVersion: v1 baseDomain: example.com 1 credentialsMode: Mint 2 controlPlane: 3 4 hyperthreading: Enabled 5 name: master platform: gcp: type: n2-standard-4 zones: - us-central1-a - us-central1-c osDisk: diskType: pd-ssd diskSizeGB: 1024 encryptionKey: 6 kmsKey: name: worker-key keyRing: test-machine-keys location: global projectID: project-id tags: 7 - control-plane-tag1 - control-plane-tag2 osImage: 8 project: example-project-name name: example-image-name replicas: 3 compute: 9 10 - hyperthreading: Enabled 11 name: worker platform: gcp: type: n2-standard-4 zones: - us-central1-a - us-central1-c osDisk: diskType: pd-standard diskSizeGB: 128 encryptionKey: 12 kmsKey: name: worker-key keyRing: test-machine-keys location: global projectID: project-id tags: 13 - compute-tag1 - compute-tag2 osImage: 14 project: example-project-name name: example-image-name replicas: 3 metadata: name: test-cluster 15 networking: clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23 machineNetwork: - cidr: 10.0.0.0/16 networkType: OVNKubernetes 16 serviceNetwork: - 172.30.0.0/16 platform: gcp: projectID: openshift-production 17 region: us-central1 18 defaultMachinePlatform: tags: 19 - global-tag1 - global-tag2 osImage: 20 project: example-project-name name: example-image-name network: existing_vpc 21 controlPlaneSubnet: control_plane_subnet 22 computeSubnet: compute_subnet 23 pullSecret: '{\"auths\": ...}' 24 fips: false 25 sshKey: ssh-ed25519 AAAA... 26",
"./openshift-install create manifests --dir <installation_directory> 1",
"touch <installation_directory>/manifests/cluster-ingress-default-ingresscontroller.yaml 1",
"ls <installation_directory>/manifests/cluster-ingress-default-ingresscontroller.yaml",
"cluster-ingress-default-ingresscontroller.yaml",
"apiVersion: operator.openshift.io/v1 kind: IngressController metadata: name: default namespace: openshift-ingress-operator spec: endpointPublishingStrategy: loadBalancer: providerParameters: gcp: clientAccess: Global 1 type: GCP scope: Internal 2 type: LoadBalancerService",
"apiVersion: v1 baseDomain: my.domain.com proxy: httpProxy: http://<username>:<pswd>@<ip>:<port> 1 httpsProxy: https://<username>:<pswd>@<ip>:<port> 2 noProxy: example.com 3 additionalTrustBundle: | 4 -----BEGIN CERTIFICATE----- <MY_TRUSTED_CA_CERT> -----END CERTIFICATE----- additionalTrustBundlePolicy: <policy_to_add_additionalTrustBundle> 5",
"./openshift-install wait-for install-complete --log-level debug",
"tar xvf <file>",
"echo USDPATH",
"oc <command>",
"C:\\> path",
"C:\\> oc <command>",
"echo USDPATH",
"oc <command>",
"apiVersion: v1 baseDomain: example.com credentialsMode: Manual",
"openshift-install create manifests --dir <installation_directory>",
"RELEASE_IMAGE=USD(./openshift-install version | awk '/release image/ {print USD3}')",
"oc adm release extract --from=USDRELEASE_IMAGE --credentials-requests --included \\ 1 --install-config=<path_to_directory_with_installation_configuration>/install-config.yaml \\ 2 --to=<path_to_directory_for_credentials_requests> 3",
"apiVersion: cloudcredential.openshift.io/v1 kind: CredentialsRequest metadata: name: <component_credentials_request> namespace: openshift-cloud-credential-operator spec: providerSpec: apiVersion: cloudcredential.openshift.io/v1 kind: GCPProviderSpec predefinedRoles: - roles/storage.admin - roles/iam.serviceAccountUser skipServiceCheck: true",
"apiVersion: cloudcredential.openshift.io/v1 kind: CredentialsRequest metadata: name: <component_credentials_request> namespace: openshift-cloud-credential-operator spec: providerSpec: apiVersion: cloudcredential.openshift.io/v1 secretRef: name: <component_secret> namespace: <component_namespace>",
"apiVersion: v1 kind: Secret metadata: name: <component_secret> namespace: <component_namespace> data: service_account.json: <base64_encoded_gcp_service_account_file>",
"RELEASE_IMAGE=USD(./openshift-install version | awk '/release image/ {print USD3}')",
"CCO_IMAGE=USD(oc adm release info --image-for='cloud-credential-operator' USDRELEASE_IMAGE -a ~/.pull-secret)",
"oc image extract USDCCO_IMAGE --file=\"/usr/bin/ccoctl.<rhel_version>\" \\ 1 -a ~/.pull-secret",
"chmod 775 ccoctl.<rhel_version>",
"./ccoctl.rhel9",
"OpenShift credentials provisioning tool Usage: ccoctl [command] Available Commands: aws Manage credentials objects for AWS cloud azure Manage credentials objects for Azure gcp Manage credentials objects for Google cloud help Help about any command ibmcloud Manage credentials objects for IBM Cloud nutanix Manage credentials objects for Nutanix Flags: -h, --help help for ccoctl Use \"ccoctl [command] --help\" for more information about a command.",
"RELEASE_IMAGE=USD(./openshift-install version | awk '/release image/ {print USD3}')",
"oc adm release extract --from=USDRELEASE_IMAGE --credentials-requests --included \\ 1 --install-config=<path_to_directory_with_installation_configuration>/install-config.yaml \\ 2 --to=<path_to_directory_for_credentials_requests> 3",
"ccoctl gcp create-all --name=<name> \\ 1 --region=<gcp_region> \\ 2 --project=<gcp_project_id> \\ 3 --credentials-requests-dir=<path_to_credentials_requests_directory> 4",
"ls <path_to_ccoctl_output_dir>/manifests",
"cluster-authentication-02-config.yaml openshift-cloud-controller-manager-gcp-ccm-cloud-credentials-credentials.yaml openshift-cloud-credential-operator-cloud-credential-operator-gcp-ro-creds-credentials.yaml openshift-cloud-network-config-controller-cloud-credentials-credentials.yaml openshift-cluster-api-capg-manager-bootstrap-credentials-credentials.yaml openshift-cluster-csi-drivers-gcp-pd-cloud-credentials-credentials.yaml openshift-image-registry-installer-cloud-credentials-credentials.yaml openshift-ingress-operator-cloud-credentials-credentials.yaml openshift-machine-api-gcp-cloud-credentials-credentials.yaml",
"apiVersion: v1 baseDomain: example.com credentialsMode: Manual",
"openshift-install create manifests --dir <installation_directory>",
"cp /<path_to_ccoctl_output_dir>/manifests/* ./manifests/",
"cp -a /<path_to_ccoctl_output_dir>/tls .",
"./openshift-install create cluster --dir <installation_directory> \\ 1 --log-level=info 2",
"INFO Install complete! INFO To access the cluster as the system:admin user when using 'oc', run 'export KUBECONFIG=/home/myuser/install_dir/auth/kubeconfig' INFO Access the OpenShift web-console here: https://console-openshift-console.apps.mycluster.example.com INFO Login to the console with user: \"kubeadmin\", and password: \"password\" INFO Time elapsed: 36m22s",
"export KUBECONFIG=<installation_directory>/auth/kubeconfig 1",
"oc whoami",
"system:admin",
"ssh-keygen -t ed25519 -N '' -f <path>/<file_name> 1",
"cat <path>/<file_name>.pub",
"cat ~/.ssh/id_ed25519.pub",
"eval \"USD(ssh-agent -s)\"",
"Agent pid 31874",
"ssh-add <path>/<file_name> 1",
"Identity added: /home/<you>/<path>/<file_name> (<computer_name>)",
"tar -xvf openshift-install-linux.tar.gz",
"mkdir <installation_directory>",
"controlPlane: platform: gcp: secureBoot: Enabled",
"compute: - platform: gcp: secureBoot: Enabled",
"platform: gcp: defaultMachinePlatform: secureBoot: Enabled",
"controlPlane: platform: gcp: confidentialCompute: Enabled 1 type: n2d-standard-8 2 onHostMaintenance: Terminate 3",
"compute: - platform: gcp: confidentialCompute: Enabled type: n2d-standard-8 onHostMaintenance: Terminate",
"platform: gcp: defaultMachinePlatform: confidentialCompute: Enabled type: n2d-standard-8 onHostMaintenance: Terminate",
"apiVersion: v1 baseDomain: example.com credentialsMode: Passthrough 1 metadata: name: cluster_name platform: gcp: computeSubnet: shared-vpc-subnet-1 2 controlPlaneSubnet: shared-vpc-subnet-2 3 network: shared-vpc 4 networkProjectID: host-project-name 5 projectID: service-project-name 6 region: us-east1 defaultMachinePlatform: tags: 7 - global-tag1 controlPlane: name: master platform: gcp: tags: 8 - control-plane-tag1 type: n2-standard-4 zones: - us-central1-a - us-central1-c replicas: 3 compute: - name: worker platform: gcp: tags: 9 - compute-tag1 type: n2-standard-4 zones: - us-central1-a - us-central1-c replicas: 3 networking: clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23 machineNetwork: - cidr: 10.0.0.0/16 pullSecret: '{\"auths\": ...}' sshKey: ssh-ed25519 AAAA... 10",
"apiVersion: v1 baseDomain: my.domain.com proxy: httpProxy: http://<username>:<pswd>@<ip>:<port> 1 httpsProxy: https://<username>:<pswd>@<ip>:<port> 2 noProxy: example.com 3 additionalTrustBundle: | 4 -----BEGIN CERTIFICATE----- <MY_TRUSTED_CA_CERT> -----END CERTIFICATE----- additionalTrustBundlePolicy: <policy_to_add_additionalTrustBundle> 5",
"./openshift-install wait-for install-complete --log-level debug",
"tar xvf <file>",
"echo USDPATH",
"oc <command>",
"C:\\> path",
"C:\\> oc <command>",
"echo USDPATH",
"oc <command>",
"apiVersion: v1 baseDomain: example.com credentialsMode: Manual",
"openshift-install create manifests --dir <installation_directory>",
"RELEASE_IMAGE=USD(./openshift-install version | awk '/release image/ {print USD3}')",
"oc adm release extract --from=USDRELEASE_IMAGE --credentials-requests --included \\ 1 --install-config=<path_to_directory_with_installation_configuration>/install-config.yaml \\ 2 --to=<path_to_directory_for_credentials_requests> 3",
"apiVersion: cloudcredential.openshift.io/v1 kind: CredentialsRequest metadata: name: <component_credentials_request> namespace: openshift-cloud-credential-operator spec: providerSpec: apiVersion: cloudcredential.openshift.io/v1 kind: GCPProviderSpec predefinedRoles: - roles/storage.admin - roles/iam.serviceAccountUser skipServiceCheck: true",
"apiVersion: cloudcredential.openshift.io/v1 kind: CredentialsRequest metadata: name: <component_credentials_request> namespace: openshift-cloud-credential-operator spec: providerSpec: apiVersion: cloudcredential.openshift.io/v1 secretRef: name: <component_secret> namespace: <component_namespace>",
"apiVersion: v1 kind: Secret metadata: name: <component_secret> namespace: <component_namespace> data: service_account.json: <base64_encoded_gcp_service_account_file>",
"RELEASE_IMAGE=USD(./openshift-install version | awk '/release image/ {print USD3}')",
"CCO_IMAGE=USD(oc adm release info --image-for='cloud-credential-operator' USDRELEASE_IMAGE -a ~/.pull-secret)",
"oc image extract USDCCO_IMAGE --file=\"/usr/bin/ccoctl.<rhel_version>\" \\ 1 -a ~/.pull-secret",
"chmod 775 ccoctl.<rhel_version>",
"./ccoctl.rhel9",
"OpenShift credentials provisioning tool Usage: ccoctl [command] Available Commands: aws Manage credentials objects for AWS cloud azure Manage credentials objects for Azure gcp Manage credentials objects for Google cloud help Help about any command ibmcloud Manage credentials objects for IBM Cloud nutanix Manage credentials objects for Nutanix Flags: -h, --help help for ccoctl Use \"ccoctl [command] --help\" for more information about a command.",
"RELEASE_IMAGE=USD(./openshift-install version | awk '/release image/ {print USD3}')",
"oc adm release extract --from=USDRELEASE_IMAGE --credentials-requests --included \\ 1 --install-config=<path_to_directory_with_installation_configuration>/install-config.yaml \\ 2 --to=<path_to_directory_for_credentials_requests> 3",
"ccoctl gcp create-all --name=<name> \\ 1 --region=<gcp_region> \\ 2 --project=<gcp_project_id> \\ 3 --credentials-requests-dir=<path_to_credentials_requests_directory> 4",
"ls <path_to_ccoctl_output_dir>/manifests",
"cluster-authentication-02-config.yaml openshift-cloud-controller-manager-gcp-ccm-cloud-credentials-credentials.yaml openshift-cloud-credential-operator-cloud-credential-operator-gcp-ro-creds-credentials.yaml openshift-cloud-network-config-controller-cloud-credentials-credentials.yaml openshift-cluster-api-capg-manager-bootstrap-credentials-credentials.yaml openshift-cluster-csi-drivers-gcp-pd-cloud-credentials-credentials.yaml openshift-image-registry-installer-cloud-credentials-credentials.yaml openshift-ingress-operator-cloud-credentials-credentials.yaml openshift-machine-api-gcp-cloud-credentials-credentials.yaml",
"apiVersion: v1 baseDomain: example.com credentialsMode: Manual",
"openshift-install create manifests --dir <installation_directory>",
"cp /<path_to_ccoctl_output_dir>/manifests/* ./manifests/",
"cp -a /<path_to_ccoctl_output_dir>/tls .",
"./openshift-install create cluster --dir <installation_directory> \\ 1 --log-level=info 2",
"INFO Install complete! INFO To access the cluster as the system:admin user when using 'oc', run 'export KUBECONFIG=/home/myuser/install_dir/auth/kubeconfig' INFO Access the OpenShift web-console here: https://console-openshift-console.apps.mycluster.example.com INFO Login to the console with user: \"kubeadmin\", and password: \"password\" INFO Time elapsed: 36m22s",
"export KUBECONFIG=<installation_directory>/auth/kubeconfig 1",
"oc whoami",
"system:admin",
"ssh-keygen -t ed25519 -N '' -f <path>/<file_name> 1",
"cat <path>/<file_name>.pub",
"cat ~/.ssh/id_ed25519.pub",
"eval \"USD(ssh-agent -s)\"",
"Agent pid 31874",
"ssh-add <path>/<file_name> 1",
"Identity added: /home/<you>/<path>/<file_name> (<computer_name>)",
"tar -xvf openshift-install-linux.tar.gz",
"mkdir <installation_directory>",
"compute: - architecture: amd64 hyperthreading: Enabled name: worker platform: gcp: type: custom-6-20480 replicas: 2 controlPlane: architecture: amd64 hyperthreading: Enabled name: master platform: gcp: type: custom-6-20480 replicas: 3",
"controlPlane: platform: gcp: secureBoot: Enabled",
"compute: - platform: gcp: secureBoot: Enabled",
"platform: gcp: defaultMachinePlatform: secureBoot: Enabled",
"controlPlane: platform: gcp: confidentialCompute: Enabled 1 type: n2d-standard-8 2 onHostMaintenance: Terminate 3",
"compute: - platform: gcp: confidentialCompute: Enabled type: n2d-standard-8 onHostMaintenance: Terminate",
"platform: gcp: defaultMachinePlatform: confidentialCompute: Enabled type: n2d-standard-8 onHostMaintenance: Terminate",
"apiVersion: v1 baseDomain: example.com 1 credentialsMode: Mint 2 controlPlane: 3 4 hyperthreading: Enabled 5 name: master platform: gcp: type: n2-standard-4 zones: - us-central1-a - us-central1-c osDisk: diskType: pd-ssd diskSizeGB: 1024 encryptionKey: 6 kmsKey: name: worker-key keyRing: test-machine-keys location: global projectID: project-id tags: 7 - control-plane-tag1 - control-plane-tag2 osImage: 8 project: example-project-name name: example-image-name replicas: 3 compute: 9 10 - hyperthreading: Enabled 11 name: worker platform: gcp: type: n2-standard-4 zones: - us-central1-a - us-central1-c osDisk: diskType: pd-standard diskSizeGB: 128 encryptionKey: 12 kmsKey: name: worker-key keyRing: test-machine-keys location: global projectID: project-id tags: 13 - compute-tag1 - compute-tag2 osImage: 14 project: example-project-name name: example-image-name replicas: 3 metadata: name: test-cluster 15 networking: clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23 machineNetwork: - cidr: 10.0.0.0/16 networkType: OVNKubernetes 16 serviceNetwork: - 172.30.0.0/16 platform: gcp: projectID: openshift-production 17 region: us-central1 18 defaultMachinePlatform: tags: 19 - global-tag1 - global-tag2 osImage: 20 project: example-project-name name: example-image-name network: existing_vpc 21 controlPlaneSubnet: control_plane_subnet 22 computeSubnet: compute_subnet 23 pullSecret: '{\"auths\": ...}' 24 fips: false 25 sshKey: ssh-ed25519 AAAA... 26 publish: Internal 27",
"./openshift-install create manifests --dir <installation_directory> 1",
"touch <installation_directory>/manifests/cluster-ingress-default-ingresscontroller.yaml 1",
"ls <installation_directory>/manifests/cluster-ingress-default-ingresscontroller.yaml",
"cluster-ingress-default-ingresscontroller.yaml",
"apiVersion: operator.openshift.io/v1 kind: IngressController metadata: name: default namespace: openshift-ingress-operator spec: endpointPublishingStrategy: loadBalancer: providerParameters: gcp: clientAccess: Global 1 type: GCP scope: Internal 2 type: LoadBalancerService",
"apiVersion: v1 baseDomain: my.domain.com proxy: httpProxy: http://<username>:<pswd>@<ip>:<port> 1 httpsProxy: https://<username>:<pswd>@<ip>:<port> 2 noProxy: example.com 3 additionalTrustBundle: | 4 -----BEGIN CERTIFICATE----- <MY_TRUSTED_CA_CERT> -----END CERTIFICATE----- additionalTrustBundlePolicy: <policy_to_add_additionalTrustBundle> 5",
"./openshift-install wait-for install-complete --log-level debug",
"tar xvf <file>",
"echo USDPATH",
"oc <command>",
"C:\\> path",
"C:\\> oc <command>",
"echo USDPATH",
"oc <command>",
"apiVersion: v1 baseDomain: example.com credentialsMode: Manual",
"openshift-install create manifests --dir <installation_directory>",
"RELEASE_IMAGE=USD(./openshift-install version | awk '/release image/ {print USD3}')",
"oc adm release extract --from=USDRELEASE_IMAGE --credentials-requests --included \\ 1 --install-config=<path_to_directory_with_installation_configuration>/install-config.yaml \\ 2 --to=<path_to_directory_for_credentials_requests> 3",
"apiVersion: cloudcredential.openshift.io/v1 kind: CredentialsRequest metadata: name: <component_credentials_request> namespace: openshift-cloud-credential-operator spec: providerSpec: apiVersion: cloudcredential.openshift.io/v1 kind: GCPProviderSpec predefinedRoles: - roles/storage.admin - roles/iam.serviceAccountUser skipServiceCheck: true",
"apiVersion: cloudcredential.openshift.io/v1 kind: CredentialsRequest metadata: name: <component_credentials_request> namespace: openshift-cloud-credential-operator spec: providerSpec: apiVersion: cloudcredential.openshift.io/v1 secretRef: name: <component_secret> namespace: <component_namespace>",
"apiVersion: v1 kind: Secret metadata: name: <component_secret> namespace: <component_namespace> data: service_account.json: <base64_encoded_gcp_service_account_file>",
"RELEASE_IMAGE=USD(./openshift-install version | awk '/release image/ {print USD3}')",
"CCO_IMAGE=USD(oc adm release info --image-for='cloud-credential-operator' USDRELEASE_IMAGE -a ~/.pull-secret)",
"oc image extract USDCCO_IMAGE --file=\"/usr/bin/ccoctl.<rhel_version>\" \\ 1 -a ~/.pull-secret",
"chmod 775 ccoctl.<rhel_version>",
"./ccoctl.rhel9",
"OpenShift credentials provisioning tool Usage: ccoctl [command] Available Commands: aws Manage credentials objects for AWS cloud azure Manage credentials objects for Azure gcp Manage credentials objects for Google cloud help Help about any command ibmcloud Manage credentials objects for IBM Cloud nutanix Manage credentials objects for Nutanix Flags: -h, --help help for ccoctl Use \"ccoctl [command] --help\" for more information about a command.",
"RELEASE_IMAGE=USD(./openshift-install version | awk '/release image/ {print USD3}')",
"oc adm release extract --from=USDRELEASE_IMAGE --credentials-requests --included \\ 1 --install-config=<path_to_directory_with_installation_configuration>/install-config.yaml \\ 2 --to=<path_to_directory_for_credentials_requests> 3",
"ccoctl gcp create-all --name=<name> \\ 1 --region=<gcp_region> \\ 2 --project=<gcp_project_id> \\ 3 --credentials-requests-dir=<path_to_credentials_requests_directory> 4",
"ls <path_to_ccoctl_output_dir>/manifests",
"cluster-authentication-02-config.yaml openshift-cloud-controller-manager-gcp-ccm-cloud-credentials-credentials.yaml openshift-cloud-credential-operator-cloud-credential-operator-gcp-ro-creds-credentials.yaml openshift-cloud-network-config-controller-cloud-credentials-credentials.yaml openshift-cluster-api-capg-manager-bootstrap-credentials-credentials.yaml openshift-cluster-csi-drivers-gcp-pd-cloud-credentials-credentials.yaml openshift-image-registry-installer-cloud-credentials-credentials.yaml openshift-ingress-operator-cloud-credentials-credentials.yaml openshift-machine-api-gcp-cloud-credentials-credentials.yaml",
"apiVersion: v1 baseDomain: example.com credentialsMode: Manual",
"openshift-install create manifests --dir <installation_directory>",
"cp /<path_to_ccoctl_output_dir>/manifests/* ./manifests/",
"cp -a /<path_to_ccoctl_output_dir>/tls .",
"./openshift-install create cluster --dir <installation_directory> \\ 1 --log-level=info 2",
"INFO Install complete! INFO To access the cluster as the system:admin user when using 'oc', run 'export KUBECONFIG=/home/myuser/install_dir/auth/kubeconfig' INFO Access the OpenShift web-console here: https://console-openshift-console.apps.mycluster.example.com INFO Login to the console with user: \"kubeadmin\", and password: \"password\" INFO Time elapsed: 36m22s",
"export KUBECONFIG=<installation_directory>/auth/kubeconfig 1",
"oc whoami",
"system:admin",
"mkdir USDHOME/clusterconfig",
"openshift-install create manifests --dir USDHOME/clusterconfig",
"? SSH Public Key INFO Credentials loaded from the \"myprofile\" profile in file \"/home/myuser/.aws/credentials\" INFO Consuming Install Config from target directory INFO Manifests created in: USDHOME/clusterconfig/manifests and USDHOME/clusterconfig/openshift",
"ls USDHOME/clusterconfig/openshift/",
"99_kubeadmin-password-secret.yaml 99_openshift-cluster-api_master-machines-0.yaml 99_openshift-cluster-api_master-machines-1.yaml 99_openshift-cluster-api_master-machines-2.yaml",
"variant: openshift version: 4.16.0 metadata: labels: machineconfiguration.openshift.io/role: worker name: 98-var-partition storage: disks: - device: /dev/disk/by-id/<device_name> 1 partitions: - label: var start_mib: <partition_start_offset> 2 size_mib: <partition_size> 3 number: 5 filesystems: - device: /dev/disk/by-partlabel/var path: /var format: xfs mount_options: [defaults, prjquota] 4 with_mount_unit: true",
"butane USDHOME/clusterconfig/98-var-partition.bu -o USDHOME/clusterconfig/openshift/98-var-partition.yaml",
"openshift-install create ignition-configs --dir USDHOME/clusterconfig ls USDHOME/clusterconfig/ auth bootstrap.ign master.ign metadata.json worker.ign",
"./openshift-install create install-config --dir <installation_directory> 1",
"controlPlane: platform: gcp: secureBoot: Enabled",
"compute: - platform: gcp: secureBoot: Enabled",
"platform: gcp: defaultMachinePlatform: secureBoot: Enabled",
"controlPlane: platform: gcp: confidentialCompute: Enabled 1 type: n2d-standard-8 2 onHostMaintenance: Terminate 3",
"compute: - platform: gcp: confidentialCompute: Enabled type: n2d-standard-8 onHostMaintenance: Terminate",
"platform: gcp: defaultMachinePlatform: confidentialCompute: Enabled type: n2d-standard-8 onHostMaintenance: Terminate",
"apiVersion: v1 baseDomain: my.domain.com proxy: httpProxy: http://<username>:<pswd>@<ip>:<port> 1 httpsProxy: https://<username>:<pswd>@<ip>:<port> 2 noProxy: example.com 3 additionalTrustBundle: | 4 -----BEGIN CERTIFICATE----- <MY_TRUSTED_CA_CERT> -----END CERTIFICATE----- additionalTrustBundlePolicy: <policy_to_add_additionalTrustBundle> 5",
"./openshift-install wait-for install-complete --log-level debug",
"./openshift-install create manifests --dir <installation_directory> 1",
"rm -f <installation_directory>/openshift/99_openshift-cluster-api_master-machines-*.yaml",
"rm -f <installation_directory>/openshift/99_openshift-machine-api_master-control-plane-machine-set.yaml",
"rm -f <installation_directory>/openshift/99_openshift-cluster-api_worker-machineset-*.yaml",
"apiVersion: config.openshift.io/v1 kind: DNS metadata: creationTimestamp: null name: cluster spec: baseDomain: example.openshift.com privateZone: 1 id: mycluster-100419-private-zone publicZone: 2 id: example.openshift.com status: {}",
"./openshift-install create ignition-configs --dir <installation_directory> 1",
". ├── auth │ ├── kubeadmin-password │ └── kubeconfig ├── bootstrap.ign ├── master.ign ├── metadata.json └── worker.ign",
"jq -r .infraID <installation_directory>/metadata.json 1",
"openshift-vw9j6 1",
"export BASE_DOMAIN='<base_domain>' export BASE_DOMAIN_ZONE_NAME='<base_domain_zone_name>' export NETWORK_CIDR='10.0.0.0/16' export MASTER_SUBNET_CIDR='10.0.0.0/17' export WORKER_SUBNET_CIDR='10.0.128.0/17' export KUBECONFIG=<installation_directory>/auth/kubeconfig 1 export CLUSTER_NAME=`jq -r .clusterName <installation_directory>/metadata.json` export INFRA_ID=`jq -r .infraID <installation_directory>/metadata.json` export PROJECT_NAME=`jq -r .gcp.projectID <installation_directory>/metadata.json` export REGION=`jq -r .gcp.region <installation_directory>/metadata.json`",
"cat <<EOF >01_vpc.yaml imports: - path: 01_vpc.py resources: - name: cluster-vpc type: 01_vpc.py properties: infra_id: 'USD{INFRA_ID}' 1 region: 'USD{REGION}' 2 master_subnet_cidr: 'USD{MASTER_SUBNET_CIDR}' 3 worker_subnet_cidr: 'USD{WORKER_SUBNET_CIDR}' 4 EOF",
"gcloud deployment-manager deployments create USD{INFRA_ID}-vpc --config 01_vpc.yaml",
"def GenerateConfig(context): resources = [{ 'name': context.properties['infra_id'] + '-network', 'type': 'compute.v1.network', 'properties': { 'region': context.properties['region'], 'autoCreateSubnetworks': False } }, { 'name': context.properties['infra_id'] + '-master-subnet', 'type': 'compute.v1.subnetwork', 'properties': { 'region': context.properties['region'], 'network': 'USD(ref.' + context.properties['infra_id'] + '-network.selfLink)', 'ipCidrRange': context.properties['master_subnet_cidr'] } }, { 'name': context.properties['infra_id'] + '-worker-subnet', 'type': 'compute.v1.subnetwork', 'properties': { 'region': context.properties['region'], 'network': 'USD(ref.' + context.properties['infra_id'] + '-network.selfLink)', 'ipCidrRange': context.properties['worker_subnet_cidr'] } }, { 'name': context.properties['infra_id'] + '-router', 'type': 'compute.v1.router', 'properties': { 'region': context.properties['region'], 'network': 'USD(ref.' + context.properties['infra_id'] + '-network.selfLink)', 'nats': [{ 'name': context.properties['infra_id'] + '-nat-master', 'natIpAllocateOption': 'AUTO_ONLY', 'minPortsPerVm': 7168, 'sourceSubnetworkIpRangesToNat': 'LIST_OF_SUBNETWORKS', 'subnetworks': [{ 'name': 'USD(ref.' + context.properties['infra_id'] + '-master-subnet.selfLink)', 'sourceIpRangesToNat': ['ALL_IP_RANGES'] }] }, { 'name': context.properties['infra_id'] + '-nat-worker', 'natIpAllocateOption': 'AUTO_ONLY', 'minPortsPerVm': 512, 'sourceSubnetworkIpRangesToNat': 'LIST_OF_SUBNETWORKS', 'subnetworks': [{ 'name': 'USD(ref.' + context.properties['infra_id'] + '-worker-subnet.selfLink)', 'sourceIpRangesToNat': ['ALL_IP_RANGES'] }] }] } }] return {'resources': resources}",
"export CLUSTER_NETWORK=(`gcloud compute networks describe USD{INFRA_ID}-network --format json | jq -r .selfLink`)",
"export CONTROL_SUBNET=(`gcloud compute networks subnets describe USD{INFRA_ID}-master-subnet --region=USD{REGION} --format json | jq -r .selfLink`)",
"export ZONE_0=(`gcloud compute regions describe USD{REGION} --format=json | jq -r .zones[0] | cut -d \"/\" -f9`)",
"export ZONE_1=(`gcloud compute regions describe USD{REGION} --format=json | jq -r .zones[1] | cut -d \"/\" -f9`)",
"export ZONE_2=(`gcloud compute regions describe USD{REGION} --format=json | jq -r .zones[2] | cut -d \"/\" -f9`)",
"cat <<EOF >02_infra.yaml imports: - path: 02_lb_ext.py - path: 02_lb_int.py 1 resources: - name: cluster-lb-ext 2 type: 02_lb_ext.py properties: infra_id: 'USD{INFRA_ID}' 3 region: 'USD{REGION}' 4 - name: cluster-lb-int type: 02_lb_int.py properties: cluster_network: 'USD{CLUSTER_NETWORK}' control_subnet: 'USD{CONTROL_SUBNET}' 5 infra_id: 'USD{INFRA_ID}' region: 'USD{REGION}' zones: 6 - 'USD{ZONE_0}' - 'USD{ZONE_1}' - 'USD{ZONE_2}' EOF",
"gcloud deployment-manager deployments create USD{INFRA_ID}-infra --config 02_infra.yaml",
"export CLUSTER_IP=(`gcloud compute addresses describe USD{INFRA_ID}-cluster-ip --region=USD{REGION} --format json | jq -r .address`)",
"export CLUSTER_PUBLIC_IP=(`gcloud compute addresses describe USD{INFRA_ID}-cluster-public-ip --region=USD{REGION} --format json | jq -r .address`)",
"def GenerateConfig(context): resources = [{ 'name': context.properties['infra_id'] + '-cluster-public-ip', 'type': 'compute.v1.address', 'properties': { 'region': context.properties['region'] } }, { # Refer to docs/dev/kube-apiserver-health-check.md on how to correctly setup health check probe for kube-apiserver 'name': context.properties['infra_id'] + '-api-http-health-check', 'type': 'compute.v1.httpHealthCheck', 'properties': { 'port': 6080, 'requestPath': '/readyz' } }, { 'name': context.properties['infra_id'] + '-api-target-pool', 'type': 'compute.v1.targetPool', 'properties': { 'region': context.properties['region'], 'healthChecks': ['USD(ref.' + context.properties['infra_id'] + '-api-http-health-check.selfLink)'], 'instances': [] } }, { 'name': context.properties['infra_id'] + '-api-forwarding-rule', 'type': 'compute.v1.forwardingRule', 'properties': { 'region': context.properties['region'], 'IPAddress': 'USD(ref.' + context.properties['infra_id'] + '-cluster-public-ip.selfLink)', 'target': 'USD(ref.' + context.properties['infra_id'] + '-api-target-pool.selfLink)', 'portRange': '6443' } }] return {'resources': resources}",
"def GenerateConfig(context): backends = [] for zone in context.properties['zones']: backends.append({ 'group': 'USD(ref.' + context.properties['infra_id'] + '-master-' + zone + '-ig' + '.selfLink)' }) resources = [{ 'name': context.properties['infra_id'] + '-cluster-ip', 'type': 'compute.v1.address', 'properties': { 'addressType': 'INTERNAL', 'region': context.properties['region'], 'subnetwork': context.properties['control_subnet'] } }, { # Refer to docs/dev/kube-apiserver-health-check.md on how to correctly setup health check probe for kube-apiserver 'name': context.properties['infra_id'] + '-api-internal-health-check', 'type': 'compute.v1.healthCheck', 'properties': { 'httpsHealthCheck': { 'port': 6443, 'requestPath': '/readyz' }, 'type': \"HTTPS\" } }, { 'name': context.properties['infra_id'] + '-api-internal', 'type': 'compute.v1.regionBackendService', 'properties': { 'backends': backends, 'healthChecks': ['USD(ref.' + context.properties['infra_id'] + '-api-internal-health-check.selfLink)'], 'loadBalancingScheme': 'INTERNAL', 'region': context.properties['region'], 'protocol': 'TCP', 'timeoutSec': 120 } }, { 'name': context.properties['infra_id'] + '-api-internal-forwarding-rule', 'type': 'compute.v1.forwardingRule', 'properties': { 'backendService': 'USD(ref.' + context.properties['infra_id'] + '-api-internal.selfLink)', 'IPAddress': 'USD(ref.' + context.properties['infra_id'] + '-cluster-ip.selfLink)', 'loadBalancingScheme': 'INTERNAL', 'ports': ['6443','22623'], 'region': context.properties['region'], 'subnetwork': context.properties['control_subnet'] } }] for zone in context.properties['zones']: resources.append({ 'name': context.properties['infra_id'] + '-master-' + zone + '-ig', 'type': 'compute.v1.instanceGroup', 'properties': { 'namedPorts': [ { 'name': 'ignition', 'port': 22623 }, { 'name': 'https', 'port': 6443 } ], 'network': context.properties['cluster_network'], 'zone': zone } }) return {'resources': resources}",
"cat <<EOF >02_dns.yaml imports: - path: 02_dns.py resources: - name: cluster-dns type: 02_dns.py properties: infra_id: 'USD{INFRA_ID}' 1 cluster_domain: 'USD{CLUSTER_NAME}.USD{BASE_DOMAIN}' 2 cluster_network: 'USD{CLUSTER_NETWORK}' 3 EOF",
"gcloud deployment-manager deployments create USD{INFRA_ID}-dns --config 02_dns.yaml",
"if [ -f transaction.yaml ]; then rm transaction.yaml; fi gcloud dns record-sets transaction start --zone USD{INFRA_ID}-private-zone gcloud dns record-sets transaction add USD{CLUSTER_IP} --name api.USD{CLUSTER_NAME}.USD{BASE_DOMAIN}. --ttl 60 --type A --zone USD{INFRA_ID}-private-zone gcloud dns record-sets transaction add USD{CLUSTER_IP} --name api-int.USD{CLUSTER_NAME}.USD{BASE_DOMAIN}. --ttl 60 --type A --zone USD{INFRA_ID}-private-zone gcloud dns record-sets transaction execute --zone USD{INFRA_ID}-private-zone",
"if [ -f transaction.yaml ]; then rm transaction.yaml; fi gcloud dns record-sets transaction start --zone USD{BASE_DOMAIN_ZONE_NAME} gcloud dns record-sets transaction add USD{CLUSTER_PUBLIC_IP} --name api.USD{CLUSTER_NAME}.USD{BASE_DOMAIN}. --ttl 60 --type A --zone USD{BASE_DOMAIN_ZONE_NAME} gcloud dns record-sets transaction execute --zone USD{BASE_DOMAIN_ZONE_NAME}",
"def GenerateConfig(context): resources = [{ 'name': context.properties['infra_id'] + '-private-zone', 'type': 'dns.v1.managedZone', 'properties': { 'description': '', 'dnsName': context.properties['cluster_domain'] + '.', 'visibility': 'private', 'privateVisibilityConfig': { 'networks': [{ 'networkUrl': context.properties['cluster_network'] }] } } }] return {'resources': resources}",
"cat <<EOF >03_firewall.yaml imports: - path: 03_firewall.py resources: - name: cluster-firewall type: 03_firewall.py properties: allowed_external_cidr: '0.0.0.0/0' 1 infra_id: 'USD{INFRA_ID}' 2 cluster_network: 'USD{CLUSTER_NETWORK}' 3 network_cidr: 'USD{NETWORK_CIDR}' 4 EOF",
"gcloud deployment-manager deployments create USD{INFRA_ID}-firewall --config 03_firewall.yaml",
"def GenerateConfig(context): resources = [{ 'name': context.properties['infra_id'] + '-bootstrap-in-ssh', 'type': 'compute.v1.firewall', 'properties': { 'network': context.properties['cluster_network'], 'allowed': [{ 'IPProtocol': 'tcp', 'ports': ['22'] }], 'sourceRanges': [context.properties['allowed_external_cidr']], 'targetTags': [context.properties['infra_id'] + '-bootstrap'] } }, { 'name': context.properties['infra_id'] + '-api', 'type': 'compute.v1.firewall', 'properties': { 'network': context.properties['cluster_network'], 'allowed': [{ 'IPProtocol': 'tcp', 'ports': ['6443'] }], 'sourceRanges': [context.properties['allowed_external_cidr']], 'targetTags': [context.properties['infra_id'] + '-master'] } }, { 'name': context.properties['infra_id'] + '-health-checks', 'type': 'compute.v1.firewall', 'properties': { 'network': context.properties['cluster_network'], 'allowed': [{ 'IPProtocol': 'tcp', 'ports': ['6080', '6443', '22624'] }], 'sourceRanges': ['35.191.0.0/16', '130.211.0.0/22', '209.85.152.0/22', '209.85.204.0/22'], 'targetTags': [context.properties['infra_id'] + '-master'] } }, { 'name': context.properties['infra_id'] + '-etcd', 'type': 'compute.v1.firewall', 'properties': { 'network': context.properties['cluster_network'], 'allowed': [{ 'IPProtocol': 'tcp', 'ports': ['2379-2380'] }], 'sourceTags': [context.properties['infra_id'] + '-master'], 'targetTags': [context.properties['infra_id'] + '-master'] } }, { 'name': context.properties['infra_id'] + '-control-plane', 'type': 'compute.v1.firewall', 'properties': { 'network': context.properties['cluster_network'], 'allowed': [{ 'IPProtocol': 'tcp', 'ports': ['10257'] },{ 'IPProtocol': 'tcp', 'ports': ['10259'] },{ 'IPProtocol': 'tcp', 'ports': ['22623'] }], 'sourceTags': [ context.properties['infra_id'] + '-master', context.properties['infra_id'] + '-worker' ], 'targetTags': [context.properties['infra_id'] + '-master'] } }, { 'name': context.properties['infra_id'] + '-internal-network', 'type': 'compute.v1.firewall', 'properties': { 'network': context.properties['cluster_network'], 'allowed': [{ 'IPProtocol': 'icmp' },{ 'IPProtocol': 'tcp', 'ports': ['22'] }], 'sourceRanges': [context.properties['network_cidr']], 'targetTags': [ context.properties['infra_id'] + '-master', context.properties['infra_id'] + '-worker' ] } }, { 'name': context.properties['infra_id'] + '-internal-cluster', 'type': 'compute.v1.firewall', 'properties': { 'network': context.properties['cluster_network'], 'allowed': [{ 'IPProtocol': 'udp', 'ports': ['4789', '6081'] },{ 'IPProtocol': 'udp', 'ports': ['500', '4500'] },{ 'IPProtocol': 'esp', },{ 'IPProtocol': 'tcp', 'ports': ['9000-9999'] },{ 'IPProtocol': 'udp', 'ports': ['9000-9999'] },{ 'IPProtocol': 'tcp', 'ports': ['10250'] },{ 'IPProtocol': 'tcp', 'ports': ['30000-32767'] },{ 'IPProtocol': 'udp', 'ports': ['30000-32767'] }], 'sourceTags': [ context.properties['infra_id'] + '-master', context.properties['infra_id'] + '-worker' ], 'targetTags': [ context.properties['infra_id'] + '-master', context.properties['infra_id'] + '-worker' ] } }] return {'resources': resources}",
"cat <<EOF >03_iam.yaml imports: - path: 03_iam.py resources: - name: cluster-iam type: 03_iam.py properties: infra_id: 'USD{INFRA_ID}' 1 EOF",
"gcloud deployment-manager deployments create USD{INFRA_ID}-iam --config 03_iam.yaml",
"export MASTER_SERVICE_ACCOUNT=(`gcloud iam service-accounts list --filter \"email~^USD{INFRA_ID}-m@USD{PROJECT_NAME}.\" --format json | jq -r '.[0].email'`)",
"export WORKER_SERVICE_ACCOUNT=(`gcloud iam service-accounts list --filter \"email~^USD{INFRA_ID}-w@USD{PROJECT_NAME}.\" --format json | jq -r '.[0].email'`)",
"export COMPUTE_SUBNET=(`gcloud compute networks subnets describe USD{INFRA_ID}-worker-subnet --region=USD{REGION} --format json | jq -r .selfLink`)",
"gcloud projects add-iam-policy-binding USD{PROJECT_NAME} --member \"serviceAccount:USD{MASTER_SERVICE_ACCOUNT}\" --role \"roles/compute.instanceAdmin\" gcloud projects add-iam-policy-binding USD{PROJECT_NAME} --member \"serviceAccount:USD{MASTER_SERVICE_ACCOUNT}\" --role \"roles/compute.networkAdmin\" gcloud projects add-iam-policy-binding USD{PROJECT_NAME} --member \"serviceAccount:USD{MASTER_SERVICE_ACCOUNT}\" --role \"roles/compute.securityAdmin\" gcloud projects add-iam-policy-binding USD{PROJECT_NAME} --member \"serviceAccount:USD{MASTER_SERVICE_ACCOUNT}\" --role \"roles/iam.serviceAccountUser\" gcloud projects add-iam-policy-binding USD{PROJECT_NAME} --member \"serviceAccount:USD{MASTER_SERVICE_ACCOUNT}\" --role \"roles/storage.admin\" gcloud projects add-iam-policy-binding USD{PROJECT_NAME} --member \"serviceAccount:USD{WORKER_SERVICE_ACCOUNT}\" --role \"roles/compute.viewer\" gcloud projects add-iam-policy-binding USD{PROJECT_NAME} --member \"serviceAccount:USD{WORKER_SERVICE_ACCOUNT}\" --role \"roles/storage.admin\"",
"gcloud iam service-accounts keys create service-account-key.json --iam-account=USD{MASTER_SERVICE_ACCOUNT}",
"def GenerateConfig(context): resources = [{ 'name': context.properties['infra_id'] + '-master-node-sa', 'type': 'iam.v1.serviceAccount', 'properties': { 'accountId': context.properties['infra_id'] + '-m', 'displayName': context.properties['infra_id'] + '-master-node' } }, { 'name': context.properties['infra_id'] + '-worker-node-sa', 'type': 'iam.v1.serviceAccount', 'properties': { 'accountId': context.properties['infra_id'] + '-w', 'displayName': context.properties['infra_id'] + '-worker-node' } }] return {'resources': resources}",
"gsutil mb gs://<bucket_name>",
"gsutil cp <downloaded_image_file_path>/rhcos-<version>-x86_64-gcp.x86_64.tar.gz gs://<bucket_name>",
"export IMAGE_SOURCE=gs://<bucket_name>/rhcos-<version>-x86_64-gcp.x86_64.tar.gz",
"gcloud compute images create \"USD{INFRA_ID}-rhcos-image\" --source-uri=\"USD{IMAGE_SOURCE}\"",
"export CLUSTER_IMAGE=(`gcloud compute images describe USD{INFRA_ID}-rhcos-image --format json | jq -r .selfLink`)",
"gsutil mb gs://USD{INFRA_ID}-bootstrap-ignition",
"gsutil cp <installation_directory>/bootstrap.ign gs://USD{INFRA_ID}-bootstrap-ignition/",
"export BOOTSTRAP_IGN=`gsutil signurl -d 1h service-account-key.json gs://USD{INFRA_ID}-bootstrap-ignition/bootstrap.ign | grep \"^gs:\" | awk '{print USD5}'`",
"cat <<EOF >04_bootstrap.yaml imports: - path: 04_bootstrap.py resources: - name: cluster-bootstrap type: 04_bootstrap.py properties: infra_id: 'USD{INFRA_ID}' 1 region: 'USD{REGION}' 2 zone: 'USD{ZONE_0}' 3 cluster_network: 'USD{CLUSTER_NETWORK}' 4 control_subnet: 'USD{CONTROL_SUBNET}' 5 image: 'USD{CLUSTER_IMAGE}' 6 machine_type: 'n1-standard-4' 7 root_volume_size: '128' 8 bootstrap_ign: 'USD{BOOTSTRAP_IGN}' 9 EOF",
"gcloud deployment-manager deployments create USD{INFRA_ID}-bootstrap --config 04_bootstrap.yaml",
"gcloud compute instance-groups unmanaged add-instances USD{INFRA_ID}-bootstrap-ig --zone=USD{ZONE_0} --instances=USD{INFRA_ID}-bootstrap",
"gcloud compute backend-services add-backend USD{INFRA_ID}-api-internal --region=USD{REGION} --instance-group=USD{INFRA_ID}-bootstrap-ig --instance-group-zone=USD{ZONE_0}",
"def GenerateConfig(context): resources = [{ 'name': context.properties['infra_id'] + '-bootstrap-public-ip', 'type': 'compute.v1.address', 'properties': { 'region': context.properties['region'] } }, { 'name': context.properties['infra_id'] + '-bootstrap', 'type': 'compute.v1.instance', 'properties': { 'disks': [{ 'autoDelete': True, 'boot': True, 'initializeParams': { 'diskSizeGb': context.properties['root_volume_size'], 'sourceImage': context.properties['image'] } }], 'machineType': 'zones/' + context.properties['zone'] + '/machineTypes/' + context.properties['machine_type'], 'metadata': { 'items': [{ 'key': 'user-data', 'value': '{\"ignition\":{\"config\":{\"replace\":{\"source\":\"' + context.properties['bootstrap_ign'] + '\"}},\"version\":\"3.2.0\"}}', }] }, 'networkInterfaces': [{ 'subnetwork': context.properties['control_subnet'], 'accessConfigs': [{ 'natIP': 'USD(ref.' + context.properties['infra_id'] + '-bootstrap-public-ip.address)' }] }], 'tags': { 'items': [ context.properties['infra_id'] + '-master', context.properties['infra_id'] + '-bootstrap' ] }, 'zone': context.properties['zone'] } }, { 'name': context.properties['infra_id'] + '-bootstrap-ig', 'type': 'compute.v1.instanceGroup', 'properties': { 'namedPorts': [ { 'name': 'ignition', 'port': 22623 }, { 'name': 'https', 'port': 6443 } ], 'network': context.properties['cluster_network'], 'zone': context.properties['zone'] } }] return {'resources': resources}",
"export MASTER_IGNITION=`cat <installation_directory>/master.ign`",
"cat <<EOF >05_control_plane.yaml imports: - path: 05_control_plane.py resources: - name: cluster-control-plane type: 05_control_plane.py properties: infra_id: 'USD{INFRA_ID}' 1 zones: 2 - 'USD{ZONE_0}' - 'USD{ZONE_1}' - 'USD{ZONE_2}' control_subnet: 'USD{CONTROL_SUBNET}' 3 image: 'USD{CLUSTER_IMAGE}' 4 machine_type: 'n1-standard-4' 5 root_volume_size: '128' service_account_email: 'USD{MASTER_SERVICE_ACCOUNT}' 6 ignition: 'USD{MASTER_IGNITION}' 7 EOF",
"gcloud deployment-manager deployments create USD{INFRA_ID}-control-plane --config 05_control_plane.yaml",
"gcloud compute instance-groups unmanaged add-instances USD{INFRA_ID}-master-USD{ZONE_0}-ig --zone=USD{ZONE_0} --instances=USD{INFRA_ID}-master-0",
"gcloud compute instance-groups unmanaged add-instances USD{INFRA_ID}-master-USD{ZONE_1}-ig --zone=USD{ZONE_1} --instances=USD{INFRA_ID}-master-1",
"gcloud compute instance-groups unmanaged add-instances USD{INFRA_ID}-master-USD{ZONE_2}-ig --zone=USD{ZONE_2} --instances=USD{INFRA_ID}-master-2",
"gcloud compute target-pools add-instances USD{INFRA_ID}-api-target-pool --instances-zone=\"USD{ZONE_0}\" --instances=USD{INFRA_ID}-master-0",
"gcloud compute target-pools add-instances USD{INFRA_ID}-api-target-pool --instances-zone=\"USD{ZONE_1}\" --instances=USD{INFRA_ID}-master-1",
"gcloud compute target-pools add-instances USD{INFRA_ID}-api-target-pool --instances-zone=\"USD{ZONE_2}\" --instances=USD{INFRA_ID}-master-2",
"def GenerateConfig(context): resources = [{ 'name': context.properties['infra_id'] + '-master-0', 'type': 'compute.v1.instance', 'properties': { 'disks': [{ 'autoDelete': True, 'boot': True, 'initializeParams': { 'diskSizeGb': context.properties['root_volume_size'], 'diskType': 'zones/' + context.properties['zones'][0] + '/diskTypes/pd-ssd', 'sourceImage': context.properties['image'] } }], 'machineType': 'zones/' + context.properties['zones'][0] + '/machineTypes/' + context.properties['machine_type'], 'metadata': { 'items': [{ 'key': 'user-data', 'value': context.properties['ignition'] }] }, 'networkInterfaces': [{ 'subnetwork': context.properties['control_subnet'] }], 'serviceAccounts': [{ 'email': context.properties['service_account_email'], 'scopes': ['https://www.googleapis.com/auth/cloud-platform'] }], 'tags': { 'items': [ context.properties['infra_id'] + '-master', ] }, 'zone': context.properties['zones'][0] } }, { 'name': context.properties['infra_id'] + '-master-1', 'type': 'compute.v1.instance', 'properties': { 'disks': [{ 'autoDelete': True, 'boot': True, 'initializeParams': { 'diskSizeGb': context.properties['root_volume_size'], 'diskType': 'zones/' + context.properties['zones'][1] + '/diskTypes/pd-ssd', 'sourceImage': context.properties['image'] } }], 'machineType': 'zones/' + context.properties['zones'][1] + '/machineTypes/' + context.properties['machine_type'], 'metadata': { 'items': [{ 'key': 'user-data', 'value': context.properties['ignition'] }] }, 'networkInterfaces': [{ 'subnetwork': context.properties['control_subnet'] }], 'serviceAccounts': [{ 'email': context.properties['service_account_email'], 'scopes': ['https://www.googleapis.com/auth/cloud-platform'] }], 'tags': { 'items': [ context.properties['infra_id'] + '-master', ] }, 'zone': context.properties['zones'][1] } }, { 'name': context.properties['infra_id'] + '-master-2', 'type': 'compute.v1.instance', 'properties': { 'disks': [{ 'autoDelete': True, 'boot': True, 'initializeParams': { 'diskSizeGb': context.properties['root_volume_size'], 'diskType': 'zones/' + context.properties['zones'][2] + '/diskTypes/pd-ssd', 'sourceImage': context.properties['image'] } }], 'machineType': 'zones/' + context.properties['zones'][2] + '/machineTypes/' + context.properties['machine_type'], 'metadata': { 'items': [{ 'key': 'user-data', 'value': context.properties['ignition'] }] }, 'networkInterfaces': [{ 'subnetwork': context.properties['control_subnet'] }], 'serviceAccounts': [{ 'email': context.properties['service_account_email'], 'scopes': ['https://www.googleapis.com/auth/cloud-platform'] }], 'tags': { 'items': [ context.properties['infra_id'] + '-master', ] }, 'zone': context.properties['zones'][2] } }] return {'resources': resources}",
"./openshift-install wait-for bootstrap-complete --dir <installation_directory> \\ 1 --log-level info 2",
"gcloud compute backend-services remove-backend USD{INFRA_ID}-api-internal --region=USD{REGION} --instance-group=USD{INFRA_ID}-bootstrap-ig --instance-group-zone=USD{ZONE_0}",
"gsutil rm gs://USD{INFRA_ID}-bootstrap-ignition/bootstrap.ign",
"gsutil rb gs://USD{INFRA_ID}-bootstrap-ignition",
"gcloud deployment-manager deployments delete USD{INFRA_ID}-bootstrap",
"export COMPUTE_SUBNET=(`gcloud compute networks subnets describe USD{INFRA_ID}-worker-subnet --region=USD{REGION} --format json | jq -r .selfLink`)",
"export WORKER_SERVICE_ACCOUNT=(`gcloud iam service-accounts list --filter \"email~^USD{INFRA_ID}-w@USD{PROJECT_NAME}.\" --format json | jq -r '.[0].email'`)",
"export WORKER_IGNITION=`cat <installation_directory>/worker.ign`",
"cat <<EOF >06_worker.yaml imports: - path: 06_worker.py resources: - name: 'worker-0' 1 type: 06_worker.py properties: infra_id: 'USD{INFRA_ID}' 2 zone: 'USD{ZONE_0}' 3 compute_subnet: 'USD{COMPUTE_SUBNET}' 4 image: 'USD{CLUSTER_IMAGE}' 5 machine_type: 'n1-standard-4' 6 root_volume_size: '128' service_account_email: 'USD{WORKER_SERVICE_ACCOUNT}' 7 ignition: 'USD{WORKER_IGNITION}' 8 - name: 'worker-1' type: 06_worker.py properties: infra_id: 'USD{INFRA_ID}' 9 zone: 'USD{ZONE_1}' 10 compute_subnet: 'USD{COMPUTE_SUBNET}' 11 image: 'USD{CLUSTER_IMAGE}' 12 machine_type: 'n1-standard-4' 13 root_volume_size: '128' service_account_email: 'USD{WORKER_SERVICE_ACCOUNT}' 14 ignition: 'USD{WORKER_IGNITION}' 15 EOF",
"gcloud deployment-manager deployments create USD{INFRA_ID}-worker --config 06_worker.yaml",
"def GenerateConfig(context): resources = [{ 'name': context.properties['infra_id'] + '-' + context.env['name'], 'type': 'compute.v1.instance', 'properties': { 'disks': [{ 'autoDelete': True, 'boot': True, 'initializeParams': { 'diskSizeGb': context.properties['root_volume_size'], 'sourceImage': context.properties['image'] } }], 'machineType': 'zones/' + context.properties['zone'] + '/machineTypes/' + context.properties['machine_type'], 'metadata': { 'items': [{ 'key': 'user-data', 'value': context.properties['ignition'] }] }, 'networkInterfaces': [{ 'subnetwork': context.properties['compute_subnet'] }], 'serviceAccounts': [{ 'email': context.properties['service_account_email'], 'scopes': ['https://www.googleapis.com/auth/cloud-platform'] }], 'tags': { 'items': [ context.properties['infra_id'] + '-worker', ] }, 'zone': context.properties['zone'] } }] return {'resources': resources}",
"tar xvf <file>",
"echo USDPATH",
"oc <command>",
"C:\\> path",
"C:\\> oc <command>",
"echo USDPATH",
"oc <command>",
"export KUBECONFIG=<installation_directory>/auth/kubeconfig 1",
"oc whoami",
"system:admin",
"oc get nodes",
"NAME STATUS ROLES AGE VERSION master-0 Ready master 63m v1.29.4 master-1 Ready master 63m v1.29.4 master-2 Ready master 64m v1.29.4",
"oc get csr",
"NAME AGE REQUESTOR CONDITION csr-8b2br 15m system:serviceaccount:openshift-machine-config-operator:node-bootstrapper Pending csr-8vnps 15m system:serviceaccount:openshift-machine-config-operator:node-bootstrapper Pending",
"oc adm certificate approve <csr_name> 1",
"oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{\"\\n\"}}{{end}}{{end}}' | xargs --no-run-if-empty oc adm certificate approve",
"oc get csr",
"NAME AGE REQUESTOR CONDITION csr-bfd72 5m26s system:node:ip-10-0-50-126.us-east-2.compute.internal Pending csr-c57lv 5m26s system:node:ip-10-0-95-157.us-east-2.compute.internal Pending",
"oc adm certificate approve <csr_name> 1",
"oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{\"\\n\"}}{{end}}{{end}}' | xargs oc adm certificate approve",
"oc get nodes",
"NAME STATUS ROLES AGE VERSION master-0 Ready master 73m v1.29.4 master-1 Ready master 73m v1.29.4 master-2 Ready master 74m v1.29.4 worker-0 Ready worker 11m v1.29.4 worker-1 Ready worker 11m v1.29.4",
"oc -n openshift-ingress get service router-default",
"NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE router-default LoadBalancer 172.30.18.154 35.233.157.184 80:32288/TCP,443:31215/TCP 98",
"export ROUTER_IP=`oc -n openshift-ingress get service router-default --no-headers | awk '{print USD4}'`",
"if [ -f transaction.yaml ]; then rm transaction.yaml; fi gcloud dns record-sets transaction start --zone USD{INFRA_ID}-private-zone gcloud dns record-sets transaction add USD{ROUTER_IP} --name \\*.apps.USD{CLUSTER_NAME}.USD{BASE_DOMAIN}. --ttl 300 --type A --zone USD{INFRA_ID}-private-zone gcloud dns record-sets transaction execute --zone USD{INFRA_ID}-private-zone",
"if [ -f transaction.yaml ]; then rm transaction.yaml; fi gcloud dns record-sets transaction start --zone USD{BASE_DOMAIN_ZONE_NAME} gcloud dns record-sets transaction add USD{ROUTER_IP} --name \\*.apps.USD{CLUSTER_NAME}.USD{BASE_DOMAIN}. --ttl 300 --type A --zone USD{BASE_DOMAIN_ZONE_NAME} gcloud dns record-sets transaction execute --zone USD{BASE_DOMAIN_ZONE_NAME}",
"oc get --all-namespaces -o jsonpath='{range .items[*]}{range .status.ingress[*]}{.host}{\"\\n\"}{end}{end}' routes",
"oauth-openshift.apps.your.cluster.domain.example.com console-openshift-console.apps.your.cluster.domain.example.com downloads-openshift-console.apps.your.cluster.domain.example.com alertmanager-main-openshift-monitoring.apps.your.cluster.domain.example.com prometheus-k8s-openshift-monitoring.apps.your.cluster.domain.example.com",
"./openshift-install --dir <installation_directory> wait-for install-complete 1",
"INFO Waiting up to 30m0s for the cluster to initialize",
"oc get clusterversion",
"NAME VERSION AVAILABLE PROGRESSING SINCE STATUS version False True 24m Working towards 4.5.4: 99% complete",
"oc get clusteroperators",
"NAME VERSION AVAILABLE PROGRESSING DEGRADED SINCE authentication 4.5.4 True False False 7m56s cloud-credential 4.5.4 True False False 31m cluster-autoscaler 4.5.4 True False False 16m console 4.5.4 True False False 10m csi-snapshot-controller 4.5.4 True False False 16m dns 4.5.4 True False False 22m etcd 4.5.4 False False False 25s image-registry 4.5.4 True False False 16m ingress 4.5.4 True False False 16m insights 4.5.4 True False False 17m kube-apiserver 4.5.4 True False False 19m kube-controller-manager 4.5.4 True False False 20m kube-scheduler 4.5.4 True False False 20m kube-storage-version-migrator 4.5.4 True False False 16m machine-api 4.5.4 True False False 22m machine-config 4.5.4 True False False 22m marketplace 4.5.4 True False False 16m monitoring 4.5.4 True False False 10m network 4.5.4 True False False 23m node-tuning 4.5.4 True False False 23m openshift-apiserver 4.5.4 True False False 17m openshift-controller-manager 4.5.4 True False False 15m openshift-samples 4.5.4 True False False 16m operator-lifecycle-manager 4.5.4 True False False 22m operator-lifecycle-manager-catalog 4.5.4 True False False 22m operator-lifecycle-manager-packageserver 4.5.4 True False False 18m service-ca 4.5.4 True False False 23m service-catalog-apiserver 4.5.4 True False False 23m service-catalog-controller-manager 4.5.4 True False False 23m storage 4.5.4 True False False 17m",
"oc get pods --all-namespaces",
"NAMESPACE NAME READY STATUS RESTARTS AGE kube-system etcd-member-ip-10-0-3-111.us-east-2.compute.internal 1/1 Running 0 35m kube-system etcd-member-ip-10-0-3-239.us-east-2.compute.internal 1/1 Running 0 37m kube-system etcd-member-ip-10-0-3-24.us-east-2.compute.internal 1/1 Running 0 35m openshift-apiserver-operator openshift-apiserver-operator-6d6674f4f4-h7t2t 1/1 Running 1 37m openshift-apiserver apiserver-fm48r 1/1 Running 0 30m openshift-apiserver apiserver-fxkvv 1/1 Running 0 29m openshift-apiserver apiserver-q85nm 1/1 Running 0 29m openshift-service-ca-operator openshift-service-ca-operator-66ff6dc6cd-9r257 1/1 Running 0 37m openshift-service-ca apiservice-cabundle-injector-695b6bcbc-cl5hm 1/1 Running 0 35m openshift-service-ca configmap-cabundle-injector-8498544d7-25qn6 1/1 Running 0 35m openshift-service-ca service-serving-cert-signer-6445fc9c6-wqdqn 1/1 Running 0 35m openshift-service-catalog-apiserver-operator openshift-service-catalog-apiserver-operator-549f44668b-b5q2w 1/1 Running 0 32m openshift-service-catalog-controller-manager-operator openshift-service-catalog-controller-manager-operator-b78cr2lnm 1/1 Running 0 31m",
"export MASTER_SUBNET_CIDR='10.0.0.0/17'",
"export WORKER_SUBNET_CIDR='10.0.128.0/17'",
"export REGION='<region>'",
"export HOST_PROJECT=<host_project>",
"export HOST_PROJECT_ACCOUNT=<host_service_account_email>",
"cat <<EOF >01_vpc.yaml imports: - path: 01_vpc.py resources: - name: cluster-vpc type: 01_vpc.py properties: infra_id: '<prefix>' 1 region: 'USD{REGION}' 2 master_subnet_cidr: 'USD{MASTER_SUBNET_CIDR}' 3 worker_subnet_cidr: 'USD{WORKER_SUBNET_CIDR}' 4 EOF",
"gcloud deployment-manager deployments create <vpc_deployment_name> --config 01_vpc.yaml --project USD{HOST_PROJECT} --account USD{HOST_PROJECT_ACCOUNT} 1",
"export HOST_PROJECT_NETWORK=<vpc_network>",
"export HOST_PROJECT_CONTROL_SUBNET=<control_plane_subnet>",
"export HOST_PROJECT_COMPUTE_SUBNET=<compute_subnet>",
"def GenerateConfig(context): resources = [{ 'name': context.properties['infra_id'] + '-network', 'type': 'compute.v1.network', 'properties': { 'region': context.properties['region'], 'autoCreateSubnetworks': False } }, { 'name': context.properties['infra_id'] + '-master-subnet', 'type': 'compute.v1.subnetwork', 'properties': { 'region': context.properties['region'], 'network': 'USD(ref.' + context.properties['infra_id'] + '-network.selfLink)', 'ipCidrRange': context.properties['master_subnet_cidr'] } }, { 'name': context.properties['infra_id'] + '-worker-subnet', 'type': 'compute.v1.subnetwork', 'properties': { 'region': context.properties['region'], 'network': 'USD(ref.' + context.properties['infra_id'] + '-network.selfLink)', 'ipCidrRange': context.properties['worker_subnet_cidr'] } }, { 'name': context.properties['infra_id'] + '-router', 'type': 'compute.v1.router', 'properties': { 'region': context.properties['region'], 'network': 'USD(ref.' + context.properties['infra_id'] + '-network.selfLink)', 'nats': [{ 'name': context.properties['infra_id'] + '-nat-master', 'natIpAllocateOption': 'AUTO_ONLY', 'minPortsPerVm': 7168, 'sourceSubnetworkIpRangesToNat': 'LIST_OF_SUBNETWORKS', 'subnetworks': [{ 'name': 'USD(ref.' + context.properties['infra_id'] + '-master-subnet.selfLink)', 'sourceIpRangesToNat': ['ALL_IP_RANGES'] }] }, { 'name': context.properties['infra_id'] + '-nat-worker', 'natIpAllocateOption': 'AUTO_ONLY', 'minPortsPerVm': 512, 'sourceSubnetworkIpRangesToNat': 'LIST_OF_SUBNETWORKS', 'subnetworks': [{ 'name': 'USD(ref.' + context.properties['infra_id'] + '-worker-subnet.selfLink)', 'sourceIpRangesToNat': ['ALL_IP_RANGES'] }] }] } }] return {'resources': resources}",
"mkdir <installation_directory>",
"controlPlane: platform: gcp: secureBoot: Enabled",
"compute: - platform: gcp: secureBoot: Enabled",
"platform: gcp: defaultMachinePlatform: secureBoot: Enabled",
"controlPlane: platform: gcp: confidentialCompute: Enabled 1 type: n2d-standard-8 2 onHostMaintenance: Terminate 3",
"compute: - platform: gcp: confidentialCompute: Enabled type: n2d-standard-8 onHostMaintenance: Terminate",
"platform: gcp: defaultMachinePlatform: confidentialCompute: Enabled type: n2d-standard-8 onHostMaintenance: Terminate",
"apiVersion: v1 baseDomain: example.com 1 controlPlane: 2 hyperthreading: Enabled 3 4 name: master platform: gcp: type: n2-standard-4 zones: - us-central1-a - us-central1-c tags: 5 - control-plane-tag1 - control-plane-tag2 replicas: 3 compute: 6 - hyperthreading: Enabled 7 name: worker platform: gcp: type: n2-standard-4 zones: - us-central1-a - us-central1-c tags: 8 - compute-tag1 - compute-tag2 replicas: 0 metadata: name: test-cluster networking: clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23 machineNetwork: - cidr: 10.0.0.0/16 networkType: OVNKubernetes 9 serviceNetwork: - 172.30.0.0/16 platform: gcp: defaultMachinePlatform: tags: 10 - global-tag1 - global-tag2 projectID: openshift-production 11 region: us-central1 12 pullSecret: '{\"auths\": ...}' fips: false 13 sshKey: ssh-ed25519 AAAA... 14 publish: Internal 15",
"apiVersion: v1 baseDomain: my.domain.com proxy: httpProxy: http://<username>:<pswd>@<ip>:<port> 1 httpsProxy: https://<username>:<pswd>@<ip>:<port> 2 noProxy: example.com 3 additionalTrustBundle: | 4 -----BEGIN CERTIFICATE----- <MY_TRUSTED_CA_CERT> -----END CERTIFICATE----- additionalTrustBundlePolicy: <policy_to_add_additionalTrustBundle> 5",
"./openshift-install wait-for install-complete --log-level debug",
"./openshift-install create manifests --dir <installation_directory> 1",
"rm -f <installation_directory>/openshift/99_openshift-cluster-api_master-machines-*.yaml",
"rm -f <installation_directory>/openshift/99_openshift-machine-api_master-control-plane-machine-set.yaml",
"rm -f <installation_directory>/openshift/99_openshift-cluster-api_worker-machineset-*.yaml",
"apiVersion: config.openshift.io/v1 kind: DNS metadata: creationTimestamp: null name: cluster spec: baseDomain: example.openshift.com privateZone: 1 id: mycluster-100419-private-zone status: {}",
"config: |+ [global] project-id = example-project regional = true multizone = true node-tags = opensh-ptzzx-master node-tags = opensh-ptzzx-worker node-instance-prefix = opensh-ptzzx external-instance-groups-prefix = opensh-ptzzx network-project-id = example-shared-vpc network-name = example-network subnetwork-name = example-worker-subnet",
"apiVersion: operator.openshift.io/v1 kind: IngressController metadata: creationTimestamp: null name: default namespace: openshift-ingress-operator spec: endpointPublishingStrategy: loadBalancer: scope: External type: LoadBalancerService status: availableReplicas: 0 domain: '' selector: ''",
"./openshift-install create ignition-configs --dir <installation_directory> 1",
". ├── auth │ ├── kubeadmin-password │ └── kubeconfig ├── bootstrap.ign ├── master.ign ├── metadata.json └── worker.ign",
"jq -r .infraID <installation_directory>/metadata.json 1",
"openshift-vw9j6 1",
"export BASE_DOMAIN='<base_domain>' 1 export BASE_DOMAIN_ZONE_NAME='<base_domain_zone_name>' 2 export NETWORK_CIDR='10.0.0.0/16' export KUBECONFIG=<installation_directory>/auth/kubeconfig 3 export CLUSTER_NAME=`jq -r .clusterName <installation_directory>/metadata.json` export INFRA_ID=`jq -r .infraID <installation_directory>/metadata.json` export PROJECT_NAME=`jq -r .gcp.projectID <installation_directory>/metadata.json`",
"export CLUSTER_NETWORK=(`gcloud compute networks describe USD{HOST_PROJECT_NETWORK} --project USD{HOST_PROJECT} --account USD{HOST_PROJECT_ACCOUNT} --format json | jq -r .selfLink`)",
"export CONTROL_SUBNET=(`gcloud compute networks subnets describe USD{HOST_PROJECT_CONTROL_SUBNET} --region=USD{REGION} --project USD{HOST_PROJECT} --account USD{HOST_PROJECT_ACCOUNT} --format json | jq -r .selfLink`)",
"export ZONE_0=(`gcloud compute regions describe USD{REGION} --format=json | jq -r .zones[0] | cut -d \"/\" -f9`)",
"export ZONE_1=(`gcloud compute regions describe USD{REGION} --format=json | jq -r .zones[1] | cut -d \"/\" -f9`)",
"export ZONE_2=(`gcloud compute regions describe USD{REGION} --format=json | jq -r .zones[2] | cut -d \"/\" -f9`)",
"cat <<EOF >02_infra.yaml imports: - path: 02_lb_ext.py - path: 02_lb_int.py 1 resources: - name: cluster-lb-ext 2 type: 02_lb_ext.py properties: infra_id: 'USD{INFRA_ID}' 3 region: 'USD{REGION}' 4 - name: cluster-lb-int type: 02_lb_int.py properties: cluster_network: 'USD{CLUSTER_NETWORK}' control_subnet: 'USD{CONTROL_SUBNET}' 5 infra_id: 'USD{INFRA_ID}' region: 'USD{REGION}' zones: 6 - 'USD{ZONE_0}' - 'USD{ZONE_1}' - 'USD{ZONE_2}' EOF",
"gcloud deployment-manager deployments create USD{INFRA_ID}-infra --config 02_infra.yaml",
"export CLUSTER_IP=(`gcloud compute addresses describe USD{INFRA_ID}-cluster-ip --region=USD{REGION} --format json | jq -r .address`)",
"export CLUSTER_PUBLIC_IP=(`gcloud compute addresses describe USD{INFRA_ID}-cluster-public-ip --region=USD{REGION} --format json | jq -r .address`)",
"def GenerateConfig(context): resources = [{ 'name': context.properties['infra_id'] + '-cluster-public-ip', 'type': 'compute.v1.address', 'properties': { 'region': context.properties['region'] } }, { # Refer to docs/dev/kube-apiserver-health-check.md on how to correctly setup health check probe for kube-apiserver 'name': context.properties['infra_id'] + '-api-http-health-check', 'type': 'compute.v1.httpHealthCheck', 'properties': { 'port': 6080, 'requestPath': '/readyz' } }, { 'name': context.properties['infra_id'] + '-api-target-pool', 'type': 'compute.v1.targetPool', 'properties': { 'region': context.properties['region'], 'healthChecks': ['USD(ref.' + context.properties['infra_id'] + '-api-http-health-check.selfLink)'], 'instances': [] } }, { 'name': context.properties['infra_id'] + '-api-forwarding-rule', 'type': 'compute.v1.forwardingRule', 'properties': { 'region': context.properties['region'], 'IPAddress': 'USD(ref.' + context.properties['infra_id'] + '-cluster-public-ip.selfLink)', 'target': 'USD(ref.' + context.properties['infra_id'] + '-api-target-pool.selfLink)', 'portRange': '6443' } }] return {'resources': resources}",
"def GenerateConfig(context): backends = [] for zone in context.properties['zones']: backends.append({ 'group': 'USD(ref.' + context.properties['infra_id'] + '-master-' + zone + '-ig' + '.selfLink)' }) resources = [{ 'name': context.properties['infra_id'] + '-cluster-ip', 'type': 'compute.v1.address', 'properties': { 'addressType': 'INTERNAL', 'region': context.properties['region'], 'subnetwork': context.properties['control_subnet'] } }, { # Refer to docs/dev/kube-apiserver-health-check.md on how to correctly setup health check probe for kube-apiserver 'name': context.properties['infra_id'] + '-api-internal-health-check', 'type': 'compute.v1.healthCheck', 'properties': { 'httpsHealthCheck': { 'port': 6443, 'requestPath': '/readyz' }, 'type': \"HTTPS\" } }, { 'name': context.properties['infra_id'] + '-api-internal', 'type': 'compute.v1.regionBackendService', 'properties': { 'backends': backends, 'healthChecks': ['USD(ref.' + context.properties['infra_id'] + '-api-internal-health-check.selfLink)'], 'loadBalancingScheme': 'INTERNAL', 'region': context.properties['region'], 'protocol': 'TCP', 'timeoutSec': 120 } }, { 'name': context.properties['infra_id'] + '-api-internal-forwarding-rule', 'type': 'compute.v1.forwardingRule', 'properties': { 'backendService': 'USD(ref.' + context.properties['infra_id'] + '-api-internal.selfLink)', 'IPAddress': 'USD(ref.' + context.properties['infra_id'] + '-cluster-ip.selfLink)', 'loadBalancingScheme': 'INTERNAL', 'ports': ['6443','22623'], 'region': context.properties['region'], 'subnetwork': context.properties['control_subnet'] } }] for zone in context.properties['zones']: resources.append({ 'name': context.properties['infra_id'] + '-master-' + zone + '-ig', 'type': 'compute.v1.instanceGroup', 'properties': { 'namedPorts': [ { 'name': 'ignition', 'port': 22623 }, { 'name': 'https', 'port': 6443 } ], 'network': context.properties['cluster_network'], 'zone': zone } }) return {'resources': resources}",
"cat <<EOF >02_dns.yaml imports: - path: 02_dns.py resources: - name: cluster-dns type: 02_dns.py properties: infra_id: 'USD{INFRA_ID}' 1 cluster_domain: 'USD{CLUSTER_NAME}.USD{BASE_DOMAIN}' 2 cluster_network: 'USD{CLUSTER_NETWORK}' 3 EOF",
"gcloud deployment-manager deployments create USD{INFRA_ID}-dns --config 02_dns.yaml --project USD{HOST_PROJECT} --account USD{HOST_PROJECT_ACCOUNT}",
"if [ -f transaction.yaml ]; then rm transaction.yaml; fi gcloud dns record-sets transaction start --zone USD{INFRA_ID}-private-zone --project USD{HOST_PROJECT} --account USD{HOST_PROJECT_ACCOUNT} gcloud dns record-sets transaction add USD{CLUSTER_IP} --name api.USD{CLUSTER_NAME}.USD{BASE_DOMAIN}. --ttl 60 --type A --zone USD{INFRA_ID}-private-zone --project USD{HOST_PROJECT} --account USD{HOST_PROJECT_ACCOUNT} gcloud dns record-sets transaction add USD{CLUSTER_IP} --name api-int.USD{CLUSTER_NAME}.USD{BASE_DOMAIN}. --ttl 60 --type A --zone USD{INFRA_ID}-private-zone --project USD{HOST_PROJECT} --account USD{HOST_PROJECT_ACCOUNT} gcloud dns record-sets transaction execute --zone USD{INFRA_ID}-private-zone --project USD{HOST_PROJECT} --account USD{HOST_PROJECT_ACCOUNT}",
"if [ -f transaction.yaml ]; then rm transaction.yaml; fi gcloud --account=USD{HOST_PROJECT_ACCOUNT} --project=USD{HOST_PROJECT} dns record-sets transaction start --zone USD{BASE_DOMAIN_ZONE_NAME} gcloud --account=USD{HOST_PROJECT_ACCOUNT} --project=USD{HOST_PROJECT} dns record-sets transaction add USD{CLUSTER_PUBLIC_IP} --name api.USD{CLUSTER_NAME}.USD{BASE_DOMAIN}. --ttl 60 --type A --zone USD{BASE_DOMAIN_ZONE_NAME} gcloud --account=USD{HOST_PROJECT_ACCOUNT} --project=USD{HOST_PROJECT} dns record-sets transaction execute --zone USD{BASE_DOMAIN_ZONE_NAME}",
"def GenerateConfig(context): resources = [{ 'name': context.properties['infra_id'] + '-private-zone', 'type': 'dns.v1.managedZone', 'properties': { 'description': '', 'dnsName': context.properties['cluster_domain'] + '.', 'visibility': 'private', 'privateVisibilityConfig': { 'networks': [{ 'networkUrl': context.properties['cluster_network'] }] } } }] return {'resources': resources}",
"cat <<EOF >03_firewall.yaml imports: - path: 03_firewall.py resources: - name: cluster-firewall type: 03_firewall.py properties: allowed_external_cidr: '0.0.0.0/0' 1 infra_id: 'USD{INFRA_ID}' 2 cluster_network: 'USD{CLUSTER_NETWORK}' 3 network_cidr: 'USD{NETWORK_CIDR}' 4 EOF",
"gcloud deployment-manager deployments create USD{INFRA_ID}-firewall --config 03_firewall.yaml --project USD{HOST_PROJECT} --account USD{HOST_PROJECT_ACCOUNT}",
"def GenerateConfig(context): resources = [{ 'name': context.properties['infra_id'] + '-bootstrap-in-ssh', 'type': 'compute.v1.firewall', 'properties': { 'network': context.properties['cluster_network'], 'allowed': [{ 'IPProtocol': 'tcp', 'ports': ['22'] }], 'sourceRanges': [context.properties['allowed_external_cidr']], 'targetTags': [context.properties['infra_id'] + '-bootstrap'] } }, { 'name': context.properties['infra_id'] + '-api', 'type': 'compute.v1.firewall', 'properties': { 'network': context.properties['cluster_network'], 'allowed': [{ 'IPProtocol': 'tcp', 'ports': ['6443'] }], 'sourceRanges': [context.properties['allowed_external_cidr']], 'targetTags': [context.properties['infra_id'] + '-master'] } }, { 'name': context.properties['infra_id'] + '-health-checks', 'type': 'compute.v1.firewall', 'properties': { 'network': context.properties['cluster_network'], 'allowed': [{ 'IPProtocol': 'tcp', 'ports': ['6080', '6443', '22624'] }], 'sourceRanges': ['35.191.0.0/16', '130.211.0.0/22', '209.85.152.0/22', '209.85.204.0/22'], 'targetTags': [context.properties['infra_id'] + '-master'] } }, { 'name': context.properties['infra_id'] + '-etcd', 'type': 'compute.v1.firewall', 'properties': { 'network': context.properties['cluster_network'], 'allowed': [{ 'IPProtocol': 'tcp', 'ports': ['2379-2380'] }], 'sourceTags': [context.properties['infra_id'] + '-master'], 'targetTags': [context.properties['infra_id'] + '-master'] } }, { 'name': context.properties['infra_id'] + '-control-plane', 'type': 'compute.v1.firewall', 'properties': { 'network': context.properties['cluster_network'], 'allowed': [{ 'IPProtocol': 'tcp', 'ports': ['10257'] },{ 'IPProtocol': 'tcp', 'ports': ['10259'] },{ 'IPProtocol': 'tcp', 'ports': ['22623'] }], 'sourceTags': [ context.properties['infra_id'] + '-master', context.properties['infra_id'] + '-worker' ], 'targetTags': [context.properties['infra_id'] + '-master'] } }, { 'name': context.properties['infra_id'] + '-internal-network', 'type': 'compute.v1.firewall', 'properties': { 'network': context.properties['cluster_network'], 'allowed': [{ 'IPProtocol': 'icmp' },{ 'IPProtocol': 'tcp', 'ports': ['22'] }], 'sourceRanges': [context.properties['network_cidr']], 'targetTags': [ context.properties['infra_id'] + '-master', context.properties['infra_id'] + '-worker' ] } }, { 'name': context.properties['infra_id'] + '-internal-cluster', 'type': 'compute.v1.firewall', 'properties': { 'network': context.properties['cluster_network'], 'allowed': [{ 'IPProtocol': 'udp', 'ports': ['4789', '6081'] },{ 'IPProtocol': 'udp', 'ports': ['500', '4500'] },{ 'IPProtocol': 'esp', },{ 'IPProtocol': 'tcp', 'ports': ['9000-9999'] },{ 'IPProtocol': 'udp', 'ports': ['9000-9999'] },{ 'IPProtocol': 'tcp', 'ports': ['10250'] },{ 'IPProtocol': 'tcp', 'ports': ['30000-32767'] },{ 'IPProtocol': 'udp', 'ports': ['30000-32767'] }], 'sourceTags': [ context.properties['infra_id'] + '-master', context.properties['infra_id'] + '-worker' ], 'targetTags': [ context.properties['infra_id'] + '-master', context.properties['infra_id'] + '-worker' ] } }] return {'resources': resources}",
"cat <<EOF >03_iam.yaml imports: - path: 03_iam.py resources: - name: cluster-iam type: 03_iam.py properties: infra_id: 'USD{INFRA_ID}' 1 EOF",
"gcloud deployment-manager deployments create USD{INFRA_ID}-iam --config 03_iam.yaml",
"export MASTER_SERVICE_ACCOUNT=(`gcloud iam service-accounts list --filter \"email~^USD{INFRA_ID}-m@USD{PROJECT_NAME}.\" --format json | jq -r '.[0].email'`)",
"export WORKER_SERVICE_ACCOUNT=(`gcloud iam service-accounts list --filter \"email~^USD{INFRA_ID}-w@USD{PROJECT_NAME}.\" --format json | jq -r '.[0].email'`)",
"gcloud --account=USD{HOST_PROJECT_ACCOUNT} --project=USD{HOST_PROJECT} projects add-iam-policy-binding USD{HOST_PROJECT} --member \"serviceAccount:USD{MASTER_SERVICE_ACCOUNT}\" --role \"roles/compute.networkViewer\"",
"gcloud --account=USD{HOST_PROJECT_ACCOUNT} --project=USD{HOST_PROJECT} compute networks subnets add-iam-policy-binding \"USD{HOST_PROJECT_CONTROL_SUBNET}\" --member \"serviceAccount:USD{MASTER_SERVICE_ACCOUNT}\" --role \"roles/compute.networkUser\" --region USD{REGION}",
"gcloud --account=USD{HOST_PROJECT_ACCOUNT} --project=USD{HOST_PROJECT} compute networks subnets add-iam-policy-binding \"USD{HOST_PROJECT_CONTROL_SUBNET}\" --member \"serviceAccount:USD{WORKER_SERVICE_ACCOUNT}\" --role \"roles/compute.networkUser\" --region USD{REGION}",
"gcloud --account=USD{HOST_PROJECT_ACCOUNT} --project=USD{HOST_PROJECT} compute networks subnets add-iam-policy-binding \"USD{HOST_PROJECT_COMPUTE_SUBNET}\" --member \"serviceAccount:USD{MASTER_SERVICE_ACCOUNT}\" --role \"roles/compute.networkUser\" --region USD{REGION}",
"gcloud --account=USD{HOST_PROJECT_ACCOUNT} --project=USD{HOST_PROJECT} compute networks subnets add-iam-policy-binding \"USD{HOST_PROJECT_COMPUTE_SUBNET}\" --member \"serviceAccount:USD{WORKER_SERVICE_ACCOUNT}\" --role \"roles/compute.networkUser\" --region USD{REGION}",
"gcloud projects add-iam-policy-binding USD{PROJECT_NAME} --member \"serviceAccount:USD{MASTER_SERVICE_ACCOUNT}\" --role \"roles/compute.instanceAdmin\" gcloud projects add-iam-policy-binding USD{PROJECT_NAME} --member \"serviceAccount:USD{MASTER_SERVICE_ACCOUNT}\" --role \"roles/compute.networkAdmin\" gcloud projects add-iam-policy-binding USD{PROJECT_NAME} --member \"serviceAccount:USD{MASTER_SERVICE_ACCOUNT}\" --role \"roles/compute.securityAdmin\" gcloud projects add-iam-policy-binding USD{PROJECT_NAME} --member \"serviceAccount:USD{MASTER_SERVICE_ACCOUNT}\" --role \"roles/iam.serviceAccountUser\" gcloud projects add-iam-policy-binding USD{PROJECT_NAME} --member \"serviceAccount:USD{MASTER_SERVICE_ACCOUNT}\" --role \"roles/storage.admin\" gcloud projects add-iam-policy-binding USD{PROJECT_NAME} --member \"serviceAccount:USD{WORKER_SERVICE_ACCOUNT}\" --role \"roles/compute.viewer\" gcloud projects add-iam-policy-binding USD{PROJECT_NAME} --member \"serviceAccount:USD{WORKER_SERVICE_ACCOUNT}\" --role \"roles/storage.admin\"",
"gcloud iam service-accounts keys create service-account-key.json --iam-account=USD{MASTER_SERVICE_ACCOUNT}",
"def GenerateConfig(context): resources = [{ 'name': context.properties['infra_id'] + '-master-node-sa', 'type': 'iam.v1.serviceAccount', 'properties': { 'accountId': context.properties['infra_id'] + '-m', 'displayName': context.properties['infra_id'] + '-master-node' } }, { 'name': context.properties['infra_id'] + '-worker-node-sa', 'type': 'iam.v1.serviceAccount', 'properties': { 'accountId': context.properties['infra_id'] + '-w', 'displayName': context.properties['infra_id'] + '-worker-node' } }] return {'resources': resources}",
"gsutil mb gs://<bucket_name>",
"gsutil cp <downloaded_image_file_path>/rhcos-<version>-x86_64-gcp.x86_64.tar.gz gs://<bucket_name>",
"export IMAGE_SOURCE=gs://<bucket_name>/rhcos-<version>-x86_64-gcp.x86_64.tar.gz",
"gcloud compute images create \"USD{INFRA_ID}-rhcos-image\" --source-uri=\"USD{IMAGE_SOURCE}\"",
"export CLUSTER_IMAGE=(`gcloud compute images describe USD{INFRA_ID}-rhcos-image --format json | jq -r .selfLink`)",
"gsutil mb gs://USD{INFRA_ID}-bootstrap-ignition",
"gsutil cp <installation_directory>/bootstrap.ign gs://USD{INFRA_ID}-bootstrap-ignition/",
"export BOOTSTRAP_IGN=`gsutil signurl -d 1h service-account-key.json gs://USD{INFRA_ID}-bootstrap-ignition/bootstrap.ign | grep \"^gs:\" | awk '{print USD5}'`",
"cat <<EOF >04_bootstrap.yaml imports: - path: 04_bootstrap.py resources: - name: cluster-bootstrap type: 04_bootstrap.py properties: infra_id: 'USD{INFRA_ID}' 1 region: 'USD{REGION}' 2 zone: 'USD{ZONE_0}' 3 cluster_network: 'USD{CLUSTER_NETWORK}' 4 control_subnet: 'USD{CONTROL_SUBNET}' 5 image: 'USD{CLUSTER_IMAGE}' 6 machine_type: 'n1-standard-4' 7 root_volume_size: '128' 8 bootstrap_ign: 'USD{BOOTSTRAP_IGN}' 9 EOF",
"gcloud deployment-manager deployments create USD{INFRA_ID}-bootstrap --config 04_bootstrap.yaml",
"gcloud compute instance-groups unmanaged add-instances USD{INFRA_ID}-bootstrap-ig --zone=USD{ZONE_0} --instances=USD{INFRA_ID}-bootstrap",
"gcloud compute backend-services add-backend USD{INFRA_ID}-api-internal --region=USD{REGION} --instance-group=USD{INFRA_ID}-bootstrap-ig --instance-group-zone=USD{ZONE_0}",
"def GenerateConfig(context): resources = [{ 'name': context.properties['infra_id'] + '-bootstrap-public-ip', 'type': 'compute.v1.address', 'properties': { 'region': context.properties['region'] } }, { 'name': context.properties['infra_id'] + '-bootstrap', 'type': 'compute.v1.instance', 'properties': { 'disks': [{ 'autoDelete': True, 'boot': True, 'initializeParams': { 'diskSizeGb': context.properties['root_volume_size'], 'sourceImage': context.properties['image'] } }], 'machineType': 'zones/' + context.properties['zone'] + '/machineTypes/' + context.properties['machine_type'], 'metadata': { 'items': [{ 'key': 'user-data', 'value': '{\"ignition\":{\"config\":{\"replace\":{\"source\":\"' + context.properties['bootstrap_ign'] + '\"}},\"version\":\"3.2.0\"}}', }] }, 'networkInterfaces': [{ 'subnetwork': context.properties['control_subnet'], 'accessConfigs': [{ 'natIP': 'USD(ref.' + context.properties['infra_id'] + '-bootstrap-public-ip.address)' }] }], 'tags': { 'items': [ context.properties['infra_id'] + '-master', context.properties['infra_id'] + '-bootstrap' ] }, 'zone': context.properties['zone'] } }, { 'name': context.properties['infra_id'] + '-bootstrap-ig', 'type': 'compute.v1.instanceGroup', 'properties': { 'namedPorts': [ { 'name': 'ignition', 'port': 22623 }, { 'name': 'https', 'port': 6443 } ], 'network': context.properties['cluster_network'], 'zone': context.properties['zone'] } }] return {'resources': resources}",
"export MASTER_IGNITION=`cat <installation_directory>/master.ign`",
"cat <<EOF >05_control_plane.yaml imports: - path: 05_control_plane.py resources: - name: cluster-control-plane type: 05_control_plane.py properties: infra_id: 'USD{INFRA_ID}' 1 zones: 2 - 'USD{ZONE_0}' - 'USD{ZONE_1}' - 'USD{ZONE_2}' control_subnet: 'USD{CONTROL_SUBNET}' 3 image: 'USD{CLUSTER_IMAGE}' 4 machine_type: 'n1-standard-4' 5 root_volume_size: '128' service_account_email: 'USD{MASTER_SERVICE_ACCOUNT}' 6 ignition: 'USD{MASTER_IGNITION}' 7 EOF",
"gcloud deployment-manager deployments create USD{INFRA_ID}-control-plane --config 05_control_plane.yaml",
"gcloud compute instance-groups unmanaged add-instances USD{INFRA_ID}-master-USD{ZONE_0}-ig --zone=USD{ZONE_0} --instances=USD{INFRA_ID}-master-0",
"gcloud compute instance-groups unmanaged add-instances USD{INFRA_ID}-master-USD{ZONE_1}-ig --zone=USD{ZONE_1} --instances=USD{INFRA_ID}-master-1",
"gcloud compute instance-groups unmanaged add-instances USD{INFRA_ID}-master-USD{ZONE_2}-ig --zone=USD{ZONE_2} --instances=USD{INFRA_ID}-master-2",
"gcloud compute target-pools add-instances USD{INFRA_ID}-api-target-pool --instances-zone=\"USD{ZONE_0}\" --instances=USD{INFRA_ID}-master-0",
"gcloud compute target-pools add-instances USD{INFRA_ID}-api-target-pool --instances-zone=\"USD{ZONE_1}\" --instances=USD{INFRA_ID}-master-1",
"gcloud compute target-pools add-instances USD{INFRA_ID}-api-target-pool --instances-zone=\"USD{ZONE_2}\" --instances=USD{INFRA_ID}-master-2",
"def GenerateConfig(context): resources = [{ 'name': context.properties['infra_id'] + '-master-0', 'type': 'compute.v1.instance', 'properties': { 'disks': [{ 'autoDelete': True, 'boot': True, 'initializeParams': { 'diskSizeGb': context.properties['root_volume_size'], 'diskType': 'zones/' + context.properties['zones'][0] + '/diskTypes/pd-ssd', 'sourceImage': context.properties['image'] } }], 'machineType': 'zones/' + context.properties['zones'][0] + '/machineTypes/' + context.properties['machine_type'], 'metadata': { 'items': [{ 'key': 'user-data', 'value': context.properties['ignition'] }] }, 'networkInterfaces': [{ 'subnetwork': context.properties['control_subnet'] }], 'serviceAccounts': [{ 'email': context.properties['service_account_email'], 'scopes': ['https://www.googleapis.com/auth/cloud-platform'] }], 'tags': { 'items': [ context.properties['infra_id'] + '-master', ] }, 'zone': context.properties['zones'][0] } }, { 'name': context.properties['infra_id'] + '-master-1', 'type': 'compute.v1.instance', 'properties': { 'disks': [{ 'autoDelete': True, 'boot': True, 'initializeParams': { 'diskSizeGb': context.properties['root_volume_size'], 'diskType': 'zones/' + context.properties['zones'][1] + '/diskTypes/pd-ssd', 'sourceImage': context.properties['image'] } }], 'machineType': 'zones/' + context.properties['zones'][1] + '/machineTypes/' + context.properties['machine_type'], 'metadata': { 'items': [{ 'key': 'user-data', 'value': context.properties['ignition'] }] }, 'networkInterfaces': [{ 'subnetwork': context.properties['control_subnet'] }], 'serviceAccounts': [{ 'email': context.properties['service_account_email'], 'scopes': ['https://www.googleapis.com/auth/cloud-platform'] }], 'tags': { 'items': [ context.properties['infra_id'] + '-master', ] }, 'zone': context.properties['zones'][1] } }, { 'name': context.properties['infra_id'] + '-master-2', 'type': 'compute.v1.instance', 'properties': { 'disks': [{ 'autoDelete': True, 'boot': True, 'initializeParams': { 'diskSizeGb': context.properties['root_volume_size'], 'diskType': 'zones/' + context.properties['zones'][2] + '/diskTypes/pd-ssd', 'sourceImage': context.properties['image'] } }], 'machineType': 'zones/' + context.properties['zones'][2] + '/machineTypes/' + context.properties['machine_type'], 'metadata': { 'items': [{ 'key': 'user-data', 'value': context.properties['ignition'] }] }, 'networkInterfaces': [{ 'subnetwork': context.properties['control_subnet'] }], 'serviceAccounts': [{ 'email': context.properties['service_account_email'], 'scopes': ['https://www.googleapis.com/auth/cloud-platform'] }], 'tags': { 'items': [ context.properties['infra_id'] + '-master', ] }, 'zone': context.properties['zones'][2] } }] return {'resources': resources}",
"./openshift-install wait-for bootstrap-complete --dir <installation_directory> \\ 1 --log-level info 2",
"gcloud compute backend-services remove-backend USD{INFRA_ID}-api-internal --region=USD{REGION} --instance-group=USD{INFRA_ID}-bootstrap-ig --instance-group-zone=USD{ZONE_0}",
"gsutil rm gs://USD{INFRA_ID}-bootstrap-ignition/bootstrap.ign",
"gsutil rb gs://USD{INFRA_ID}-bootstrap-ignition",
"gcloud deployment-manager deployments delete USD{INFRA_ID}-bootstrap",
"export COMPUTE_SUBNET=(`gcloud compute networks subnets describe USD{HOST_PROJECT_COMPUTE_SUBNET} --region=USD{REGION} --project USD{HOST_PROJECT} --account USD{HOST_PROJECT_ACCOUNT} --format json | jq -r .selfLink`)",
"export WORKER_SERVICE_ACCOUNT=(`gcloud iam service-accounts list --filter \"email~^USD{INFRA_ID}-w@USD{PROJECT_NAME}.\" --format json | jq -r '.[0].email'`)",
"export WORKER_IGNITION=`cat <installation_directory>/worker.ign`",
"cat <<EOF >06_worker.yaml imports: - path: 06_worker.py resources: - name: 'worker-0' 1 type: 06_worker.py properties: infra_id: 'USD{INFRA_ID}' 2 zone: 'USD{ZONE_0}' 3 compute_subnet: 'USD{COMPUTE_SUBNET}' 4 image: 'USD{CLUSTER_IMAGE}' 5 machine_type: 'n1-standard-4' 6 root_volume_size: '128' service_account_email: 'USD{WORKER_SERVICE_ACCOUNT}' 7 ignition: 'USD{WORKER_IGNITION}' 8 - name: 'worker-1' type: 06_worker.py properties: infra_id: 'USD{INFRA_ID}' 9 zone: 'USD{ZONE_1}' 10 compute_subnet: 'USD{COMPUTE_SUBNET}' 11 image: 'USD{CLUSTER_IMAGE}' 12 machine_type: 'n1-standard-4' 13 root_volume_size: '128' service_account_email: 'USD{WORKER_SERVICE_ACCOUNT}' 14 ignition: 'USD{WORKER_IGNITION}' 15 EOF",
"gcloud deployment-manager deployments create USD{INFRA_ID}-worker --config 06_worker.yaml",
"def GenerateConfig(context): resources = [{ 'name': context.properties['infra_id'] + '-' + context.env['name'], 'type': 'compute.v1.instance', 'properties': { 'disks': [{ 'autoDelete': True, 'boot': True, 'initializeParams': { 'diskSizeGb': context.properties['root_volume_size'], 'sourceImage': context.properties['image'] } }], 'machineType': 'zones/' + context.properties['zone'] + '/machineTypes/' + context.properties['machine_type'], 'metadata': { 'items': [{ 'key': 'user-data', 'value': context.properties['ignition'] }] }, 'networkInterfaces': [{ 'subnetwork': context.properties['compute_subnet'] }], 'serviceAccounts': [{ 'email': context.properties['service_account_email'], 'scopes': ['https://www.googleapis.com/auth/cloud-platform'] }], 'tags': { 'items': [ context.properties['infra_id'] + '-worker', ] }, 'zone': context.properties['zone'] } }] return {'resources': resources}",
"tar xvf <file>",
"echo USDPATH",
"oc <command>",
"C:\\> path",
"C:\\> oc <command>",
"echo USDPATH",
"oc <command>",
"export KUBECONFIG=<installation_directory>/auth/kubeconfig 1",
"oc whoami",
"system:admin",
"oc get nodes",
"NAME STATUS ROLES AGE VERSION master-0 Ready master 63m v1.29.4 master-1 Ready master 63m v1.29.4 master-2 Ready master 64m v1.29.4",
"oc get csr",
"NAME AGE REQUESTOR CONDITION csr-8b2br 15m system:serviceaccount:openshift-machine-config-operator:node-bootstrapper Pending csr-8vnps 15m system:serviceaccount:openshift-machine-config-operator:node-bootstrapper Pending",
"oc adm certificate approve <csr_name> 1",
"oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{\"\\n\"}}{{end}}{{end}}' | xargs --no-run-if-empty oc adm certificate approve",
"oc get csr",
"NAME AGE REQUESTOR CONDITION csr-bfd72 5m26s system:node:ip-10-0-50-126.us-east-2.compute.internal Pending csr-c57lv 5m26s system:node:ip-10-0-95-157.us-east-2.compute.internal Pending",
"oc adm certificate approve <csr_name> 1",
"oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{\"\\n\"}}{{end}}{{end}}' | xargs oc adm certificate approve",
"oc get nodes",
"NAME STATUS ROLES AGE VERSION master-0 Ready master 73m v1.29.4 master-1 Ready master 73m v1.29.4 master-2 Ready master 74m v1.29.4 worker-0 Ready worker 11m v1.29.4 worker-1 Ready worker 11m v1.29.4",
"oc -n openshift-ingress get service router-default",
"NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE router-default LoadBalancer 172.30.18.154 35.233.157.184 80:32288/TCP,443:31215/TCP 98",
"export ROUTER_IP=`oc -n openshift-ingress get service router-default --no-headers | awk '{print USD4}'`",
"if [ -f transaction.yaml ]; then rm transaction.yaml; fi gcloud dns record-sets transaction start --zone USD{INFRA_ID}-private-zone --project USD{HOST_PROJECT} --account USD{HOST_PROJECT_ACCOUNT} gcloud dns record-sets transaction add USD{ROUTER_IP} --name \\*.apps.USD{CLUSTER_NAME}.USD{BASE_DOMAIN}. --ttl 300 --type A --zone USD{INFRA_ID}-private-zone --project USD{HOST_PROJECT} --account USD{HOST_PROJECT_ACCOUNT} gcloud dns record-sets transaction execute --zone USD{INFRA_ID}-private-zone --project USD{HOST_PROJECT} --account USD{HOST_PROJECT_ACCOUNT}",
"if [ -f transaction.yaml ]; then rm transaction.yaml; fi gcloud dns record-sets transaction start --zone USD{BASE_DOMAIN_ZONE_NAME} --project USD{HOST_PROJECT} --account USD{HOST_PROJECT_ACCOUNT} gcloud dns record-sets transaction add USD{ROUTER_IP} --name \\*.apps.USD{CLUSTER_NAME}.USD{BASE_DOMAIN}. --ttl 300 --type A --zone USD{BASE_DOMAIN_ZONE_NAME} --project USD{HOST_PROJECT} --account USD{HOST_PROJECT_ACCOUNT} gcloud dns record-sets transaction execute --zone USD{BASE_DOMAIN_ZONE_NAME} --project USD{HOST_PROJECT} --account USD{HOST_PROJECT_ACCOUNT}",
"oc get --all-namespaces -o jsonpath='{range .items[*]}{range .status.ingress[*]}{.host}{\"\\n\"}{end}{end}' routes",
"oauth-openshift.apps.your.cluster.domain.example.com console-openshift-console.apps.your.cluster.domain.example.com downloads-openshift-console.apps.your.cluster.domain.example.com alertmanager-main-openshift-monitoring.apps.your.cluster.domain.example.com prometheus-k8s-openshift-monitoring.apps.your.cluster.domain.example.com",
"oc get events -n openshift-ingress --field-selector=\"reason=LoadBalancerManualChange\"",
"Firewall change required by security admin: `gcloud compute firewall-rules create k8s-fw-a26e631036a3f46cba28f8df67266d55 --network example-network --description \"{\\\"kubernetes.io/service-name\\\":\\\"openshift-ingress/router-default\\\", \\\"kubernetes.io/service-ip\\\":\\\"35.237.236.234\\\"}\\\" --allow tcp:443,tcp:80 --source-ranges 0.0.0.0/0 --target-tags exampl-fqzq7-master,exampl-fqzq7-worker --project example-project`",
"gcloud compute firewall-rules create --allow='tcp:30000-32767,udp:30000-32767' --network=\"USD{CLUSTER_NETWORK}\" --source-ranges='130.211.0.0/22,35.191.0.0/16,209.85.152.0/22,209.85.204.0/22' --target-tags=\"USD{INFRA_ID}-master,USD{INFRA_ID}-worker\" USD{INFRA_ID}-ingress-hc --account=USD{HOST_PROJECT_ACCOUNT} --project=USD{HOST_PROJECT}",
"gcloud compute firewall-rules create --allow='tcp:80,tcp:443' --network=\"USD{CLUSTER_NETWORK}\" --source-ranges=\"0.0.0.0/0\" --target-tags=\"USD{INFRA_ID}-master,USD{INFRA_ID}-worker\" USD{INFRA_ID}-ingress --account=USD{HOST_PROJECT_ACCOUNT} --project=USD{HOST_PROJECT}",
"gcloud compute firewall-rules create --allow='tcp:80,tcp:443' --network=\"USD{CLUSTER_NETWORK}\" --source-ranges=USD{NETWORK_CIDR} --target-tags=\"USD{INFRA_ID}-master,USD{INFRA_ID}-worker\" USD{INFRA_ID}-ingress --account=USD{HOST_PROJECT_ACCOUNT} --project=USD{HOST_PROJECT}",
"./openshift-install --dir <installation_directory> wait-for install-complete 1",
"INFO Waiting up to 30m0s for the cluster to initialize",
"oc get clusterversion",
"NAME VERSION AVAILABLE PROGRESSING SINCE STATUS version False True 24m Working towards 4.5.4: 99% complete",
"oc get clusteroperators",
"NAME VERSION AVAILABLE PROGRESSING DEGRADED SINCE authentication 4.5.4 True False False 7m56s cloud-credential 4.5.4 True False False 31m cluster-autoscaler 4.5.4 True False False 16m console 4.5.4 True False False 10m csi-snapshot-controller 4.5.4 True False False 16m dns 4.5.4 True False False 22m etcd 4.5.4 False False False 25s image-registry 4.5.4 True False False 16m ingress 4.5.4 True False False 16m insights 4.5.4 True False False 17m kube-apiserver 4.5.4 True False False 19m kube-controller-manager 4.5.4 True False False 20m kube-scheduler 4.5.4 True False False 20m kube-storage-version-migrator 4.5.4 True False False 16m machine-api 4.5.4 True False False 22m machine-config 4.5.4 True False False 22m marketplace 4.5.4 True False False 16m monitoring 4.5.4 True False False 10m network 4.5.4 True False False 23m node-tuning 4.5.4 True False False 23m openshift-apiserver 4.5.4 True False False 17m openshift-controller-manager 4.5.4 True False False 15m openshift-samples 4.5.4 True False False 16m operator-lifecycle-manager 4.5.4 True False False 22m operator-lifecycle-manager-catalog 4.5.4 True False False 22m operator-lifecycle-manager-packageserver 4.5.4 True False False 18m service-ca 4.5.4 True False False 23m service-catalog-apiserver 4.5.4 True False False 23m service-catalog-controller-manager 4.5.4 True False False 23m storage 4.5.4 True False False 17m",
"oc get pods --all-namespaces",
"NAMESPACE NAME READY STATUS RESTARTS AGE kube-system etcd-member-ip-10-0-3-111.us-east-2.compute.internal 1/1 Running 0 35m kube-system etcd-member-ip-10-0-3-239.us-east-2.compute.internal 1/1 Running 0 37m kube-system etcd-member-ip-10-0-3-24.us-east-2.compute.internal 1/1 Running 0 35m openshift-apiserver-operator openshift-apiserver-operator-6d6674f4f4-h7t2t 1/1 Running 1 37m openshift-apiserver apiserver-fm48r 1/1 Running 0 30m openshift-apiserver apiserver-fxkvv 1/1 Running 0 29m openshift-apiserver apiserver-q85nm 1/1 Running 0 29m openshift-service-ca-operator openshift-service-ca-operator-66ff6dc6cd-9r257 1/1 Running 0 37m openshift-service-ca apiservice-cabundle-injector-695b6bcbc-cl5hm 1/1 Running 0 35m openshift-service-ca configmap-cabundle-injector-8498544d7-25qn6 1/1 Running 0 35m openshift-service-ca service-serving-cert-signer-6445fc9c6-wqdqn 1/1 Running 0 35m openshift-service-catalog-apiserver-operator openshift-service-catalog-apiserver-operator-549f44668b-b5q2w 1/1 Running 0 32m openshift-service-catalog-controller-manager-operator openshift-service-catalog-controller-manager-operator-b78cr2lnm 1/1 Running 0 31m",
"mkdir USDHOME/clusterconfig",
"openshift-install create manifests --dir USDHOME/clusterconfig",
"? SSH Public Key INFO Credentials loaded from the \"myprofile\" profile in file \"/home/myuser/.aws/credentials\" INFO Consuming Install Config from target directory INFO Manifests created in: USDHOME/clusterconfig/manifests and USDHOME/clusterconfig/openshift",
"ls USDHOME/clusterconfig/openshift/",
"99_kubeadmin-password-secret.yaml 99_openshift-cluster-api_master-machines-0.yaml 99_openshift-cluster-api_master-machines-1.yaml 99_openshift-cluster-api_master-machines-2.yaml",
"variant: openshift version: 4.16.0 metadata: labels: machineconfiguration.openshift.io/role: worker name: 98-var-partition storage: disks: - device: /dev/disk/by-id/<device_name> 1 partitions: - label: var start_mib: <partition_start_offset> 2 size_mib: <partition_size> 3 number: 5 filesystems: - device: /dev/disk/by-partlabel/var path: /var format: xfs mount_options: [defaults, prjquota] 4 with_mount_unit: true",
"butane USDHOME/clusterconfig/98-var-partition.bu -o USDHOME/clusterconfig/openshift/98-var-partition.yaml",
"openshift-install create ignition-configs --dir USDHOME/clusterconfig ls USDHOME/clusterconfig/ auth bootstrap.ign master.ign metadata.json worker.ign",
"./openshift-install create install-config --dir <installation_directory> 1",
"pullSecret: '{\"auths\":{\"<mirror_host_name>:5000\": {\"auth\": \"<credentials>\",\"email\": \"[email protected]\"}}}'",
"additionalTrustBundle: | -----BEGIN CERTIFICATE----- ZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZ -----END CERTIFICATE-----",
"network: <existing_vpc> controlPlaneSubnet: <control_plane_subnet> computeSubnet: <compute_subnet>",
"imageContentSources: - mirrors: - <mirror_host_name>:5000/<repo_name>/release source: quay.io/openshift-release-dev/ocp-release - mirrors: - <mirror_host_name>:5000/<repo_name>/release source: registry.redhat.io/ocp/release",
"publish: Internal",
"controlPlane: platform: gcp: secureBoot: Enabled",
"compute: - platform: gcp: secureBoot: Enabled",
"platform: gcp: defaultMachinePlatform: secureBoot: Enabled",
"controlPlane: platform: gcp: confidentialCompute: Enabled 1 type: n2d-standard-8 2 onHostMaintenance: Terminate 3",
"compute: - platform: gcp: confidentialCompute: Enabled type: n2d-standard-8 onHostMaintenance: Terminate",
"platform: gcp: defaultMachinePlatform: confidentialCompute: Enabled type: n2d-standard-8 onHostMaintenance: Terminate",
"apiVersion: v1 baseDomain: my.domain.com proxy: httpProxy: http://<username>:<pswd>@<ip>:<port> 1 httpsProxy: https://<username>:<pswd>@<ip>:<port> 2 noProxy: example.com 3 additionalTrustBundle: | 4 -----BEGIN CERTIFICATE----- <MY_TRUSTED_CA_CERT> -----END CERTIFICATE----- additionalTrustBundlePolicy: <policy_to_add_additionalTrustBundle> 5",
"./openshift-install wait-for install-complete --log-level debug",
"./openshift-install create manifests --dir <installation_directory> 1",
"rm -f <installation_directory>/openshift/99_openshift-cluster-api_master-machines-*.yaml",
"rm -f <installation_directory>/openshift/99_openshift-machine-api_master-control-plane-machine-set.yaml",
"rm -f <installation_directory>/openshift/99_openshift-cluster-api_worker-machineset-*.yaml",
"apiVersion: config.openshift.io/v1 kind: DNS metadata: creationTimestamp: null name: cluster spec: baseDomain: example.openshift.com privateZone: 1 id: mycluster-100419-private-zone publicZone: 2 id: example.openshift.com status: {}",
"./openshift-install create ignition-configs --dir <installation_directory> 1",
". ├── auth │ ├── kubeadmin-password │ └── kubeconfig ├── bootstrap.ign ├── master.ign ├── metadata.json └── worker.ign",
"jq -r .infraID <installation_directory>/metadata.json 1",
"openshift-vw9j6 1",
"export BASE_DOMAIN='<base_domain>' export BASE_DOMAIN_ZONE_NAME='<base_domain_zone_name>' export NETWORK_CIDR='10.0.0.0/16' export MASTER_SUBNET_CIDR='10.0.0.0/17' export WORKER_SUBNET_CIDR='10.0.128.0/17' export KUBECONFIG=<installation_directory>/auth/kubeconfig 1 export CLUSTER_NAME=`jq -r .clusterName <installation_directory>/metadata.json` export INFRA_ID=`jq -r .infraID <installation_directory>/metadata.json` export PROJECT_NAME=`jq -r .gcp.projectID <installation_directory>/metadata.json` export REGION=`jq -r .gcp.region <installation_directory>/metadata.json`",
"cat <<EOF >01_vpc.yaml imports: - path: 01_vpc.py resources: - name: cluster-vpc type: 01_vpc.py properties: infra_id: 'USD{INFRA_ID}' 1 region: 'USD{REGION}' 2 master_subnet_cidr: 'USD{MASTER_SUBNET_CIDR}' 3 worker_subnet_cidr: 'USD{WORKER_SUBNET_CIDR}' 4 EOF",
"gcloud deployment-manager deployments create USD{INFRA_ID}-vpc --config 01_vpc.yaml",
"def GenerateConfig(context): resources = [{ 'name': context.properties['infra_id'] + '-network', 'type': 'compute.v1.network', 'properties': { 'region': context.properties['region'], 'autoCreateSubnetworks': False } }, { 'name': context.properties['infra_id'] + '-master-subnet', 'type': 'compute.v1.subnetwork', 'properties': { 'region': context.properties['region'], 'network': 'USD(ref.' + context.properties['infra_id'] + '-network.selfLink)', 'ipCidrRange': context.properties['master_subnet_cidr'] } }, { 'name': context.properties['infra_id'] + '-worker-subnet', 'type': 'compute.v1.subnetwork', 'properties': { 'region': context.properties['region'], 'network': 'USD(ref.' + context.properties['infra_id'] + '-network.selfLink)', 'ipCidrRange': context.properties['worker_subnet_cidr'] } }, { 'name': context.properties['infra_id'] + '-router', 'type': 'compute.v1.router', 'properties': { 'region': context.properties['region'], 'network': 'USD(ref.' + context.properties['infra_id'] + '-network.selfLink)', 'nats': [{ 'name': context.properties['infra_id'] + '-nat-master', 'natIpAllocateOption': 'AUTO_ONLY', 'minPortsPerVm': 7168, 'sourceSubnetworkIpRangesToNat': 'LIST_OF_SUBNETWORKS', 'subnetworks': [{ 'name': 'USD(ref.' + context.properties['infra_id'] + '-master-subnet.selfLink)', 'sourceIpRangesToNat': ['ALL_IP_RANGES'] }] }, { 'name': context.properties['infra_id'] + '-nat-worker', 'natIpAllocateOption': 'AUTO_ONLY', 'minPortsPerVm': 512, 'sourceSubnetworkIpRangesToNat': 'LIST_OF_SUBNETWORKS', 'subnetworks': [{ 'name': 'USD(ref.' + context.properties['infra_id'] + '-worker-subnet.selfLink)', 'sourceIpRangesToNat': ['ALL_IP_RANGES'] }] }] } }] return {'resources': resources}",
"export CLUSTER_NETWORK=(`gcloud compute networks describe USD{INFRA_ID}-network --format json | jq -r .selfLink`)",
"export CONTROL_SUBNET=(`gcloud compute networks subnets describe USD{INFRA_ID}-master-subnet --region=USD{REGION} --format json | jq -r .selfLink`)",
"export ZONE_0=(`gcloud compute regions describe USD{REGION} --format=json | jq -r .zones[0] | cut -d \"/\" -f9`)",
"export ZONE_1=(`gcloud compute regions describe USD{REGION} --format=json | jq -r .zones[1] | cut -d \"/\" -f9`)",
"export ZONE_2=(`gcloud compute regions describe USD{REGION} --format=json | jq -r .zones[2] | cut -d \"/\" -f9`)",
"cat <<EOF >02_infra.yaml imports: - path: 02_lb_ext.py - path: 02_lb_int.py 1 resources: - name: cluster-lb-ext 2 type: 02_lb_ext.py properties: infra_id: 'USD{INFRA_ID}' 3 region: 'USD{REGION}' 4 - name: cluster-lb-int type: 02_lb_int.py properties: cluster_network: 'USD{CLUSTER_NETWORK}' control_subnet: 'USD{CONTROL_SUBNET}' 5 infra_id: 'USD{INFRA_ID}' region: 'USD{REGION}' zones: 6 - 'USD{ZONE_0}' - 'USD{ZONE_1}' - 'USD{ZONE_2}' EOF",
"gcloud deployment-manager deployments create USD{INFRA_ID}-infra --config 02_infra.yaml",
"export CLUSTER_IP=(`gcloud compute addresses describe USD{INFRA_ID}-cluster-ip --region=USD{REGION} --format json | jq -r .address`)",
"export CLUSTER_PUBLIC_IP=(`gcloud compute addresses describe USD{INFRA_ID}-cluster-public-ip --region=USD{REGION} --format json | jq -r .address`)",
"def GenerateConfig(context): resources = [{ 'name': context.properties['infra_id'] + '-cluster-public-ip', 'type': 'compute.v1.address', 'properties': { 'region': context.properties['region'] } }, { # Refer to docs/dev/kube-apiserver-health-check.md on how to correctly setup health check probe for kube-apiserver 'name': context.properties['infra_id'] + '-api-http-health-check', 'type': 'compute.v1.httpHealthCheck', 'properties': { 'port': 6080, 'requestPath': '/readyz' } }, { 'name': context.properties['infra_id'] + '-api-target-pool', 'type': 'compute.v1.targetPool', 'properties': { 'region': context.properties['region'], 'healthChecks': ['USD(ref.' + context.properties['infra_id'] + '-api-http-health-check.selfLink)'], 'instances': [] } }, { 'name': context.properties['infra_id'] + '-api-forwarding-rule', 'type': 'compute.v1.forwardingRule', 'properties': { 'region': context.properties['region'], 'IPAddress': 'USD(ref.' + context.properties['infra_id'] + '-cluster-public-ip.selfLink)', 'target': 'USD(ref.' + context.properties['infra_id'] + '-api-target-pool.selfLink)', 'portRange': '6443' } }] return {'resources': resources}",
"def GenerateConfig(context): backends = [] for zone in context.properties['zones']: backends.append({ 'group': 'USD(ref.' + context.properties['infra_id'] + '-master-' + zone + '-ig' + '.selfLink)' }) resources = [{ 'name': context.properties['infra_id'] + '-cluster-ip', 'type': 'compute.v1.address', 'properties': { 'addressType': 'INTERNAL', 'region': context.properties['region'], 'subnetwork': context.properties['control_subnet'] } }, { # Refer to docs/dev/kube-apiserver-health-check.md on how to correctly setup health check probe for kube-apiserver 'name': context.properties['infra_id'] + '-api-internal-health-check', 'type': 'compute.v1.healthCheck', 'properties': { 'httpsHealthCheck': { 'port': 6443, 'requestPath': '/readyz' }, 'type': \"HTTPS\" } }, { 'name': context.properties['infra_id'] + '-api-internal', 'type': 'compute.v1.regionBackendService', 'properties': { 'backends': backends, 'healthChecks': ['USD(ref.' + context.properties['infra_id'] + '-api-internal-health-check.selfLink)'], 'loadBalancingScheme': 'INTERNAL', 'region': context.properties['region'], 'protocol': 'TCP', 'timeoutSec': 120 } }, { 'name': context.properties['infra_id'] + '-api-internal-forwarding-rule', 'type': 'compute.v1.forwardingRule', 'properties': { 'backendService': 'USD(ref.' + context.properties['infra_id'] + '-api-internal.selfLink)', 'IPAddress': 'USD(ref.' + context.properties['infra_id'] + '-cluster-ip.selfLink)', 'loadBalancingScheme': 'INTERNAL', 'ports': ['6443','22623'], 'region': context.properties['region'], 'subnetwork': context.properties['control_subnet'] } }] for zone in context.properties['zones']: resources.append({ 'name': context.properties['infra_id'] + '-master-' + zone + '-ig', 'type': 'compute.v1.instanceGroup', 'properties': { 'namedPorts': [ { 'name': 'ignition', 'port': 22623 }, { 'name': 'https', 'port': 6443 } ], 'network': context.properties['cluster_network'], 'zone': zone } }) return {'resources': resources}",
"cat <<EOF >02_dns.yaml imports: - path: 02_dns.py resources: - name: cluster-dns type: 02_dns.py properties: infra_id: 'USD{INFRA_ID}' 1 cluster_domain: 'USD{CLUSTER_NAME}.USD{BASE_DOMAIN}' 2 cluster_network: 'USD{CLUSTER_NETWORK}' 3 EOF",
"gcloud deployment-manager deployments create USD{INFRA_ID}-dns --config 02_dns.yaml",
"if [ -f transaction.yaml ]; then rm transaction.yaml; fi gcloud dns record-sets transaction start --zone USD{INFRA_ID}-private-zone gcloud dns record-sets transaction add USD{CLUSTER_IP} --name api.USD{CLUSTER_NAME}.USD{BASE_DOMAIN}. --ttl 60 --type A --zone USD{INFRA_ID}-private-zone gcloud dns record-sets transaction add USD{CLUSTER_IP} --name api-int.USD{CLUSTER_NAME}.USD{BASE_DOMAIN}. --ttl 60 --type A --zone USD{INFRA_ID}-private-zone gcloud dns record-sets transaction execute --zone USD{INFRA_ID}-private-zone",
"if [ -f transaction.yaml ]; then rm transaction.yaml; fi gcloud dns record-sets transaction start --zone USD{BASE_DOMAIN_ZONE_NAME} gcloud dns record-sets transaction add USD{CLUSTER_PUBLIC_IP} --name api.USD{CLUSTER_NAME}.USD{BASE_DOMAIN}. --ttl 60 --type A --zone USD{BASE_DOMAIN_ZONE_NAME} gcloud dns record-sets transaction execute --zone USD{BASE_DOMAIN_ZONE_NAME}",
"def GenerateConfig(context): resources = [{ 'name': context.properties['infra_id'] + '-private-zone', 'type': 'dns.v1.managedZone', 'properties': { 'description': '', 'dnsName': context.properties['cluster_domain'] + '.', 'visibility': 'private', 'privateVisibilityConfig': { 'networks': [{ 'networkUrl': context.properties['cluster_network'] }] } } }] return {'resources': resources}",
"cat <<EOF >03_firewall.yaml imports: - path: 03_firewall.py resources: - name: cluster-firewall type: 03_firewall.py properties: allowed_external_cidr: '0.0.0.0/0' 1 infra_id: 'USD{INFRA_ID}' 2 cluster_network: 'USD{CLUSTER_NETWORK}' 3 network_cidr: 'USD{NETWORK_CIDR}' 4 EOF",
"gcloud deployment-manager deployments create USD{INFRA_ID}-firewall --config 03_firewall.yaml",
"def GenerateConfig(context): resources = [{ 'name': context.properties['infra_id'] + '-bootstrap-in-ssh', 'type': 'compute.v1.firewall', 'properties': { 'network': context.properties['cluster_network'], 'allowed': [{ 'IPProtocol': 'tcp', 'ports': ['22'] }], 'sourceRanges': [context.properties['allowed_external_cidr']], 'targetTags': [context.properties['infra_id'] + '-bootstrap'] } }, { 'name': context.properties['infra_id'] + '-api', 'type': 'compute.v1.firewall', 'properties': { 'network': context.properties['cluster_network'], 'allowed': [{ 'IPProtocol': 'tcp', 'ports': ['6443'] }], 'sourceRanges': [context.properties['allowed_external_cidr']], 'targetTags': [context.properties['infra_id'] + '-master'] } }, { 'name': context.properties['infra_id'] + '-health-checks', 'type': 'compute.v1.firewall', 'properties': { 'network': context.properties['cluster_network'], 'allowed': [{ 'IPProtocol': 'tcp', 'ports': ['6080', '6443', '22624'] }], 'sourceRanges': ['35.191.0.0/16', '130.211.0.0/22', '209.85.152.0/22', '209.85.204.0/22'], 'targetTags': [context.properties['infra_id'] + '-master'] } }, { 'name': context.properties['infra_id'] + '-etcd', 'type': 'compute.v1.firewall', 'properties': { 'network': context.properties['cluster_network'], 'allowed': [{ 'IPProtocol': 'tcp', 'ports': ['2379-2380'] }], 'sourceTags': [context.properties['infra_id'] + '-master'], 'targetTags': [context.properties['infra_id'] + '-master'] } }, { 'name': context.properties['infra_id'] + '-control-plane', 'type': 'compute.v1.firewall', 'properties': { 'network': context.properties['cluster_network'], 'allowed': [{ 'IPProtocol': 'tcp', 'ports': ['10257'] },{ 'IPProtocol': 'tcp', 'ports': ['10259'] },{ 'IPProtocol': 'tcp', 'ports': ['22623'] }], 'sourceTags': [ context.properties['infra_id'] + '-master', context.properties['infra_id'] + '-worker' ], 'targetTags': [context.properties['infra_id'] + '-master'] } }, { 'name': context.properties['infra_id'] + '-internal-network', 'type': 'compute.v1.firewall', 'properties': { 'network': context.properties['cluster_network'], 'allowed': [{ 'IPProtocol': 'icmp' },{ 'IPProtocol': 'tcp', 'ports': ['22'] }], 'sourceRanges': [context.properties['network_cidr']], 'targetTags': [ context.properties['infra_id'] + '-master', context.properties['infra_id'] + '-worker' ] } }, { 'name': context.properties['infra_id'] + '-internal-cluster', 'type': 'compute.v1.firewall', 'properties': { 'network': context.properties['cluster_network'], 'allowed': [{ 'IPProtocol': 'udp', 'ports': ['4789', '6081'] },{ 'IPProtocol': 'udp', 'ports': ['500', '4500'] },{ 'IPProtocol': 'esp', },{ 'IPProtocol': 'tcp', 'ports': ['9000-9999'] },{ 'IPProtocol': 'udp', 'ports': ['9000-9999'] },{ 'IPProtocol': 'tcp', 'ports': ['10250'] },{ 'IPProtocol': 'tcp', 'ports': ['30000-32767'] },{ 'IPProtocol': 'udp', 'ports': ['30000-32767'] }], 'sourceTags': [ context.properties['infra_id'] + '-master', context.properties['infra_id'] + '-worker' ], 'targetTags': [ context.properties['infra_id'] + '-master', context.properties['infra_id'] + '-worker' ] } }] return {'resources': resources}",
"cat <<EOF >03_iam.yaml imports: - path: 03_iam.py resources: - name: cluster-iam type: 03_iam.py properties: infra_id: 'USD{INFRA_ID}' 1 EOF",
"gcloud deployment-manager deployments create USD{INFRA_ID}-iam --config 03_iam.yaml",
"export MASTER_SERVICE_ACCOUNT=(`gcloud iam service-accounts list --filter \"email~^USD{INFRA_ID}-m@USD{PROJECT_NAME}.\" --format json | jq -r '.[0].email'`)",
"export WORKER_SERVICE_ACCOUNT=(`gcloud iam service-accounts list --filter \"email~^USD{INFRA_ID}-w@USD{PROJECT_NAME}.\" --format json | jq -r '.[0].email'`)",
"export COMPUTE_SUBNET=(`gcloud compute networks subnets describe USD{INFRA_ID}-worker-subnet --region=USD{REGION} --format json | jq -r .selfLink`)",
"gcloud projects add-iam-policy-binding USD{PROJECT_NAME} --member \"serviceAccount:USD{MASTER_SERVICE_ACCOUNT}\" --role \"roles/compute.instanceAdmin\" gcloud projects add-iam-policy-binding USD{PROJECT_NAME} --member \"serviceAccount:USD{MASTER_SERVICE_ACCOUNT}\" --role \"roles/compute.networkAdmin\" gcloud projects add-iam-policy-binding USD{PROJECT_NAME} --member \"serviceAccount:USD{MASTER_SERVICE_ACCOUNT}\" --role \"roles/compute.securityAdmin\" gcloud projects add-iam-policy-binding USD{PROJECT_NAME} --member \"serviceAccount:USD{MASTER_SERVICE_ACCOUNT}\" --role \"roles/iam.serviceAccountUser\" gcloud projects add-iam-policy-binding USD{PROJECT_NAME} --member \"serviceAccount:USD{MASTER_SERVICE_ACCOUNT}\" --role \"roles/storage.admin\" gcloud projects add-iam-policy-binding USD{PROJECT_NAME} --member \"serviceAccount:USD{WORKER_SERVICE_ACCOUNT}\" --role \"roles/compute.viewer\" gcloud projects add-iam-policy-binding USD{PROJECT_NAME} --member \"serviceAccount:USD{WORKER_SERVICE_ACCOUNT}\" --role \"roles/storage.admin\"",
"gcloud iam service-accounts keys create service-account-key.json --iam-account=USD{MASTER_SERVICE_ACCOUNT}",
"def GenerateConfig(context): resources = [{ 'name': context.properties['infra_id'] + '-master-node-sa', 'type': 'iam.v1.serviceAccount', 'properties': { 'accountId': context.properties['infra_id'] + '-m', 'displayName': context.properties['infra_id'] + '-master-node' } }, { 'name': context.properties['infra_id'] + '-worker-node-sa', 'type': 'iam.v1.serviceAccount', 'properties': { 'accountId': context.properties['infra_id'] + '-w', 'displayName': context.properties['infra_id'] + '-worker-node' } }] return {'resources': resources}",
"gsutil mb gs://<bucket_name>",
"gsutil cp <downloaded_image_file_path>/rhcos-<version>-x86_64-gcp.x86_64.tar.gz gs://<bucket_name>",
"export IMAGE_SOURCE=gs://<bucket_name>/rhcos-<version>-x86_64-gcp.x86_64.tar.gz",
"gcloud compute images create \"USD{INFRA_ID}-rhcos-image\" --source-uri=\"USD{IMAGE_SOURCE}\"",
"export CLUSTER_IMAGE=(`gcloud compute images describe USD{INFRA_ID}-rhcos-image --format json | jq -r .selfLink`)",
"gsutil mb gs://USD{INFRA_ID}-bootstrap-ignition",
"gsutil cp <installation_directory>/bootstrap.ign gs://USD{INFRA_ID}-bootstrap-ignition/",
"export BOOTSTRAP_IGN=`gsutil signurl -d 1h service-account-key.json gs://USD{INFRA_ID}-bootstrap-ignition/bootstrap.ign | grep \"^gs:\" | awk '{print USD5}'`",
"cat <<EOF >04_bootstrap.yaml imports: - path: 04_bootstrap.py resources: - name: cluster-bootstrap type: 04_bootstrap.py properties: infra_id: 'USD{INFRA_ID}' 1 region: 'USD{REGION}' 2 zone: 'USD{ZONE_0}' 3 cluster_network: 'USD{CLUSTER_NETWORK}' 4 control_subnet: 'USD{CONTROL_SUBNET}' 5 image: 'USD{CLUSTER_IMAGE}' 6 machine_type: 'n1-standard-4' 7 root_volume_size: '128' 8 bootstrap_ign: 'USD{BOOTSTRAP_IGN}' 9 EOF",
"gcloud deployment-manager deployments create USD{INFRA_ID}-bootstrap --config 04_bootstrap.yaml",
"gcloud compute instance-groups unmanaged add-instances USD{INFRA_ID}-bootstrap-ig --zone=USD{ZONE_0} --instances=USD{INFRA_ID}-bootstrap",
"gcloud compute backend-services add-backend USD{INFRA_ID}-api-internal --region=USD{REGION} --instance-group=USD{INFRA_ID}-bootstrap-ig --instance-group-zone=USD{ZONE_0}",
"def GenerateConfig(context): resources = [{ 'name': context.properties['infra_id'] + '-bootstrap-public-ip', 'type': 'compute.v1.address', 'properties': { 'region': context.properties['region'] } }, { 'name': context.properties['infra_id'] + '-bootstrap', 'type': 'compute.v1.instance', 'properties': { 'disks': [{ 'autoDelete': True, 'boot': True, 'initializeParams': { 'diskSizeGb': context.properties['root_volume_size'], 'sourceImage': context.properties['image'] } }], 'machineType': 'zones/' + context.properties['zone'] + '/machineTypes/' + context.properties['machine_type'], 'metadata': { 'items': [{ 'key': 'user-data', 'value': '{\"ignition\":{\"config\":{\"replace\":{\"source\":\"' + context.properties['bootstrap_ign'] + '\"}},\"version\":\"3.2.0\"}}', }] }, 'networkInterfaces': [{ 'subnetwork': context.properties['control_subnet'], 'accessConfigs': [{ 'natIP': 'USD(ref.' + context.properties['infra_id'] + '-bootstrap-public-ip.address)' }] }], 'tags': { 'items': [ context.properties['infra_id'] + '-master', context.properties['infra_id'] + '-bootstrap' ] }, 'zone': context.properties['zone'] } }, { 'name': context.properties['infra_id'] + '-bootstrap-ig', 'type': 'compute.v1.instanceGroup', 'properties': { 'namedPorts': [ { 'name': 'ignition', 'port': 22623 }, { 'name': 'https', 'port': 6443 } ], 'network': context.properties['cluster_network'], 'zone': context.properties['zone'] } }] return {'resources': resources}",
"export MASTER_IGNITION=`cat <installation_directory>/master.ign`",
"cat <<EOF >05_control_plane.yaml imports: - path: 05_control_plane.py resources: - name: cluster-control-plane type: 05_control_plane.py properties: infra_id: 'USD{INFRA_ID}' 1 zones: 2 - 'USD{ZONE_0}' - 'USD{ZONE_1}' - 'USD{ZONE_2}' control_subnet: 'USD{CONTROL_SUBNET}' 3 image: 'USD{CLUSTER_IMAGE}' 4 machine_type: 'n1-standard-4' 5 root_volume_size: '128' service_account_email: 'USD{MASTER_SERVICE_ACCOUNT}' 6 ignition: 'USD{MASTER_IGNITION}' 7 EOF",
"gcloud deployment-manager deployments create USD{INFRA_ID}-control-plane --config 05_control_plane.yaml",
"gcloud compute instance-groups unmanaged add-instances USD{INFRA_ID}-master-USD{ZONE_0}-ig --zone=USD{ZONE_0} --instances=USD{INFRA_ID}-master-0",
"gcloud compute instance-groups unmanaged add-instances USD{INFRA_ID}-master-USD{ZONE_1}-ig --zone=USD{ZONE_1} --instances=USD{INFRA_ID}-master-1",
"gcloud compute instance-groups unmanaged add-instances USD{INFRA_ID}-master-USD{ZONE_2}-ig --zone=USD{ZONE_2} --instances=USD{INFRA_ID}-master-2",
"gcloud compute target-pools add-instances USD{INFRA_ID}-api-target-pool --instances-zone=\"USD{ZONE_0}\" --instances=USD{INFRA_ID}-master-0",
"gcloud compute target-pools add-instances USD{INFRA_ID}-api-target-pool --instances-zone=\"USD{ZONE_1}\" --instances=USD{INFRA_ID}-master-1",
"gcloud compute target-pools add-instances USD{INFRA_ID}-api-target-pool --instances-zone=\"USD{ZONE_2}\" --instances=USD{INFRA_ID}-master-2",
"def GenerateConfig(context): resources = [{ 'name': context.properties['infra_id'] + '-master-0', 'type': 'compute.v1.instance', 'properties': { 'disks': [{ 'autoDelete': True, 'boot': True, 'initializeParams': { 'diskSizeGb': context.properties['root_volume_size'], 'diskType': 'zones/' + context.properties['zones'][0] + '/diskTypes/pd-ssd', 'sourceImage': context.properties['image'] } }], 'machineType': 'zones/' + context.properties['zones'][0] + '/machineTypes/' + context.properties['machine_type'], 'metadata': { 'items': [{ 'key': 'user-data', 'value': context.properties['ignition'] }] }, 'networkInterfaces': [{ 'subnetwork': context.properties['control_subnet'] }], 'serviceAccounts': [{ 'email': context.properties['service_account_email'], 'scopes': ['https://www.googleapis.com/auth/cloud-platform'] }], 'tags': { 'items': [ context.properties['infra_id'] + '-master', ] }, 'zone': context.properties['zones'][0] } }, { 'name': context.properties['infra_id'] + '-master-1', 'type': 'compute.v1.instance', 'properties': { 'disks': [{ 'autoDelete': True, 'boot': True, 'initializeParams': { 'diskSizeGb': context.properties['root_volume_size'], 'diskType': 'zones/' + context.properties['zones'][1] + '/diskTypes/pd-ssd', 'sourceImage': context.properties['image'] } }], 'machineType': 'zones/' + context.properties['zones'][1] + '/machineTypes/' + context.properties['machine_type'], 'metadata': { 'items': [{ 'key': 'user-data', 'value': context.properties['ignition'] }] }, 'networkInterfaces': [{ 'subnetwork': context.properties['control_subnet'] }], 'serviceAccounts': [{ 'email': context.properties['service_account_email'], 'scopes': ['https://www.googleapis.com/auth/cloud-platform'] }], 'tags': { 'items': [ context.properties['infra_id'] + '-master', ] }, 'zone': context.properties['zones'][1] } }, { 'name': context.properties['infra_id'] + '-master-2', 'type': 'compute.v1.instance', 'properties': { 'disks': [{ 'autoDelete': True, 'boot': True, 'initializeParams': { 'diskSizeGb': context.properties['root_volume_size'], 'diskType': 'zones/' + context.properties['zones'][2] + '/diskTypes/pd-ssd', 'sourceImage': context.properties['image'] } }], 'machineType': 'zones/' + context.properties['zones'][2] + '/machineTypes/' + context.properties['machine_type'], 'metadata': { 'items': [{ 'key': 'user-data', 'value': context.properties['ignition'] }] }, 'networkInterfaces': [{ 'subnetwork': context.properties['control_subnet'] }], 'serviceAccounts': [{ 'email': context.properties['service_account_email'], 'scopes': ['https://www.googleapis.com/auth/cloud-platform'] }], 'tags': { 'items': [ context.properties['infra_id'] + '-master', ] }, 'zone': context.properties['zones'][2] } }] return {'resources': resources}",
"./openshift-install wait-for bootstrap-complete --dir <installation_directory> \\ 1 --log-level info 2",
"gcloud compute backend-services remove-backend USD{INFRA_ID}-api-internal --region=USD{REGION} --instance-group=USD{INFRA_ID}-bootstrap-ig --instance-group-zone=USD{ZONE_0}",
"gsutil rm gs://USD{INFRA_ID}-bootstrap-ignition/bootstrap.ign",
"gsutil rb gs://USD{INFRA_ID}-bootstrap-ignition",
"gcloud deployment-manager deployments delete USD{INFRA_ID}-bootstrap",
"export COMPUTE_SUBNET=(`gcloud compute networks subnets describe USD{INFRA_ID}-worker-subnet --region=USD{REGION} --format json | jq -r .selfLink`)",
"export WORKER_SERVICE_ACCOUNT=(`gcloud iam service-accounts list --filter \"email~^USD{INFRA_ID}-w@USD{PROJECT_NAME}.\" --format json | jq -r '.[0].email'`)",
"export WORKER_IGNITION=`cat <installation_directory>/worker.ign`",
"cat <<EOF >06_worker.yaml imports: - path: 06_worker.py resources: - name: 'worker-0' 1 type: 06_worker.py properties: infra_id: 'USD{INFRA_ID}' 2 zone: 'USD{ZONE_0}' 3 compute_subnet: 'USD{COMPUTE_SUBNET}' 4 image: 'USD{CLUSTER_IMAGE}' 5 machine_type: 'n1-standard-4' 6 root_volume_size: '128' service_account_email: 'USD{WORKER_SERVICE_ACCOUNT}' 7 ignition: 'USD{WORKER_IGNITION}' 8 - name: 'worker-1' type: 06_worker.py properties: infra_id: 'USD{INFRA_ID}' 9 zone: 'USD{ZONE_1}' 10 compute_subnet: 'USD{COMPUTE_SUBNET}' 11 image: 'USD{CLUSTER_IMAGE}' 12 machine_type: 'n1-standard-4' 13 root_volume_size: '128' service_account_email: 'USD{WORKER_SERVICE_ACCOUNT}' 14 ignition: 'USD{WORKER_IGNITION}' 15 EOF",
"gcloud deployment-manager deployments create USD{INFRA_ID}-worker --config 06_worker.yaml",
"def GenerateConfig(context): resources = [{ 'name': context.properties['infra_id'] + '-' + context.env['name'], 'type': 'compute.v1.instance', 'properties': { 'disks': [{ 'autoDelete': True, 'boot': True, 'initializeParams': { 'diskSizeGb': context.properties['root_volume_size'], 'sourceImage': context.properties['image'] } }], 'machineType': 'zones/' + context.properties['zone'] + '/machineTypes/' + context.properties['machine_type'], 'metadata': { 'items': [{ 'key': 'user-data', 'value': context.properties['ignition'] }] }, 'networkInterfaces': [{ 'subnetwork': context.properties['compute_subnet'] }], 'serviceAccounts': [{ 'email': context.properties['service_account_email'], 'scopes': ['https://www.googleapis.com/auth/cloud-platform'] }], 'tags': { 'items': [ context.properties['infra_id'] + '-worker', ] }, 'zone': context.properties['zone'] } }] return {'resources': resources}",
"export KUBECONFIG=<installation_directory>/auth/kubeconfig 1",
"oc whoami",
"system:admin",
"oc patch OperatorHub cluster --type json -p '[{\"op\": \"add\", \"path\": \"/spec/disableAllDefaultSources\", \"value\": true}]'",
"oc get nodes",
"NAME STATUS ROLES AGE VERSION master-0 Ready master 63m v1.29.4 master-1 Ready master 63m v1.29.4 master-2 Ready master 64m v1.29.4",
"oc get csr",
"NAME AGE REQUESTOR CONDITION csr-8b2br 15m system:serviceaccount:openshift-machine-config-operator:node-bootstrapper Pending csr-8vnps 15m system:serviceaccount:openshift-machine-config-operator:node-bootstrapper Pending",
"oc adm certificate approve <csr_name> 1",
"oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{\"\\n\"}}{{end}}{{end}}' | xargs --no-run-if-empty oc adm certificate approve",
"oc get csr",
"NAME AGE REQUESTOR CONDITION csr-bfd72 5m26s system:node:ip-10-0-50-126.us-east-2.compute.internal Pending csr-c57lv 5m26s system:node:ip-10-0-95-157.us-east-2.compute.internal Pending",
"oc adm certificate approve <csr_name> 1",
"oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{\"\\n\"}}{{end}}{{end}}' | xargs oc adm certificate approve",
"oc get nodes",
"NAME STATUS ROLES AGE VERSION master-0 Ready master 73m v1.29.4 master-1 Ready master 73m v1.29.4 master-2 Ready master 74m v1.29.4 worker-0 Ready worker 11m v1.29.4 worker-1 Ready worker 11m v1.29.4",
"oc -n openshift-ingress get service router-default",
"NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE router-default LoadBalancer 172.30.18.154 35.233.157.184 80:32288/TCP,443:31215/TCP 98",
"export ROUTER_IP=`oc -n openshift-ingress get service router-default --no-headers | awk '{print USD4}'`",
"if [ -f transaction.yaml ]; then rm transaction.yaml; fi gcloud dns record-sets transaction start --zone USD{INFRA_ID}-private-zone gcloud dns record-sets transaction add USD{ROUTER_IP} --name \\*.apps.USD{CLUSTER_NAME}.USD{BASE_DOMAIN}. --ttl 300 --type A --zone USD{INFRA_ID}-private-zone gcloud dns record-sets transaction execute --zone USD{INFRA_ID}-private-zone",
"if [ -f transaction.yaml ]; then rm transaction.yaml; fi gcloud dns record-sets transaction start --zone USD{BASE_DOMAIN_ZONE_NAME} gcloud dns record-sets transaction add USD{ROUTER_IP} --name \\*.apps.USD{CLUSTER_NAME}.USD{BASE_DOMAIN}. --ttl 300 --type A --zone USD{BASE_DOMAIN_ZONE_NAME} gcloud dns record-sets transaction execute --zone USD{BASE_DOMAIN_ZONE_NAME}",
"oc get --all-namespaces -o jsonpath='{range .items[*]}{range .status.ingress[*]}{.host}{\"\\n\"}{end}{end}' routes",
"oauth-openshift.apps.your.cluster.domain.example.com console-openshift-console.apps.your.cluster.domain.example.com downloads-openshift-console.apps.your.cluster.domain.example.com alertmanager-main-openshift-monitoring.apps.your.cluster.domain.example.com prometheus-k8s-openshift-monitoring.apps.your.cluster.domain.example.com",
"./openshift-install --dir <installation_directory> wait-for install-complete 1",
"INFO Waiting up to 30m0s for the cluster to initialize",
"oc get clusterversion",
"NAME VERSION AVAILABLE PROGRESSING SINCE STATUS version False True 24m Working towards 4.5.4: 99% complete",
"oc get clusteroperators",
"NAME VERSION AVAILABLE PROGRESSING DEGRADED SINCE authentication 4.5.4 True False False 7m56s cloud-credential 4.5.4 True False False 31m cluster-autoscaler 4.5.4 True False False 16m console 4.5.4 True False False 10m csi-snapshot-controller 4.5.4 True False False 16m dns 4.5.4 True False False 22m etcd 4.5.4 False False False 25s image-registry 4.5.4 True False False 16m ingress 4.5.4 True False False 16m insights 4.5.4 True False False 17m kube-apiserver 4.5.4 True False False 19m kube-controller-manager 4.5.4 True False False 20m kube-scheduler 4.5.4 True False False 20m kube-storage-version-migrator 4.5.4 True False False 16m machine-api 4.5.4 True False False 22m machine-config 4.5.4 True False False 22m marketplace 4.5.4 True False False 16m monitoring 4.5.4 True False False 10m network 4.5.4 True False False 23m node-tuning 4.5.4 True False False 23m openshift-apiserver 4.5.4 True False False 17m openshift-controller-manager 4.5.4 True False False 15m openshift-samples 4.5.4 True False False 16m operator-lifecycle-manager 4.5.4 True False False 22m operator-lifecycle-manager-catalog 4.5.4 True False False 22m operator-lifecycle-manager-packageserver 4.5.4 True False False 18m service-ca 4.5.4 True False False 23m service-catalog-apiserver 4.5.4 True False False 23m service-catalog-controller-manager 4.5.4 True False False 23m storage 4.5.4 True False False 17m",
"oc get pods --all-namespaces",
"NAMESPACE NAME READY STATUS RESTARTS AGE kube-system etcd-member-ip-10-0-3-111.us-east-2.compute.internal 1/1 Running 0 35m kube-system etcd-member-ip-10-0-3-239.us-east-2.compute.internal 1/1 Running 0 37m kube-system etcd-member-ip-10-0-3-24.us-east-2.compute.internal 1/1 Running 0 35m openshift-apiserver-operator openshift-apiserver-operator-6d6674f4f4-h7t2t 1/1 Running 1 37m openshift-apiserver apiserver-fm48r 1/1 Running 0 30m openshift-apiserver apiserver-fxkvv 1/1 Running 0 29m openshift-apiserver apiserver-q85nm 1/1 Running 0 29m openshift-service-ca-operator openshift-service-ca-operator-66ff6dc6cd-9r257 1/1 Running 0 37m openshift-service-ca apiservice-cabundle-injector-695b6bcbc-cl5hm 1/1 Running 0 35m openshift-service-ca configmap-cabundle-injector-8498544d7-25qn6 1/1 Running 0 35m openshift-service-ca service-serving-cert-signer-6445fc9c6-wqdqn 1/1 Running 0 35m openshift-service-catalog-apiserver-operator openshift-service-catalog-apiserver-operator-549f44668b-b5q2w 1/1 Running 0 32m openshift-service-catalog-controller-manager-operator openshift-service-catalog-controller-manager-operator-b78cr2lnm 1/1 Running 0 31m",
"apiVersion: v1 baseDomain: example.com compute: - name: worker platform: {} replicas: 0",
"apiVersion: config.openshift.io/v1 kind: Scheduler metadata: creationTimestamp: null name: cluster spec: mastersSchedulable: true policy: name: \"\" status: {}",
"apiVersion:",
"baseDomain:",
"metadata:",
"metadata: name:",
"platform:",
"pullSecret:",
"{ \"auths\":{ \"cloud.openshift.com\":{ \"auth\":\"b3Blb=\", \"email\":\"[email protected]\" }, \"quay.io\":{ \"auth\":\"b3Blb=\", \"email\":\"[email protected]\" } } }",
"networking:",
"networking: networkType:",
"networking: clusterNetwork:",
"networking: clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23",
"networking: clusterNetwork: cidr:",
"networking: clusterNetwork: hostPrefix:",
"networking: serviceNetwork:",
"networking: serviceNetwork: - 172.30.0.0/16",
"networking: machineNetwork:",
"networking: machineNetwork: - cidr: 10.0.0.0/16",
"networking: machineNetwork: cidr:",
"additionalTrustBundle:",
"capabilities:",
"capabilities: baselineCapabilitySet:",
"capabilities: additionalEnabledCapabilities:",
"cpuPartitioningMode:",
"compute:",
"compute: architecture:",
"compute: hyperthreading:",
"compute: name:",
"compute: platform:",
"compute: replicas:",
"featureSet:",
"controlPlane:",
"controlPlane: architecture:",
"controlPlane: hyperthreading:",
"controlPlane: name:",
"controlPlane: platform:",
"controlPlane: replicas:",
"credentialsMode:",
"fips:",
"imageContentSources:",
"imageContentSources: source:",
"imageContentSources: mirrors:",
"publish:",
"sshKey:",
"controlPlane: platform: gcp: osImage: project:",
"controlPlane: platform: gcp: osImage: name:",
"compute: platform: gcp: osImage: project:",
"compute: platform: gcp: osImage: name:",
"platform: gcp: network:",
"platform: gcp: networkProjectID:",
"platform: gcp: projectID:",
"platform: gcp: region:",
"platform: gcp: controlPlaneSubnet:",
"platform: gcp: computeSubnet:",
"platform: gcp: defaultMachinePlatform: zones:",
"platform: gcp: defaultMachinePlatform: osDisk: diskSizeGB:",
"platform: gcp: defaultMachinePlatform: osDisk: diskType:",
"platform: gcp: defaultMachinePlatform: osImage: project:",
"platform: gcp: defaultMachinePlatform: osImage: name:",
"platform: gcp: defaultMachinePlatform: tags:",
"platform: gcp: defaultMachinePlatform: type:",
"platform: gcp: defaultMachinePlatform: osDisk: encryptionKey: kmsKey: name:",
"platform: gcp: defaultMachinePlatform: osDisk: encryptionKey: kmsKey: keyRing:",
"platform: gcp: defaultMachinePlatform: osDisk: encryptionKey: kmsKey: location:",
"platform: gcp: defaultMachinePlatform: osDisk: encryptionKey: kmsKey: projectID:",
"platform: gcp: defaultMachinePlatform: osDisk: encryptionKey: kmsKeyServiceAccount:",
"platform: gcp: defaultMachinePlatform: secureBoot:",
"platform: gcp: defaultMachinePlatform: confidentialCompute:",
"platform: gcp: defaultMachinePlatform: onHostMaintenance:",
"controlPlane: platform: gcp: osDisk: encryptionKey: kmsKey: name:",
"controlPlane: platform: gcp: osDisk: encryptionKey: kmsKey: keyRing:",
"controlPlane: platform: gcp: osDisk: encryptionKey: kmsKey: location:",
"controlPlane: platform: gcp: osDisk: encryptionKey: kmsKey: projectID:",
"controlPlane: platform: gcp: osDisk: encryptionKey: kmsKeyServiceAccount:",
"controlPlane: platform: gcp: osDisk: diskSizeGB:",
"controlPlane: platform: gcp: osDisk: diskType:",
"controlPlane: platform: gcp: tags:",
"controlPlane: platform: gcp: type:",
"controlPlane: platform: gcp: zones:",
"controlPlane: platform: gcp: secureBoot:",
"controlPlane: platform: gcp: confidentialCompute:",
"controlPlane: platform: gcp: onHostMaintenance:",
"compute: platform: gcp: osDisk: encryptionKey: kmsKey: name:",
"compute: platform: gcp: osDisk: encryptionKey: kmsKey: keyRing:",
"compute: platform: gcp: osDisk: encryptionKey: kmsKey: location:",
"compute: platform: gcp: osDisk: encryptionKey: kmsKey: projectID:",
"compute: platform: gcp: osDisk: encryptionKey: kmsKeyServiceAccount:",
"compute: platform: gcp: osDisk: diskSizeGB:",
"compute: platform: gcp: osDisk: diskType:",
"compute: platform: gcp: tags:",
"compute: platform: gcp: type:",
"compute: platform: gcp: zones:",
"compute: platform: gcp: secureBoot:",
"compute: platform: gcp: confidentialCompute:",
"compute: platform: gcp: onHostMaintenance:",
"./openshift-install destroy cluster --dir <installation_directory> --log-level info 1 2",
"RELEASE_IMAGE=USD(./openshift-install version | awk '/release image/ {print USD3}')",
"oc adm release extract --from=USDRELEASE_IMAGE --credentials-requests --included \\ 1 --to=<path_to_directory_for_credentials_requests> 2",
"ccoctl gcp delete --name=<name> \\ 1 --project=<gcp_project_id> \\ 2 --credentials-requests-dir=<path_to_credentials_requests_directory> --force-delete-custom-roles 3"
]
| https://docs.redhat.com/en/documentation/openshift_container_platform/4.16/html-single/installing_on_gcp/index |
Chapter 3. Managing project networks | Chapter 3. Managing project networks Project networks help you to isolate network traffic for cloud computing. Steps to create a project network include planning and creating the network, and adding subnets and routers. 3.1. VLAN planning When you plan your Red Hat OpenStack Platform deployment, you start with a number of subnets, from which you allocate individual IP addresses. When you use multiple subnets you can segregate traffic between systems into VLANs. For example, it is ideal that your management or API traffic is not on the same network as systems that serve web traffic. Traffic between VLANs travels through a router where you can implement firewalls to govern traffic flow. You must plan your VLANs as part of your overall plan that includes traffic isolation, high availability, and IP address utilization for the various types of virtual networking resources in your deployment. Note The maximum number of VLANs in a single network, or in one OVS agent for a network node, is 4094. In situations where you require more than the maximum number of VLANs, you can create several provider networks (VXLAN networks) and several network nodes, one per network. Each node can contain up to 4094 private networks. 3.2. Types of network traffic You can allocate separate VLANs for the different types of network traffic that you want to host. For example, you can have separate VLANs for each of these types of networks. Only the External network must be routable to the external physical network. In this release, director provides DHCP services. Note You do not require all of the isolated VLANs in this section for every OpenStack deployment. For example, if your cloud users do not create ad hoc virtual networks on demand, then you may not require a project network. If you want each VM to connect directly to the same switch as any other physical system, connect your Compute nodes directly to a provider network and configure your instances to use that provider network directly. Provisioning network - This VLAN is dedicated to deploying new nodes using director over PXE boot. OpenStack Orchestration (heat) installs OpenStack onto the overcloud bare metal servers. These servers attach to the physical network to receive the platform installation image from the undercloud infrastructure. Internal API network - The OpenStack services use the Internal API network for communication, including API communication, RPC messages, and database communication. In addition, this network is used for operational messages between controller nodes. When planning your IP address allocation, note that each API service requires its own IP address. Specifically, you must plan IP addresses for each of the following services: vip-msg (ampq) vip-keystone-int vip-glance-int vip-cinder-int vip-nova-int vip-neutron-int vip-horizon-int vip-heat-int vip-ceilometer-int vip-swift-int vip-keystone-pub vip-glance-pub vip-cinder-pub vip-nova-pub vip-neutron-pub vip-horizon-pub vip-heat-pub vip-ceilometer-pub vip-swift-pub Note When using High Availability, Pacemaker moves VIP addresses between the physical nodes. Storage - Block Storage, NFS, iSCSI, and other storage services. Isolate this network to separate physical Ethernet links for performance reasons. Storage Management - OpenStack Object Storage (swift) uses this network to synchronise data objects between participating replica nodes. The proxy service acts as the intermediary interface between user requests and the underlying storage layer. 
The proxy receives incoming requests and locates the necessary replica to retrieve the requested data. Services that use a Ceph back end connect over the Storage Management network, since they do not interact with Ceph directly but rather use the front end service. Note that the RBD driver is an exception; this traffic connects directly to Ceph. Project networks - Neutron provides each project with their own networks using either VLAN segregation (where each project network is a network VLAN), or tunneling using VXLAN or GRE. Network traffic is isolated within each project network. Each project network has an IP subnet associated with it, and multiple project networks may use the same addresses. External - The External network hosts the public API endpoints and connections to the Dashboard (horizon). You can also use this network for SNAT. In a production deployment, it is common to use a separate network for floating IP addresses and NAT. Provider networks - Use provider networks to attach instances to existing network infrastructure. You can use provider networks to map directly to an existing physical network in the data center, using flat networking or VLAN tags. This allows an instance to share the same layer-2 network as a system external to the OpenStack Networking infrastructure. 3.3. IP address consumption The following systems consume IP addresses from your allocated range: Physical nodes - Each physical NIC requires one IP address. It is common practice to dedicate physical NICs to specific functions. For example, allocate management and NFS traffic to distinct physical NICs, sometimes with multiple NICs connecting across to different switches for redundancy purposes. Virtual IPs (VIPs) for High Availability - Plan to allocate between one and three VIPs for each network that controller nodes share. 3.4. Virtual networking The following virtual resources consume IP addresses in OpenStack Networking. These resources are considered local to the cloud infrastructure, and do not need to be reachable by systems in the external physical network: Project networks - Each project network requires a subnet that it can use to allocate IP addresses to instances. Virtual routers - Each router interface plugging into a subnet requires one IP address. If you want to use DHCP, each router interface requires two IP addresses. Instances - Each instance requires an address from the project subnet that hosts the instance. If you require ingress traffic, you must allocate a floating IP address to the instance from the designated external network. Management traffic - Includes OpenStack Services and API traffic. All services share a small number of VIPs. API, RPC and database services communicate on the internal API VIP. 3.5. Adding network routing To allow traffic to be routed to and from your new network, you must add its subnet as an interface to an existing virtual router: In the dashboard, select Project > Network > Routers . Select your virtual router name in the Routers list, and click Add Interface . In the Subnet list, select the name of your new subnet. You can optionally specify an IP address for the interface in this field. Click Add Interface . Instances on your network can now communicate with systems outside the subnet. 3.6. Example network plan This example shows a number of networks that accommodate multiple subnets, with each subnet being assigned a range of IP addresses: Table 3.1. 
Example subnet plan Subnet name Address range Number of addresses Subnet Mask Provisioning network 192.168.100.1 - 192.168.100.250 250 255.255.255.0 Internal API network 172.16.1.10 - 172.16.1.250 241 255.255.255.0 Storage 172.16.2.10 - 172.16.2.250 241 255.255.255.0 Storage Management 172.16.3.10 - 172.16.3.250 241 255.255.255.0 Tenant network (GRE/VXLAN) 172.16.4.10 - 172.16.4.250 241 255.255.255.0 External network (incl. floating IPs) 10.1.2.10 - 10.1.3.222 469 255.255.254.0 Provider network (infrastructure) 10.10.3.10 - 10.10.3.250 241 255.255.252.0 3.7. Creating a network Create a network so that your instances can communicate with each other and receive IP addresses using DHCP. For more information about external network connections, see Bridging the physical network . When creating networks, it is important to know that networks can host multiple subnets. This is useful if you intend to host distinctly different systems in the same network, and prefer a measure of isolation between them. For example, you can designate that only webserver traffic is present on one subnet, while database traffic traverses another. Subnets are isolated from each other, and any instance that wants to communicate with another subnet must have their traffic directed by a router. Consider placing systems that require a high volume of traffic amongst themselves in the same subnet, so that they do not require routing, and can avoid the subsequent latency and load. In the dashboard, select Project > Network > Networks . Click +Create Network and specify the following values: Field Description Network Name Descriptive name, based on the role that the network will perform. If you are integrating the network with an external VLAN, consider appending the VLAN ID number to the name. For example, webservers_122 , if you are hosting HTTP web servers in this subnet, and your VLAN tag is 122 . Or you might use internal-only if you intend to keep the network traffic private, and not integrate the network with an external network. Admin State Controls whether the network is immediately available. Use this field to create the network in a Down state, where it is logically present but inactive. This is useful if you do not intend to enter the network into production immediately. Create Subnet Determines whether to create a subnet. For example, you might not want to create a subnet if you intend to keep this network as a placeholder without network connectivity. Click the button, and specify the following values in the Subnet tab: Field Description Subnet Name Enter a descriptive name for the subnet. Network Address Enter the address in CIDR format, which contains the IP address range and subnet mask in one value. To determine the address, calculate the number of bits masked in the subnet mask and append that value to the IP address range. For example, the subnet mask 255.255.255.0 has 24 masked bits. To use this mask with the IPv4 address range 192.168.122.0, specify the address 192.168.122.0/24. IP Version Specifies the internet protocol version, where valid types are IPv4 or IPv6. The IP address range in the Network Address field must match whichever version you select. Gateway IP IP address of the router interface for your default gateway. This address is the hop for routing any traffic destined for an external location, and must be within the range that you specify in the Network Address field. For example, if your CIDR network address is 192.168.122.0/24, then your default gateway is likely to be 192.168.122.1. 
Disable Gateway Disables forwarding and isolates the subnet. Click Next to specify DHCP options: Enable DHCP - Enables DHCP services for this subnet. You can use DHCP to automate the distribution of IP settings to your instances. IPv6 Address - Configuration Modes. If you create an IPv6 network, you must specify how to allocate IPv6 addresses and additional information: No Options Specified - Select this option if you want to set IP addresses manually, or if you use a non OpenStack-aware method for address allocation. SLAAC (Stateless Address Autoconfiguration) - Instances generate IPv6 addresses based on Router Advertisement (RA) messages sent from the OpenStack Networking router. Use this configuration to create an OpenStack Networking subnet with ra_mode set to slaac and address_mode set to slaac. DHCPv6 stateful - Instances receive IPv6 addresses as well as additional options (for example, DNS) from the OpenStack Networking DHCPv6 service. Use this configuration to create a subnet with ra_mode set to dhcpv6-stateful and address_mode set to dhcpv6-stateful. DHCPv6 stateless - Instances generate IPv6 addresses based on Router Advertisement (RA) messages sent from the OpenStack Networking router. Additional options (for example, DNS) are allocated from the OpenStack Networking DHCPv6 service. Use this configuration to create a subnet with ra_mode set to dhcpv6-stateless and address_mode set to dhcpv6-stateless. Allocation Pools - Range of IP addresses that you want DHCP to assign. For example, the value 192.168.22.100,192.168.22.150 considers all IP addresses in that range as available for allocation. DNS Name Servers - IP addresses of the DNS servers available on the network. DHCP distributes these addresses to the instances for name resolution. Important For strategic services such as DNS, it is a best practice not to host them on your cloud. For example, if your cloud hosts DNS and your cloud becomes inoperable, DNS is unavailable and the cloud components cannot do lookups on each other. Host Routes - Static host routes. First, specify the destination network in CIDR format, followed by the next hop that you want to use for routing (for example, 192.168.23.0/24, 10.1.31.1). Provide this value if you need to distribute static routes to instances. Click Create . You can view the complete network in the Networks tab. You can also click Edit to change any options as needed. When you create instances, you can configure them to use this subnet, and they receive any specified DHCP options. 3.8. Working with subnets Use subnets to grant network connectivity to instances. Each instance is assigned to a subnet as part of the instance creation process, therefore it is important to consider proper placement of instances to best accommodate their connectivity requirements. You can create subnets only in pre-existing networks. Remember that project networks in OpenStack Networking can host multiple subnets. This is useful if you intend to host distinctly different systems in the same network, and prefer a measure of isolation between them. For example, you can designate that only webserver traffic is present on one subnet, while database traffic traverses another. Subnets are isolated from each other, and any instance that wants to communicate with another subnet must have their traffic directed by a router. Therefore, you can lessen network latency and load by grouping systems in the same subnet that require a high volume of traffic between each other. 3.9.
Creating a subnet To create a subnet, follow these steps: In the dashboard, select Project > Network > Networks , and click the name of your network in the Networks view. Click Create Subnet , and specify the following values: Field Description Subnet Name Descriptive subnet name. Network Address Address in CIDR format, which contains the IP address range and subnet mask in one value. To determine the CIDR address, calculate the number of bits masked in the subnet mask and append that value to the IP address range. For example, the subnet mask 255.255.255.0 has 24 masked bits. To use this mask with the IPv4 address range 192.168.122.0, specify the address 192.168.122.0/24. IP Version Internet protocol version, where valid types are IPv4 or IPv6. The IP address range in the Network Address field must match whichever protocol version you select. Gateway IP IP address of the router interface for your default gateway. This address is the next hop for routing any traffic destined for an external location, and must be within the range that you specify in the Network Address field. For example, if your CIDR network address is 192.168.122.0/24, then your default gateway is likely to be 192.168.122.1. Disable Gateway Disables forwarding and isolates the subnet. Click Next to specify DHCP options: Enable DHCP - Enables DHCP services for this subnet. You can use DHCP to automate the distribution of IP settings to your instances. IPv6 Address - Configuration Modes. If you create an IPv6 network, you must specify how to allocate IPv6 addresses and additional information: No Options Specified - Select this option if you want to set IP addresses manually, or if you use a non OpenStack-aware method for address allocation. SLAAC (Stateless Address Autoconfiguration) - Instances generate IPv6 addresses based on Router Advertisement (RA) messages sent from the OpenStack Networking router. Use this configuration to create an OpenStack Networking subnet with ra_mode set to slaac and address_mode set to slaac. DHCPv6 stateful - Instances receive IPv6 addresses as well as additional options (for example, DNS) from the OpenStack Networking DHCPv6 service. Use this configuration to create a subnet with ra_mode set to dhcpv6-stateful and address_mode set to dhcpv6-stateful. DHCPv6 stateless - Instances generate IPv6 addresses based on Router Advertisement (RA) messages sent from the OpenStack Networking router. Additional options (for example, DNS) are allocated from the OpenStack Networking DHCPv6 service. Use this configuration to create a subnet with ra_mode set to dhcpv6-stateless and address_mode set to dhcpv6-stateless. Allocation Pools - Range of IP addresses that you want DHCP to assign. For example, the value 192.168.22.100,192.168.22.150 considers all IP addresses in that range as available for allocation. DNS Name Servers - IP addresses of the DNS servers available on the network. DHCP distributes these addresses to the instances for name resolution. Host Routes - Static host routes. First, specify the destination network in CIDR format, followed by the next hop that you want to use for routing (for example, 192.168.23.0/24, 10.1.31.1). Provide this value if you need to distribute static routes to instances. Click Create . You can view the subnet in the Subnets list. You can also click Edit to change any options as needed. When you create instances, you can configure them to use this subnet, and they receive any specified DHCP options. 3.10.
Adding a router OpenStack Networking provides routing services using an SDN-based virtual router. Routers are a requirement for your instances to communicate with external subnets, including those in the physical network. Routers and subnets connect using interfaces, with each subnet requiring its own interface to the router. The default gateway of a router defines the hop for any traffic received by the router. Its network is typically configured to route traffic to the external physical network using a virtual bridge. To create a router, complete the following steps: In the dashboard, select Project > Network > Routers , and click Create Router . Enter a descriptive name for the new router, and click Create router . Click Set Gateway to the entry for the new router in the Routers list. In the External Network list, specify the network that you want to receive traffic destined for an external location. Click Set Gateway . After you add a router, you must configure any subnets you have created to send traffic using this router. You do this by creating interfaces between the subnet and the router. Important The default routes for subnets must not be overwritten. When the default route for a subnet is removed, the L3 agent automatically removes the corresponding route in the router namespace too, and network traffic cannot flow to and from the associated subnet. If the existing router namespace route has been removed, to fix this problem, perform these steps: Disassociate all floating IPs on the subnet. Detach the router from the subnet. Re-attach the router to the subnet. Re-attach all floating IPs. 3.11. Purging all resources and deleting a project Use the openstack project purge command to delete all resources that belong to a particular project as well as deleting the project, too. For example, to purge the resources of the test-project project, and then delete the project, run the following commands: 3.12. Deleting a router You can delete a router if it has no connected interfaces. To remove its interfaces and delete a router, complete the following steps: In the dashboard, select Project > Network > Routers , and click the name of the router that you want to delete. Select the interfaces of type Internal Interface , and click Delete Interfaces . From the Routers list, select the target router and click Delete Routers . 3.13. Deleting a subnet You can delete a subnet if it is no longer in use. However, if any instances are still configured to use the subnet, the deletion attempt fails and the dashboard displays an error message. Complete the following steps to delete a specific subnet in a network: In the dashboard, select Project > Network > Networks . Click the name of your network. Select the target subnet, and click Delete Subnets . 3.14. Deleting a network There are occasions where it becomes necessary to delete a network that was previously created, perhaps as housekeeping or as part of a decommissioning process. You must first remove or detach any interfaces where the network is still in use, before you can successfully delete a network. To delete a network in your project, together with any dependent interfaces, complete the following steps: In the dashboard, select Project > Network > Networks . Remove all router interfaces associated with the target network subnets. To remove an interface, find the ID number of the network that you want to delete by clicking on your target network in the Networks list, and looking at the ID field. 
All the subnets associated with the network share this value in the Network ID field. Navigate to Project > Network > Routers , click the name of your virtual router in the Routers list, and locate the interface attached to the subnet that you want to delete. You can distinguish this subnet from the other subnets by the IP address that served as the gateway IP. You can further validate the distinction by ensuring that the network ID of the interface matches the ID that you noted in the step. Click the Delete Interface button for the interface that you want to delete. Select Project > Network > Networks , and click the name of your network. Click the Delete Subnet button for the subnet that you want to delete. Note If you are still unable to remove the subnet at this point, ensure it is not already being used by any instances. Select Project > Network > Networks , and select the network you would like to delete. Click Delete Networks . | [
"openstack project list +----------------------------------+--------------+ | ID | Name | +----------------------------------+--------------+ | 02e501908c5b438dbc73536c10c9aac0 | test-project | | 519e6344f82e4c079c8e2eabb690023b | services | | 80bf5732752a41128e612fe615c886c6 | demo | | 98a2f53c20ce4d50a40dac4a38016c69 | admin | +----------------------------------+--------------+ openstack project purge --project 02e501908c5b438dbc73536c10c9aac0"
]
| https://docs.redhat.com/en/documentation/red_hat_openstack_platform/16.2/html/networking_guide/manage-proj-network_rhosp-network |
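The dashboard procedures in this chapter can also be performed with the openstack command-line client. The following is a minimal sketch of the create-network, create-subnet, and add-router workflow described in sections 3.7, 3.9, and 3.10; the resource names, the 192.168.122.0/24 range, the DNS server address, and the external network name public are illustrative values only, not part of the procedure above.

openstack network create webservers_122
openstack subnet create --network webservers_122 --subnet-range 192.168.122.0/24 --gateway 192.168.122.1 --dhcp --dns-nameserver 192.168.122.253 webservers_subnet
openstack router create webservers_router
openstack router set --external-gateway public webservers_router
openstack router add subnet webservers_router webservers_subnet

As with the dashboard steps, the gateway address must fall inside the subnet range, and the external network that you name as the router gateway must already exist.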
Part III. Integrating a Linux Domain with an Active Directory Domain: Synchronization | Part III. Integrating a Linux Domain with an Active Directory Domain: Synchronization This part provides instruction on how to synchronize Active Directory and Identity Management users, how to migrate existing environments from synchronization to trust, and how to use ID Views in Active Directory environments. | null | https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/windows_integration_guide/sync |
8.5.2. Adding a Cluster Service to the Cluster | 8.5.2. Adding a Cluster Service to the Cluster To add a cluster service to the cluster, follow the steps in this section. Note The examples provided in this section show a cluster service in which all of the resources are at the same level. For information on defining a service in which there is a dependency chain in a resource hierarchy, as well as the rules that govern the behavior of parent and child resources, see Appendix C, HA Resource Behavior . Open /etc/cluster/cluster.conf at any node in the cluster. Add a service section within the rm element for each service. For example: Configure the following parameters (attributes) in the service element: autostart - Specifies whether to autostart the service when the cluster starts. Use '1' to enable and '0' to disable; the default is enabled. domain - Specifies a failover domain (if required). exclusive - Specifies a policy wherein the service only runs on nodes that have no other services running on them. recovery - Specifies a recovery policy for the service. The options are to relocate, restart, disable, or restart-disable the service. Depending on the type of resources you want to use, populate the service with global or service-specific resources. For example, here is an Apache service that uses global resources: For example, here is an Apache service that uses service-specific resources: Example 8.10, " cluster.conf with Services Added: One Using Global Resources and One Using Service-Specific Resources " shows an example of a cluster.conf file with two services: example_apache - This service uses global resources web_fs , 127.143.131.100 , and example_server . example_apache2 - This service uses service-specific resources web_fs2 , 127.143.131.101 , and example_server2 . Update the config_version attribute by incrementing its value (for example, changing from config_version="2" to config_version="3" ). Save /etc/cluster/cluster.conf . (Optional) Validate the updated file against the cluster schema ( cluster.rng ) by running the ccs_config_validate command. For example: Run the cman_tool version -r command to propagate the configuration to the rest of the cluster nodes. Verify that the updated configuration file has been propagated (a verification sketch follows this section). Proceed to Section 8.9, "Verifying a Configuration" . Example 8.10. cluster.conf with Services Added: One Using Global Resources and One Using Service-Specific Resources | [
"<rm> <service autostart=\"1\" domain=\"\" exclusive=\"0\" name=\"\" recovery=\"restart\"> </service> </rm>",
"<rm> <resources> <fs name=\"web_fs\" device=\"/dev/sdd2\" mountpoint=\"/var/www\" fstype=\"ext3\"/> <ip address=\"127.143.131.100\" monitor_link=\"yes\" sleeptime=\"10\"/> <apache config_file=\"conf/httpd.conf\" name=\"example_server\" server_root=\"/etc/httpd\" shutdown_wait=\"0\"/> </resources> <service autostart=\"1\" domain=\"example_pri\" exclusive=\"0\" name=\"example_apache\" recovery=\"relocate\"> <fs ref=\"web_fs\"/> <ip ref=\"127.143.131.100\"/> <apache ref=\"example_server\"/> </service> </rm>",
"<rm> <service autostart=\"0\" domain=\"example_pri\" exclusive=\"0\" name=\"example_apache2\" recovery=\"relocate\"> <fs name=\"web_fs2\" device=\"/dev/sdd3\" mountpoint=\"/var/www2\" fstype=\"ext3\"/> <ip address=\"127.143.131.101\" monitor_link=\"yes\" sleeptime=\"10\"/> <apache config_file=\"conf/httpd.conf\" name=\"example_server2\" server_root=\"/etc/httpd\" shutdown_wait=\"0\"/> </service> </rm>",
"ccs_config_validate Configuration validates",
"<cluster name=\"mycluster\" config_version=\"3\"> <clusternodes> <clusternode name=\"node-01.example.com\" nodeid=\"1\"> <fence> <method name=\"APC\"> <device name=\"apc\" port=\"1\"/> </method> </fence> </clusternode> <clusternode name=\"node-02.example.com\" nodeid=\"2\"> <fence> <method name=\"APC\"> <device name=\"apc\" port=\"2\"/> </method> </fence> </clusternode> <clusternode name=\"node-03.example.com\" nodeid=\"3\"> <fence> <method name=\"APC\"> <device name=\"apc\" port=\"3\"/> </method> </fence> </clusternode> </clusternodes> <fencedevices> <fencedevice agent=\"fence_apc\" ipaddr=\"apc_ip_example\" login=\"login_example\" name=\"apc\" passwd=\"password_example\"/> </fencedevices> <rm> <failoverdomains> <failoverdomain name=\"example_pri\" nofailback=\"0\" ordered=\"1\" restricted=\"0\"> <failoverdomainnode name=\"node-01.example.com\" priority=\"1\"/> <failoverdomainnode name=\"node-02.example.com\" priority=\"2\"/> <failoverdomainnode name=\"node-03.example.com\" priority=\"3\"/> </failoverdomain> </failoverdomains> <resources> <fs name=\"web_fs\" device=\"/dev/sdd2\" mountpoint=\"/var/www\" fstype=\"ext3\"/> <ip address=\"127.143.131.100\" monitor_link=\"yes\" sleeptime=\"10\"/> <apache config_file=\"conf/httpd.conf\" name=\"example_server\" server_root=\"/etc/httpd\" shutdown_wait=\"0\"/> </resources> <service autostart=\"1\" domain=\"example_pri\" exclusive=\"0\" name=\"example_apache\" recovery=\"relocate\"> <fs ref=\"web_fs\"/> <ip ref=\"127.143.131.100\"/> <apache ref=\"example_server\"/> </service> <service autostart=\"0\" domain=\"example_pri\" exclusive=\"0\" name=\"example_apache2\" recovery=\"relocate\"> <fs name=\"web_fs2\" device=\"/dev/sdd3\" mountpoint=\"/var/www2\" fstype=\"ext3\"/> <ip address=\"127.143.131.101\" monitor_link=\"yes\" sleeptime=\"10\"/> <apache config_file=\"conf/httpd.conf\" name=\"example_server2\" server_root=\"/etc/httpd\" shutdown_wait=\"0\"/> </service> </rm> </cluster>"
]
| https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/cluster_administration/s2-config-add-service-cli-ca |
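As a sketch of the verification step referenced above, assuming the cman tools are installed on every cluster node, you can compare the configuration version that each node reports after propagation; the output shown is an example only.

cman_tool version
6.2.0 config 3

Run the command on each node and confirm that the config value matches the config_version that you set in /etc/cluster/cluster.conf.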
2.8. Display Configuration | 2.8. Display Configuration If you are installing the X Window System, you can configure it during the kickstart installation by checking the Configure the X Window System option on the Display Configuration window as shown in Figure 2.11, "X Configuration - General" . If this option is not chosen, the X configuration options are disabled and the skipx option is written to the kickstart file. 2.8.1. General The first step in configuring X is to choose the default color depth and resolution. Select them from their respective pulldown menus. Be sure to specify a color depth and resolution that are compatible with the video card and monitor for the system. Figure 2.11. X Configuration - General If you are installing both the GNOME and KDE desktops, you must choose which desktop should be the default. If only one desktop is to be installed, be sure to choose it. Once the system is installed, users can choose which desktop they want to be their default. Next, choose whether to start the X Window System when the system is booted. This option starts the system in runlevel 5 with the graphical login screen. After the system is installed, this can be changed by modifying the /etc/inittab configuration file. Also select whether to start the Setup Agent the first time the system is rebooted. It is disabled by default, but the setting can be changed to enabled or enabled in reconfiguration mode. Reconfiguration mode enables the language, mouse, keyboard, root password, security level, time zone, and networking configuration options in addition to the default ones. | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/4/html/system_administration_guide/rhkstool-display_configuration |
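The choices described above correspond to directives in the generated kickstart file. The following lines are an illustrative sketch only; the depth, resolution, and desktop values are examples, not recommendations.

xconfig --depth=24 --resolution=1024x768 --defaultdesktop=GNOME --startxonboot
firstboot --enable

If the Configure the X Window System option is left unchecked, the file instead contains the skipx directive and no xconfig line, and firstboot --reconfig selects reconfiguration mode for the Setup Agent.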
14.7.3. Displaying the Amount of Free Memory in a NUMA Cell | 14.7.3. Displaying the Amount of Free Memory in a NUMA Cell The virsh freecell command displays the available amount of memory on the machine within a specified NUMA cell. This command can provide one of three different displays of available memory on the machine depending on the options specified. If no options are used, the total free memory on the machine is displayed. Using the --all option, it displays the free memory in each cell and the total free memory on the machine. Using a numeric argument, or the --cellno option along with a cell number, it displays the free memory for the specified cell. | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/virtualization_administration_guide/sub-sect-numa_node_management-displaying_the_amount_of_free_memory_in_a_numa_cell |
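For example (the cell number below is arbitrary and the reported values depend on the host):

virsh freecell
virsh freecell --all
virsh freecell --cellno 1

The first form prints the total free memory on the machine, the second prints a per-cell breakdown plus the total, and the third prints the free memory of cell 1 only.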
Chapter 2. Backing up important files | Chapter 2. Backing up important files Backing up important configuration files, inventory files, and modified playbooks makes it easy to restore or redeploy your cluster. Red Hat recommends backing up your configuration after initial deployment, and after confirming the success of any major changes in your cluster. You can also take backups after a node has failed if necessary. Prerequisites Example playbooks and inventory files are stored in the /etc/ansible/roles/gluster.ansible/playbooks/hc-ansible-deployment directory. If you have manually created or modified inventory and playbook files and you are not storing them in this directory, ensure that you know the path to their location. Procedure Log in to a hyperconverged host as the root user. Change into the hc-ansible-deployment directory and back up the default archive_config_inventory.yml file. Edit the archive_config_inventory.yml file with details of the cluster you want to back up. hosts The backend FQDN of each host in the cluster that you want to back up. backup_dir The directory in which to store backup files. nbde_setup If you use Network-Bound Disk Encryption, set this to true . Otherwise, set to false . upgrade Set to false . For example: Run the archive_config.yml playbook using your updated inventory file with the backupfiles tag. This creates an archive in the /root directory specific to each host FQDN in the hosts section of the inventory, for example, /root/rhvh-node-host1-backend.example.com-backup.tar.gz . Transfer the backup archives to a different machine. | [
"cd /etc/ansible/roles/gluster.ansible/playbooks/hc-ansible-deployment cp archive_config_inventory.yml archive_config_inventory.yml.bk",
"all: hosts: host1-backend.example.com : host2-backend.example.com : host3-backend.example.com : vars: backup_dir: /rhhi-backup nbde_setup: true upgrade: false",
"ansible-playbook -i archive_config_inventory.yml archive_config.yml --tags=backupfiles",
"scp /root/rhvh-node-host1-backend.example.com-backup.tar.gz backup-host.example.com:/backups/"
]
| https://docs.redhat.com/en/documentation/red_hat_hyperconverged_infrastructure_for_virtualization/1.8/html/replacing_failed_hosts/backing-up-important-files |
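Before transferring the archives, it can be useful to confirm that each one is readable. This check is an illustrative addition rather than part of the documented procedure; the host name in the file name follows the example above.

tar -tzf /root/rhvh-node-host1-backend.example.com-backup.tar.gz > /dev/null && echo "archive OK"
ls -lh /root/rhvh-node-*-backup.tar.gz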
2.3.3. Preallocate, If Possible | 2.3.3. Preallocate, If Possible If files are preallocated, block allocations can be avoided altogether and the file system can run more efficiently. Newer versions of GFS2 include the fallocate (1) system call, which you can use to preallocate blocks of data. | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/global_file_system_2/s2-preallocate-gfs2 |
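As a sketch, assuming a GFS2 file system mounted at /mnt/gfs2 (both the mount point and the size below are illustrative), a file can be preallocated from the command line with the fallocate utility:

fallocate -l 10G /mnt/gfs2/data.img

Applications can achieve the same effect programmatically with the fallocate or posix_fallocate calls.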
3.5.3. Creating GPG Keys Using the Command Line | 3.5.3. Creating GPG Keys Using the Command Line Use the following shell command: This command generates a key pair that consists of a public and a private key. Other people use your public key to authenticate and/or decrypt your communications. Distribute your public key as widely as possible, especially to people who you know will want to receive authentic communications from you, such as a mailing list. A series of prompts directs you through the process. Press the Enter key to assign a default value if desired. The first prompt asks you to select what kind of key you prefer: In almost all cases, the default is the correct choice. An RSA/RSA key allows you not only to sign communications, but also to encrypt files. Choose the key size: Again, the default, 2048, is sufficient for almost all users and represents an extremely strong level of security. Choose when the key will expire. It is a good idea to choose an expiration date instead of using the default, which is none . If, for example, the email address on the key becomes invalid, an expiration date will remind others to stop using that public key. Entering a value of 1y , for example, makes the key valid for one year. (You may change this expiration date after the key is generated, if you change your mind.) Before the gpg2 application asks for signature information, the following prompt appears: Enter y to finish the process. Enter your name and email address for your GPG key. Remember this process is about authenticating you as a real individual. For this reason, include your real name. If you choose a bogus email address, it will be more difficult for others to find your public key. This makes authenticating your communications difficult. If you are using this GPG key for self-introduction on a mailing list, for example, enter the email address you use on that list. Use the comment field to include aliases or other information. (Some people use different keys for different purposes and identify each key with a comment, such as "Office" or "Open Source Projects.") At the confirmation prompt, enter the letter O to continue if all entries are correct, or use the other options to fix any problems. Finally, enter a passphrase for your secret key. The gpg2 program asks you to enter your passphrase twice to ensure you made no typing errors. Finally, gpg2 generates random data to make your key as unique as possible. Move your mouse, type random keys, or perform other tasks on the system during this step to speed up the process. Once this step is finished, your keys are complete and ready to use: The key fingerprint is a shorthand "signature" for your key. It allows you to confirm to others that they have received your actual public key without any tampering. You do not need to write this fingerprint down. To display the fingerprint at any time, use this command, substituting your email address: Your "GPG key ID" consists of 8 hex digits identifying the public key. In the example above, the GPG key ID is 1B2AFA1C . In most cases, if you are asked for the key ID, prepend 0x to the key ID, as in 0x6789ABCD . Warning If you forget your passphrase, the key cannot be used and any data encrypted using that key will be lost. | [
"~]USD gpg2 --gen-key",
"Please select what kind of key you want: (1) RSA and RSA (default) (2) DSA and Elgamal (3) DSA (sign only) (4) RSA (sign only) Your selection?",
"RSA keys may be between 1024 and 4096 bits long. What keysize do you want? (2048)",
"Please specify how long the key should be valid. 0 = key does not expire d = key expires in n days w = key expires in n weeks m = key expires in n months y = key expires in n years key is valid for? (0)",
"Is this correct (y/N)?",
"pub 1024D/1B2AFA1C 2005-03-31 John Q. Doe <[email protected]> Key fingerprint = 117C FE83 22EA B843 3E86 6486 4320 545E 1B2A FA1C sub 1024g/CEA4B22E 2005-03-31 [expires: 2006-03-31]",
"~]USD gpg2 --fingerprint [email protected]"
]
| https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/security_guide/sect-security_guide-encryption-gpg-creating_gpg_keys_using_the_command_line |
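After the key pair is generated, a common follow-up is exporting the public key so that it can be shared. The commands below are a sketch using the email address and key ID from the example above; the keyserver host name is only a placeholder.

```shell
# Export the public key in ASCII-armored form
gpg2 --armor --export [email protected] > jqdoe-pubkey.asc

# Optionally publish the public key to a keyserver (placeholder host)
gpg2 --keyserver hkp://keys.example.com --send-keys 1B2AFA1C

# List the keys currently in the keyring
gpg2 --list-keys
```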
35.4. Binding/Unbinding an iface to a Portal | 35.4. Binding/Unbinding an iface to a Portal Whenever iscsiadm is used to scan for interconnects, it will first check the iface.transport settings of each iface configuration in /var/lib/iscsi/ifaces . The iscsiadm utility will then bind discovered portals to any iface whose iface.transport is tcp . This behavior was implemented for compatibility reasons. To override this, use the -I iface_name to specify which portal to bind to an iface , as in: By default, the iscsiadm utility will not automatically bind any portals to iface configurations that use offloading. This is because such iface configurations will not have iface.transport set to tcp . As such, the iface configurations of Chelsio, Broadcom, and ServerEngines ports need to be manually bound to discovered portals. It is also possible to prevent a portal from binding to any existing iface . To do so, use default as the iface_name , as in: To remove the binding between a target and iface , use: To delete all bindings for a specific iface , use: To delete bindings for a specific portal (e.g. for Equalogic targets), use: Note If there are no iface configurations defined in /var/lib/iscsi/iface and the -I option is not used, iscsiadm will allow the network subsystem to decide which device a specific portal should use. [7] Refer to Chapter 36, Scanning iSCSI Targets with Multiple LUNs or Portals for information on proper_target_name . | [
"iscsiadm -m discovery -t st -p target_IP:port -I iface_name -P 1",
"iscsiadm -m discovery -t st -p IP:port -I default -P 1",
"iscsiadm -m node -targetname proper_target_name -I iface0 --op=delete [7]",
"iscsiadm -m node -I iface_name --op=delete",
"iscsiadm -m node -p IP:port -I iface_name --op=delete"
]
| https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/storage_administration_guide/iface-binding-unbinding-to-portal |
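For completeness, an iface configuration normally has to exist before it can be bound. The following sketch creates and inspects one; the iface name and MAC address are placeholders.

```shell
# Create a new iface configuration under /var/lib/iscsi/ifaces
iscsiadm -m iface -I iface0 --op=new

# Tie the iface to a specific NIC by hardware address (placeholder MAC)
iscsiadm -m iface -I iface0 --op=update -n iface.hwaddress -v 00:0F:1F:92:6B:BF

# Display the resulting iface settings
iscsiadm -m iface -I iface0
```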
5.159. libtiff | 5.159. libtiff 5.159.1. RHSA-2012:1590 - Moderate: libtiff security update Updated libtiff packages that fix multiple security issues are now available for Red Hat Enterprise Linux 5 and 6. The Red Hat Security Response Team has rated this update as having moderate security impact. Common Vulnerability Scoring System (CVSS) base score, which gives a detailed severity rating, is available for each vulnerability from the CVE link(s) associated with each description below. The libtiff packages contain a library of functions for manipulating Tagged Image File Format (TIFF) files. Security Fixes CVE-2012-4447 A heap-based buffer overflow flaw was found in the way libtiff processed certain TIFF images using the Pixar Log Format encoding. An attacker could create a specially-crafted TIFF file that, when opened, could cause an application using libtiff to crash or, possibly, execute arbitrary code with the privileges of the user running the application. CVE-2012-5581 A stack-based buffer overflow flaw was found in the way libtiff handled DOTRANGE tags. An attacker could use this flaw to create a specially-crafted TIFF file that, when opened, would cause an application linked against libtiff to crash or, possibly, execute arbitrary code. CVE-2012-3401 A heap-based buffer overflow flaw was found in the tiff2pdf tool. An attacker could use this flaw to create a specially-crafted TIFF file that would cause tiff2pdf to crash or, possibly, execute arbitrary code. CVE-2012-4564 A missing return value check flaw, leading to a heap-based buffer overflow, was found in the ppm2tiff tool. An attacker could use this flaw to create a specially-crafted PPM (Portable Pixel Map) file that would cause ppm2tiff to crash or, possibly, execute arbitrary code. The CVE-2012-5581, CVE-2012-3401, and CVE-2012-4564 issues were discovered by Huzaifa Sidhpurwala of the Red Hat Security Response Team. All libtiff users should upgrade to these updated packages, which contain backported patches to resolve these issues. All running applications linked against libtiff must be restarted for this update to take effect. 5.159.2. RHSA-2012:1054 - Important: libtiff security update Updated libtiff packages that fix multiple security issues are now available for Red Hat Enterprise Linux 5 and 6. The Red Hat Security Response Team has rated this update as having important security impact. Common Vulnerability Scoring System (CVSS) base score, which gives a detailed severity rating, is available for each vulnerability from the CVE link(s) associated with each description below. The libtiff packages contain a library of functions for manipulating Tagged Image File Format (TIFF) files. Security Fixes CVE-2012-2088 libtiff did not properly convert between signed and unsigned integer values, leading to a buffer overflow. An attacker could use this flaw to create a specially-crafted TIFF file that, when opened, would cause an application linked against libtiff to crash or, possibly, execute arbitrary code. CVE-2012-2113 Multiple integer overflow flaws, leading to heap-based buffer overflows, were found in the tiff2pdf tool. An attacker could use these flaws to create a specially-crafted TIFF file that would cause tiff2pdf to crash or, possibly, execute arbitrary code. All libtiff users should upgrade to these updated packages, which contain backported patches to resolve these issues. All running applications linked against libtiff must be restarted for this update to take effect. 
| null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/6.3_technical_notes/libtiff |
10.4. Restricting Domains for PAM services | 10.4. Restricting Domains for PAM services Important This feature requires SSSD to be running on the system. SSSD enables you to restrict which domains can be accessed by PAM services. SSSD evaluates authentication requests from PAM services based on the user the particular PAM service is running as. Whether the PAM service can access an SSSD domain depends on whether the PAM service user is able to access the domain. An example use case is an environment where external users are allowed to authenticate to an FTP server. The FTP server is running as a separate non-privileged user that should only be able to authenticate to a selected SSSD domain, separate from internal company accounts. With this feature, the administrator can allow the FTP user to only authenticate to selected domains specified in the FTP PAM configuration file. Note This functionality is similar to legacy PAM modules, such as pam_ldap , which were able to use a separate configuration file as a parameter for a PAM module. Options to Restrict Access to Domains The following options are available to restrict access to selected domains: pam_trusted_users in /etc/sssd/sssd.conf This option accepts a list of numerical UIDs or user names representing the PAM services that are to be trusted by SSSD. The default setting is all , which means all service users are trusted and can access any domain. pam_public_domains in /etc/sssd/sssd.conf This option accepts a list of public SSSD domains. Public domains are domains accessible even for untrusted PAM service users. The option also accepts the all and none values. The default value is none , which means no domains are public and untrusted service users therefore cannot access any domain. domains for PAM configuration files This option specifies a list of domains against which a PAM service can authenticate. If you use domains without specifying any domain, the PAM service will not be able to authenticate against any domain, for example: If domains is not used in the PAM configuration file, the PAM service is able to authenticate against all domains, on the condition that the service is running under a trusted user. The domains option in the /etc/sssd/sssd.conf SSSD configuration file also specifies a list of domains to which SSSD attempts to authenticate. Note that the domains option in a PAM configuration file cannot extend the list of domains in sssd.conf , it can only restrict the sssd.conf list of domains by specifying a shorter list. Therefore, if a domain is specified in the PAM file but not in sssd.conf , the PAM service will not be able to authenticate against the domain. The default settings pam_trusted_users = all and pam_public_domains = none specify that all PAM service users are trusted and can access any domain. The domains option for PAM configuration files can be used in this situation to restrict the domains that can be accessed. If you specify a domain using domains in the PAM configuration file while sssd.conf contains pam_public_domains , it might be required to specify the domain in pam_public_domains as well. If pam_public_domains is used but does not include the required domain, the PAM service will not be able to successfully authenticate against the domain if it is running under an untrusted user. Note Domain restrictions defined in a PAM configuration file only apply to authentication actions, not to user lookups. 
For more information about the pam_trusted_users and pam_public_domains options, see the sssd.conf (5) man page. For more information about the domains option used in PAM configuration files, see the pam_sss (8) man page. Example 10.2. Restricting Domains for a PAM Service To restrict the domains against which a PAM service can authenticate: Make sure SSSD is configured to access the required domain or domains. The domains against which SSSD can authenticate are defined in the domains option in the /etc/sssd/sssd.conf file. Specify the domain or domains to which a PAM service will be able to authenticate. To do this, set the domains option in the PAM configuration file. For example: The PAM service is now only allowed to authenticate against domain1 . | [
"auth required pam_sss.so domains=",
"[sssd] domains = domain1, domain2, domain3",
"auth sufficient pam_sss.so forward_pass domains=domain1 account [default=bad success=ok user_unknown=ignore] pam_sss.so password sufficient pam_sss.so use_authtok"
]
| https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/system-level_authentication_guide/restricting_domains |
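To tie the pieces together, the sketch below shows how the two sssd.conf options and a per-service domains= restriction might be combined for the FTP example in this section. The domain name, service user, and file excerpts are illustrative only.

```
# /etc/sssd/sssd.conf (excerpt): trust only root and the ftp service user,
# and expose no domain to untrusted service users
[pam]
pam_trusted_users = root, ftp
pam_public_domains = none

# /etc/pam.d/vsftpd (excerpt): the FTP service may authenticate
# only against the external domain
auth    sufficient    pam_sss.so forward_pass domains=extern.example.com
```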
Chapter 2. Set Up and Configure Infinispan Query | Chapter 2. Set Up and Configure Infinispan Query 2.1. Set Up Infinispan Query 2.1.1. Infinispan Query Dependencies in Library Mode To use the JBoss Data Grid Infinispan Query via Maven, add the following dependencies: Non-Maven users must install all infinispan-embedded-query.jar , infinispan-embedded.jar , jboss-transaction-api_1.1_spec-1.0.1.Final-redhat-2.jar files from the JBoss Data Grid distribution. Warning The Infinispan query API directly exposes the Hibernate Search and the Lucene APIs and cannot be embedded within the infinispan-embedded-query.jar file. Do not include other versions of Hibernate Search and Lucene in the same deployment as infinispan-embedded-query . This action will cause classpath conflicts and result in unexpected behavior. | [
"<dependency> <groupId>org.infinispan</groupId> <artifactId>infinispan-embedded-query</artifactId> <version>USD{infinispan.version}</version> </dependency> <dependency> <groupId>javax.transaction</groupId> <artifactId>transaction-api</artifactId> <version>1.1</version> </dependency>"
]
| https://docs.redhat.com/en/documentation/red_hat_data_grid/6.6/html/infinispan_query_guide/chap-Set_Up_and_Configure_Infinispan_Query |
Preface | Preface Making open source more inclusive Red Hat is committed to replacing problematic language in our code, documentation, and web properties. We are beginning with these four terms: master, slave, blacklist, and whitelist. Because of the enormity of this endeavor, these changes will be implemented gradually over several upcoming releases. For more details, see our CTO Chris Wright's message . | null | https://docs.redhat.com/en/documentation/red_hat_build_of_apache_camel_k/1.10.5/html/developing_and_managing_integrations_using_camel_k/pr01 |
Chapter 7. Adding file and object storage to an existing external OpenShift Data Foundation cluster | Chapter 7. Adding file and object storage to an existing external OpenShift Data Foundation cluster When OpenShift Data Foundation is configured in external mode, there are several ways to provide storage for persistent volume claims and object bucket claims. Persistent volume claims for block storage are provided directly from the external Red Hat Ceph Storage cluster. Persistent volume claims for file storage can be provided by adding a Metadata Server (MDS) to the external Red Hat Ceph Storage cluster. Object bucket claims for object storage can be provided either by using the Multicloud Object Gateway or by adding the Ceph Object Gateway to the external Red Hat Ceph Storage cluster. Use the following process to add file storage (using Metadata Servers) or object storage (using Ceph Object Gateway) or both to an external OpenShift Data Foundation cluster that was initially deployed to provide only block storage. Prerequisites OpenShift Data Foundation 4.13 is installed and running on the OpenShift Container Platform version 4.13 or above. Also, the OpenShift Data Foundation Cluster in external mode is in the Ready state. Your external Red Hat Ceph Storage cluster is configured with one or both of the following: a Ceph Object Gateway (RGW) endpoint that can be accessed by the OpenShift Container Platform cluster for object storage a Metadata Server (MDS) pool for file storage Ensure that you know the parameters used with the ceph-external-cluster-details-exporter.py script during external OpenShift Data Foundation cluster deployment. Procedure Download the OpenShift Data Foundation version of the ceph-external-cluster-details-exporter.py python script using the following command: Update permission caps on the external Red Hat Ceph Storage cluster by running ceph-external-cluster-details-exporter.py on any client node in the external Red Hat Ceph Storage cluster. You may need to ask your Red Hat Ceph Storage administrator to do this. --run-as-user The client name used during OpenShift Data Foundation cluster deployment. Use the default client name client.healthchecker if a different client name was not set. --rgw-pool-prefix The prefix used for the Ceph Object Gateway pool. This can be omitted if the default prefix is used. Generate and save configuration details from the external Red Hat Ceph Storage cluster. Generate configuration details by running ceph-external-cluster-details-exporter.py on any client node in the external Red Hat Ceph Storage cluster. --monitoring-endpoint Is optional. It accepts comma separated list of IP addresses of active and standby mgrs reachable from the OpenShift Container Platform cluster. If not provided, the value is automatically populated. --monitoring-endpoint-port Is optional. It is the port associated with the ceph-mgr Prometheus exporter specified by --monitoring-endpoint . If not provided, the value is automatically populated. --run-as-user The client name used during OpenShift Data Foundation cluster deployment. Use the default client name client.healthchecker if a different client name was not set. --rgw-endpoint Provide this parameter to provision object storage through Ceph Object Gateway for OpenShift Data Foundation. (optional parameter) --rgw-pool-prefix The prefix used for the Ceph Object Gateway pool. This can be omitted if the default prefix is used. 
User permissions are updated as shown: Note Ensure that all the parameters (including the optional arguments) except the Ceph Object Gateway details (if provided), are the same as what was used during the deployment of OpenShift Data Foundation in external mode. Save the output of the script in an external-cluster-config.json file. The following example output shows the generated configuration changes in bold text. Upload the generated JSON file. Log in to the OpenShift web console. Click Workloads Secrets . Set project to openshift-storage . Click on rook-ceph-external-cluster-details . Click Actions (...) Edit Secret Click Browse and upload the external-cluster-config.json file. Click Save . Verification steps To verify that the OpenShift Data Foundation cluster is healthy and data is resilient, navigate to Storage Data foundation Storage Systems tab and then click on the storage system name. On the Overview Block and File tab, check the Status card to confirm that the Storage Cluster has a green tick indicating it is healthy. If you added a Metadata Server for file storage: Click Workloads Pods and verify that csi-cephfsplugin-* pods are created new and are in the Running state. Click Storage Storage Classes and verify that the ocs-external-storagecluster-cephfs storage class is created. If you added the Ceph Object Gateway for object storage: Click Storage Storage Classes and verify that the ocs-external-storagecluster-ceph-rgw storage class is created. To verify that the OpenShift Data Foundation cluster is healthy and data is resilient, navigate to Storage Data foundation Storage Systems tab and then click on the storage system name. Click the Object tab and confirm Object Service and Data resiliency has a green tick indicating it is healthy. | [
"get csv USD(oc get csv -n openshift-storage | grep ocs-operator | awk '{print USD1}') -n openshift-storage -o jsonpath='{.metadata.annotations.external\\.features\\.ocs\\.openshift\\.io/export-script}' | base64 --decode > ceph-external-cluster-details-exporter.py",
"python3 ceph-external-cluster-details-exporter.py --upgrade --run-as-user= ocs-client-name --rgw-pool-prefix rgw-pool-prefix",
"python3 ceph-external-cluster-details-exporter.py --rbd-data-pool-name rbd-block-pool-name --monitoring-endpoint ceph-mgr-prometheus-exporter-endpoint --monitoring-endpoint-port ceph-mgr-prometheus-exporter-port --run-as-user ocs-client-name --rgw-endpoint rgw-endpoint --rgw-pool-prefix rgw-pool-prefix",
"caps: [mgr] allow command config caps: [mon] allow r, allow command quorum_status, allow command version caps: [osd] allow rwx pool=default.rgw.meta, allow r pool=.rgw.root, allow rw pool=default.rgw.control, allow rx pool=default.rgw.log, allow x pool=default.rgw.buckets.index",
"[{\"name\": \"rook-ceph-mon-endpoints\", \"kind\": \"ConfigMap\", \"data\": {\"data\": \"xxx.xxx.xxx.xxx:xxxx\", \"maxMonId\": \"0\", \"mapping\": \"{}\"}}, {\"name\": \"rook-ceph-mon\", \"kind\": \"Secret\", \"data\": {\"admin-secret\": \"admin-secret\", \"fsid\": \"<fs-id>\", \"mon-secret\": \"mon-secret\"}}, {\"name\": \"rook-ceph-operator-creds\", \"kind\": \"Secret\", \"data\": {\"userID\": \"<user-id>\", \"userKey\": \"<user-key>\"}}, {\"name\": \"rook-csi-rbd-node\", \"kind\": \"Secret\", \"data\": {\"userID\": \"csi-rbd-node\", \"userKey\": \"<user-key>\"}}, {\"name\": \"ceph-rbd\", \"kind\": \"StorageClass\", \"data\": {\"pool\": \"<pool>\"}}, {\"name\": \"monitoring-endpoint\", \"kind\": \"CephCluster\", \"data\": {\"MonitoringEndpoint\": \"xxx.xxx.xxx.xxx\", \"MonitoringPort\": \"xxxx\"}}, {\"name\": \"rook-ceph-dashboard-link\", \"kind\": \"Secret\", \"data\": {\"userID\": \"ceph-dashboard-link\", \"userKey\": \"<user-key>\"}}, {\"name\": \"rook-csi-rbd-provisioner\", \"kind\": \"Secret\", \"data\": {\"userID\": \"csi-rbd-provisioner\", \"userKey\": \"<user-key>\"}}, {\"name\": \"rook-csi-cephfs-provisioner\", \"kind\": \"Secret\", \"data\": {\"adminID\": \"csi-cephfs-provisioner\", \"adminKey\": \"<admin-key>\"}}, {\"name\": \"rook-csi-cephfs-node\", \"kind\": \"Secret\", \"data\": {\"adminID\": \"csi-cephfs-node\", \"adminKey\": \"<admin-key>\"}}, {\"name\": \"cephfs\", \"kind\": \"StorageClass\", \"data\": {\"fsName\": \"cephfs\", \"pool\": \"cephfs_data\"}}, {\"name\": \"ceph-rgw\", \"kind\": \"StorageClass\", \"data\": {\"endpoint\": \"xxx.xxx.xxx.xxx:xxxx\", \"poolPrefix\": \"default\"}}, {\"name\": \"rgw-admin-ops-user\", \"kind\": \"Secret\", \"data\": {\"accessKey\": \"<access-key>\", \"secretKey\": \"<secret-key>\"}} ]"
]
| https://docs.redhat.com/en/documentation/red_hat_openshift_data_foundation/4.13/html/managing_and_allocating_storage_resources/adding-file-and-object-storage-to-an-existing-external-ocs-cluster |
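The console verification steps can also be approximated from the command line. This is only a convenience sketch; the storage class names are the defaults mentioned above, and the pod label selector is an assumption that may differ between versions.

```shell
# Check that the expected storage classes were created
oc get storageclass ocs-external-storagecluster-cephfs ocs-external-storagecluster-ceph-rgw

# Confirm the CephFS CSI plugin pods are running (relevant when an MDS was added)
oc get pods -n openshift-storage -l app=csi-cephfsplugin
```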
Chapter 3. Configuring multisite storage replication | Chapter 3. Configuring multisite storage replication Mirroring or replication is enabled on a per CephBlockPool basis within peer managed clusters and can then be configured on a specific subset of images within the pool. The rbd-mirror daemon is responsible for replicating image updates from the local peer cluster to the same image in the remote cluster. These instructions detail how to create the mirroring relationship between two OpenShift Data Foundation managed clusters. 3.1. Enabling OMAP generator and volume replication on managed clusters Execute the following steps on the Primary managed cluster and the Secondary managed cluster to enable the OMAP and Volume-Replication CSI sidecar containers in the csi-rbdplugin-provisioner Pods. Procedure Run the following patch command to set the value to true for CSI_ENABLE_OMAP_GENERATOR in the rook-ceph-operator-config ConfigMap. Example output: Run the following patch command to set the value to true for CSI_ENABLE_VOLUME_REPLICATION in the rook-ceph-operator-config ConfigMap. Example output: Validate that the following two new CSI sidecar containers per csi-rbdplugin-provisioner pod are added. Example output: Note The new containers are repeated because there are two csi-rbdplugin-provisioner pods for redundancy. 3.2. Installing OpenShift Data Foundation Multicluster Orchestrator OpenShift Data Foundation Multicluster Orchestrator is a controller that is installed from OpenShift Container Platform's OperatorHub on the Hub cluster. This Multicluster Orchestrator controller, along with the MirrorPeer custom resource, creates a bootstrap token and exchanges this token between the managed clusters. Procedure Navigate to OperatorHub on the Hub cluster and use the keyword filter to search for ODF Multicluster Orchestrator . Click ODF Multicluster Orchestrator tile. Keep all default settings and click Install. The operator resources are installed in openshift-operators and available to all namespaces. Verify that the ODF Multicluster Orchestrator shows a green tick indicating successful installation. 3.3. Creating mirror peer on hub cluster Mirror Peer is a cluster-scoped resource to hold information about the managed clusters that will have a peer-to-peer relationship. Prerequisites Ensure that ODF Multicluster Orchestrator is installed on the Hub cluster . You must have only two clusters per Mirror Peer. Ensure that each cluster has uniquely identifiable cluster names such as ocp4perf1 and ocp4perf2 . Procedure Click ODF Multicluster Orchestrator to view the operator details. You can also click View Operator after the Multicluster Orchestrator is installed successfully. Click on Mirror Peer API Create instance and then select YAML view. Create Mirror Peer in YAML view. Copy the following YAML to filename mirror-peer.yaml after replacing <cluster1> and <cluster2> with the correct names of your managed clusters in the RHACM console. Note There is no need to specify a namespace to create this resource because MirrorPeer is a cluster-scoped resource. Copy the contents of your unique mirror-peer.yaml file into the YAML view. You must completely replace the original content. Click Create at the bottom of the YAML view screen. Verify that you can view Phase status as ExchangedSecret . Note In some deployments, the output for the validation can also be ExchangingSecret which is also an acceptable result. 3.4. 
Enabling Mirroring on Managed clusters To enable mirroring, you must enable the mirroring setting of the storage cluster for each managed cluster. This is a manual step using CLI and the oc patch command. Important You must run the oc patch storagecluster command on the Primary managed cluster and the Secondary managed cluster as well as the follow-on validation commands after the StorageCluster has mirroring enabled. Procedure Enable cluster level mirroring flag using storage cluster name. Example output: Validate that mirroring is enabled on the default Ceph block pool. Example output: Validate that the rbd-mirror pod is up and running. Example output: Validate the status of the daemon health. Example output: Note It could take up to 10 minutes for the daemon health and health fields to change from Warning to OK . If the status does not change to OK in approximately 10 minutes then use the RHACM console to verify that the Submariner add-on connection is still in a Healthy state. | [
"oc patch cm rook-ceph-operator-config -n openshift-storage --type json --patch '[{ \"op\": \"add\", \"path\": \"/data/CSI_ENABLE_OMAP_GENERATOR\", \"value\": \"true\" }]'",
"configmap/rook-ceph-operator-config patched",
"oc patch cm rook-ceph-operator-config -n openshift-storage --type json --patch '[{ \"op\": \"add\", \"path\": \"/data/CSI_ENABLE_VOLUME_REPLICATION\", \"value\": \"true\" }]'",
"configmap/rook-ceph-operator-config patched",
"for l in USD(oc get pods -n openshift-storage -l app=csi-rbdplugin-provisioner -o jsonpath={.items[*].spec.containers[*].name}) ; do echo USDl ; done | egrep \"csi-omap-generator|volume-replication\"",
"csi-omap-generator volume-replication csi-omap-generator volume-replication",
"apiVersion: multicluster.odf.openshift.io/v1alpha1 kind: MirrorPeer metadata: name: mirrorpeer-<cluster1>-<cluster2> spec: items: - clusterName: <cluster1> storageClusterRef: name: ocs-storagecluster namespace: openshift-storage - clusterName: <cluster2> storageClusterRef: name: ocs-storagecluster namespace: openshift-storage",
"oc patch storagecluster USD(oc get storagecluster -n openshift-storage -o=jsonpath='{.items[0].metadata.name}') -n openshift-storage --type json --patch '[{ \"op\": \"replace\", \"path\": \"/spec/mirroring\", \"value\": {\"enabled\": true} }]'",
"storagecluster.ocs.openshift.io/ocs-storagecluster patched",
"oc get cephblockpool -n openshift-storage -o=jsonpath='{.items[?(@.metadata.ownerReferences[*].kind==\"StorageCluster\")].spec.mirroring.enabled}{\"\\n\"}'",
"true",
"oc get pods -o name -l app=rook-ceph-rbd-mirror -n openshift-storage",
"pod/rook-ceph-rbd-mirror-a-6486c7d875-56v2v",
"oc get cephblockpool ocs-storagecluster-cephblockpool -n openshift-storage -o jsonpath='{.status.mirroringStatus.summary}{\"\\n\"}'",
"{\"daemon_health\":\"OK\",\"health\":\"OK\",\"image_health\":\"OK\",\"states\":{}}"
]
| https://docs.redhat.com/en/documentation/red_hat_openshift_data_foundation/4.9/html/configuring_openshift_data_foundation_for_regional-dr_with_advanced_cluster_management/configuring_multisite_storage_replication |
7.160. pinentry | 7.160. pinentry 7.160.1. RHBA-2015:0755 - pinentry bug fix update Updated pinentry packages that fix two bugs are now available for Red Hat Enterprise Linux 6. The pinentry packages contain a collection of simple personal identification number (PIN) or password entry dialogs, which utilize the Assuan protocol as described by the Project Aegypten. The pinentry packages also contain the command line version of the PIN entry dialog. Bug Fixes BZ# 662770 Due to an auto-detection problem, the pinentry wrapper in some cases attempted to launch the pinentry-gtk program even if it was not installed. The pinentry wrapper has been updated, and the problem no longer occurs. BZ# 704495 Due to lack of UTF-8 support, the output description text got scrambled when the "pinentry getpin" command was used. The same problem could occur when using the GNU Privacy Guard utility that called the "pinentry getpin" command on a key containing non-ASCII characters in its name. To fix this bug, proper UTF-8 translation has been performed, and the pinentry-curses binary file has been compiled against the ncursesw library, which contains wide character support. As a result, the output text is now correct. Users of pinentry are advised to upgrade to these updated packages, which fix these bugs. | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/6.7_technical_notes/package-pinentry |
Chapter 213. LDIF Component | Chapter 213. LDIF Component Available as of Camel version 2.20 The ldif component allows you to do updates on an LDAP server from a LDIF body content. This component uses a basic URL syntax to access the server. It uses the Apache DS LDAP library to process the LDIF. After processing the LDIF, the response body will be a list of statuses for success/failure of each entry. Note The Apache LDAP API is very sensitive to LDIF syntax errors. If in doubt, refer to the unit tests to see an example of each change type. Maven users will need to add the following dependency to their pom.xml for this component: <dependency> <groupId>org.apache.camel</groupId> <artifactId>camel-ldif</artifactId> <version>x.x.x</version> <!-- use the same version as your Camel core version --> </dependency> 213.1. URI format The ldapServerBean portion of the URI refers to a LdapConnection . This should be constructed from a factory at the point of use to avoid connection timeouts. The LDIF component only supports producer endpoints, which means that an ldif URI cannot appear in the from at the start of a route. For SSL configuration, refer to the camel-ldap component where there is an example of setting up a custom SocketFactory instance. You can append query options to the URI in the following format, ?option=value&option=value&... 213.2. Options The LDIF component has no options. The LDIF endpoint is configured using URI syntax: with the following path and query parameters: 213.2.1. Path Parameters (1 parameters): Name Description Default Type ldapConnectionName Required The name of the LdapConnection bean to pull from the registry. Note that this must be of scope prototype to avoid it being shared among threads or using a connection that has timed out. String 213.2.2. Query Parameters (1 parameters): Name Description Default Type synchronous (advanced) Sets whether synchronous processing should be strictly used, or Camel is allowed to use asynchronous processing (if supported). false boolean 213.3. Spring Boot Auto-Configuration The component supports 2 options, which are listed below. Name Description Default Type camel.component.ldif.enabled Whether to enable auto configuration of the ldif component. This is enabled by default. Boolean camel.component.ldif.resolve-property-placeholders Whether the component should resolve property placeholders on itself when starting. Only properties which are of String type can use property placeholders. true Boolean 213.4. Body types: The body can be a URL to an LDIF file or an inline LDIF file. To signify the difference in body types, an inline LDIF must start with: If not, the component will try to parse the body as a URL. 213.5. Result The result is returned in the Out body as a ArrayList<java.lang.String> object. This contains either "success" or an Exception message for each LDIF entry. 213.6. LdapConnection The URI, ldif:ldapConnectionName , references a bean with the ID, ldapConnectionName . The ldapConnection can be configured using a LdapConnectionConfig bean. Note that the scope must have a scope of prototype to avoid the connection being shared or picking up a stale connection. 
The LdapConnection bean may be defined as follows in Spring XML: <bean id="ldapConnectionOptions" class="org.apache.directory.ldap.client.api.LdapConnectionConfig"> <property name="ldapHost" value="USD{ldap.host}"/> <property name="ldapPort" value="USD{ldap.port}"/> <property name="name" value="USD{ldap.username}"/> <property name="credentials" value="USD{ldap.password}"/> <property name="useSsl" value="false"/> <property name="useTls" value="false"/> </bean> <bean id="ldapConnectionFactory" class="org.apache.directory.ldap.client.api.DefaultLdapConnectionFactory"> <constructor-arg index="0" ref="ldapConnectionOptions"/> </bean> <bean id="ldapConnection" factory-bean="ldapConnectionFactory" factory-method="newLdapConnection" scope="prototype"/> or in a OSGi blueprint.xml: 213.7. Samples Following on from the Spring configuration above, the code sample below sends an LDAP request to filter search a group for a member. The Common Name is then extracted from the response. ProducerTemplate<Exchange> template = exchange.getContext().createProducerTemplate(); List<?> results = (Collection<?>) template.sendBody("ldap:ldapConnection, "LDiff goes here"); if (results.size() > 0) { // Check for no errors for (String result : results) { if ("success".equalTo(result)) { // LDIF entry success } else { // LDIF entry failure } } } 213.8. LevelDB Available as of Camel 2.10 Leveldb is a very lightweight and embedable key value database. It allows together with Camel to provide persistent support for various Camel features such as Aggregator. Current features it provides: LevelDBAggregationRepository 213.8.1. Using LevelDBAggregationRepository LevelDBAggregationRepository is an AggregationRepository which on the fly persists the aggregated messages. This ensures that you will not loose messages, as the default aggregator will use an in memory only AggregationRepository . It has the following options: Option Type Description repositoryName String A mandatory repository name. Allows you to use a shared LevelDBFile for multiple repositories. persistentFileName String Filename for the persistent storage. If no file exists on startup a new file is created. levelDBFile LevelDBFile Use an existing configured org.apache.camel.component.leveldb.LevelDBFile instance. sync boolean Camel 2.12: Whether or not the LevelDBFile should sync on write or not. Default is false. By sync on write ensures that its always waiting for all writes to be spooled to disk and thus will not loose updates. See LevelDB docs for more details about async vs sync writes. returnOldExchange boolean Whether the get operation should return the old existing Exchange if any existed. By default this option is false to optimize as we do not need the old exchange when aggregating. useRecovery boolean Whether or not recovery is enabled. This option is by default true . When enabled the Camel Aggregator automatic recover failed aggregated exchange and have them resubmitted. recoveryInterval long If recovery is enabled then a background task is run every x'th time to scan for failed exchanges to recover and resubmit. By default this interval is 5000 millis. maximumRedeliveries int Allows you to limit the maximum number of redelivery attempts for a recovered exchange. If enabled then the Exchange will be moved to the dead letter channel if all redelivery attempts failed. By default this option is disabled. If this option is used then the deadLetterUri option must also be provided. 
deadLetterUri String An endpoint uri for a Dead Letter Channel where exhausted recovered Exchanges will be moved. If this option is used then the maximumRedeliveries option must also be provided. The repositoryName option must be provided. Then either the persistentFileName or the levelDBFile must be provided. 213.8.2. What is preserved when persisting LevelDBAggregationRepository will only preserve any Serializable compatible message body data types. Message headers must be primitive / string / numbers / etc. If a data type is not such a type its dropped and a WARN is logged. And it only persists the Message body and the Message headers. The Exchange properties are not persisted. 213.8.3. Recovery The LevelDBAggregationRepository will by default recover any failed Exchange. It does this by having a background tasks that scans for failed Exchanges in the persistent store. You can use the checkInterval option to set how often this task runs. The recovery works as transactional which ensures that Camel will try to recover and redeliver the failed Exchange. Any Exchange which was found to be recovered will be restored from the persistent store and resubmitted and send out again. The following headers is set when an Exchange is being recovered/redelivered: Header Type Description Exchange.REDELIVERED Boolean Is set to true to indicate the Exchange is being redelivered. Exchange.REDELIVERY_COUNTER Integer The redelivery attempt, starting from 1. Only when an Exchange has been successfully processed it will be marked as complete which happens when the confirm method is invoked on the AggregationRepository . This means if the same Exchange fails again it will be kept retried until it success. You can use option maximumRedeliveries to limit the maximum number of redelivery attempts for a given recovered Exchange. You must also set the deadLetterUri option so Camel knows where to send the Exchange when the maximumRedeliveries was hit. You can see some examples in the unit tests of camel-leveldb, for example this test . 213.8.3.1. Using LevelDBAggregationRepository in Java DSL In this example we want to persist aggregated messages in the target/data/leveldb.dat file. 213.8.3.2. Using LevelDBAggregationRepository in Spring XML The same example but using Spring XML instead: 213.8.4. Dependencies To use LevelDB in your camel routes you need to add the a dependency on camel-leveldb . If you use maven you could just add the following to your pom.xml, substituting the version number for the latest & greatest release (see the download page for the latest versions). <dependency> <groupId>org.apache.camel</groupId> <artifactId>camel-leveldb</artifactId> <version>2.10.0</version> </dependency> 213.8.5. See Also Configuring Camel Component Endpoint Getting Started Aggregator HawtDB Components | [
"<dependency> <groupId>org.apache.camel</groupId> <artifactId>camel-ldif</artifactId> <version>x.x.x</version> <!-- use the same version as your Camel core version --> </dependency>",
"ldap:ldapServerBean[?options]",
"ldif:ldapConnectionName",
"version: 1",
"<bean id=\"ldapConnectionOptions\" class=\"org.apache.directory.ldap.client.api.LdapConnectionConfig\"> <property name=\"ldapHost\" value=\"USD{ldap.host}\"/> <property name=\"ldapPort\" value=\"USD{ldap.port}\"/> <property name=\"name\" value=\"USD{ldap.username}\"/> <property name=\"credentials\" value=\"USD{ldap.password}\"/> <property name=\"useSsl\" value=\"false\"/> <property name=\"useTls\" value=\"false\"/> </bean> <bean id=\"ldapConnectionFactory\" class=\"org.apache.directory.ldap.client.api.DefaultLdapConnectionFactory\"> <constructor-arg index=\"0\" ref=\"ldapConnectionOptions\"/> </bean> <bean id=\"ldapConnection\" factory-bean=\"ldapConnectionFactory\" factory-method=\"newLdapConnection\" scope=\"prototype\"/>",
"<bean id=\"ldapConnectionOptions\" class=\"org.apache.directory.ldap.client.api.LdapConnectionConfig\"> <property name=\"ldapHost\" value=\"USD{ldap.host}\"/> <property name=\"ldapPort\" value=\"USD{ldap.port}\"/> <property name=\"name\" value=\"USD{ldap.username}\"/> <property name=\"credentials\" value=\"USD{ldap.password}\"/> <property name=\"useSsl\" value=\"false\"/> <property name=\"useTls\" value=\"false\"/> </bean> <bean id=\"ldapConnectionFactory\" class=\"org.apache.directory.ldap.client.api.DefaultLdapConnectionFactory\"> <argument ref=\"ldapConnectionOptions\"/> </bean> <bean id=\"ldapConnection\" factory-ref=\"ldapConnectionFactory\" factory-method=\"newLdapConnection\" scope=\"prototype\"/>",
"ProducerTemplate<Exchange> template = exchange.getContext().createProducerTemplate(); List<?> results = (Collection<?>) template.sendBody(\"ldap:ldapConnection, \"LDiff goes here\"); if (results.size() > 0) { // Check for no errors for (String result : results) { if (\"success\".equalTo(result)) { // LDIF entry success } else { // LDIF entry failure } } }",
"<dependency> <groupId>org.apache.camel</groupId> <artifactId>camel-leveldb</artifactId> <version>2.10.0</version> </dependency>"
]
| https://docs.redhat.com/en/documentation/red_hat_fuse/7.13/html/apache_camel_component_reference/ldif-component |
Chapter 10. Cluster Quorum | Chapter 10. Cluster Quorum A Red Hat Enterprise Linux High Availability Add-On cluster uses the votequorum service, in conjunction with fencing, to avoid split brain situations. A number of votes is assigned to each system in the cluster, and cluster operations are allowed to proceed only when a majority of votes is present. The service must be loaded into all nodes or none; if it is loaded into a subset of cluster nodes, the results will be unpredictable. For information on the configuration and operation of the votequorum service, see the votequorum (5) man page. 10.1. Configuring Quorum Options There are some special features of quorum configuration that you can set when you create a cluster with the pcs cluster setup command. Table 10.1, "Quorum Options" summarizes these options. Table 10.1. Quorum Options Option Description --auto_tie_breaker When enabled, the cluster can suffer up to 50% of the nodes failing at the same time, in a deterministic fashion. The cluster partition, or the set of nodes that are still in contact with the nodeid configured in auto_tie_breaker_node (or lowest nodeid if not set), will remain quorate. The other nodes will be inquorate. The auto_tie_breaker option is principally used for clusters with an even number of nodes, as it allows the cluster to continue operation with an even split. For more complex failures, such as multiple, uneven splits, it is recommended that you use a quorum device, as described in Section 10.5, "Quorum Devices" . The auto_tie_breaker option is incompatible with quorum devices. --wait_for_all When enabled, the cluster will be quorate for the first time only after all nodes have been visible at least once at the same time. The wait_for_all option is primarily used for two-node clusters and for even-node clusters using the quorum device lms (last man standing) algorithm. The wait_for_all option is automatically enabled when a cluster has two nodes, does not use a quorum device, and auto_tie_breaker is disabled. You can override this by explicitly setting wait_for_all to 0. --last_man_standing When enabled, the cluster can dynamically recalculate expected_votes and quorum under specific circumstances. You must enable wait_for_all when you enable this option. The last_man_standing option is incompatible with quorum devices. --last_man_standing_window The time, in milliseconds, to wait before recalculating expected_votes and quorum after a cluster loses nodes. For further information about configuring and using these options, see the votequorum (5) man page. | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/high_availability_add-on_reference/ch-Quorum-HAAR |
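For illustration, the options in Table 10.1 are supplied when the cluster is first created with pcs cluster setup. The cluster and node names below are placeholders, and the exact flag syntax should be confirmed against the pcs version in use.

```shell
# Two-node cluster that waits for all nodes on first start and
# breaks even splits deterministically (sketch only)
pcs cluster setup --name my_cluster node1.example.com node2.example.com \
    --wait_for_all=1 --auto_tie_breaker=1

# Review the resulting quorum configuration and runtime state
pcs quorum config
corosync-quorumtool -s
```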
2.7. Setting Parameters | 2.7. Setting Parameters Set subsystem parameters by running the cgset command from a user account with permission to modify the relevant cgroup. For example, if cpuset is mounted to /cgroup/cpu_and_mem/ and the /cgroup/cpu_and_mem/group1 subdirectory exists, specify the CPUs to which this group has access with the following command: The syntax for cgset is: where: parameter is the parameter to be set, which corresponds to the file in the directory of the given cgroup. value is the value for the parameter. path_to_cgroup is the path to the cgroup relative to the root of the hierarchy . For example, to set the parameter of the root group (if the cpuacct subsystem is mounted to /cgroup/cpu_and_mem/ ), change to the /cgroup/cpu_and_mem/ directory, and run: Alternatively, because . is relative to the root group (that is, the root group itself) you could also run: Note, however, that / is the preferred syntax. Note Only a small number of parameters can be set for the root group (such as the cpuacct.usage parameter shown in the examples above). This is because a root group owns all of the existing resources, therefore, it would make no sense to limit all existing processes by defining certain parameters, for example the cpuset.cpu parameter. To set the parameter of group1 , which is a subgroup of the root group, run: A trailing slash after the name of the group (for example, cpuacct.usage=0 group1/ ) is optional. The values that you can set with cgset might depend on values set higher in a particular hierarchy. For example, if group1 is limited to use only CPU 0 on a system, you cannot set group1/subgroup1 to use CPUs 0 and 1, or to use only CPU 1. You can also use cgset to copy the parameters of one cgroup into another existing cgroup. For example: The syntax to copy parameters with cgset is: where: path_to_source_cgroup is the path to the cgroup whose parameters are to be copied, relative to the root group of the hierarchy. path_to_target_cgroup is the path to the destination cgroup, relative to the root group of the hierarchy. Ensure that any mandatory parameters for the various subsystems are set before you copy parameters from one group to another, or the command will fail. For more information on mandatory parameters, refer to Important . Alternative method To set parameters in a cgroup directly, insert values into the relevant subsystem pseudofile using the echo command. In the following example, the echo command inserts the value of 0-1 into the cpuset.cpus pseudofile of the cgroup group1 : With this value in place, the tasks in this cgroup are restricted to CPUs 0 and 1 on the system. | [
"cpu_and_mem]# cgset -r cpuset.cpus=0-1 group1",
"cgset -r parameter = value path_to_cgroup",
"cpu_and_mem]# cgset -r cpuacct.usage=0 /",
"cpu_and_mem]# cgset -r cpuacct.usage=0 .",
"cpu_and_mem]# cgset -r cpuacct.usage=0 group1",
"cpu_and_mem]# cgset --copy-from group1/ group2/",
"cgset --copy-from path_to_source_cgroup path_to_target_cgroup",
"~]# echo 0-1 > /cgroup/cpu_and_mem/group1/cpuset.cpus"
]
| https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/resource_management_guide/setting_parameters |
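To confirm a value after setting it, the companion cgget command reads parameters back from the hierarchy. A short sketch follows, reusing the group1 cgroup from the examples above.

```shell
# Read the parameter back with cgget
cgget -r cpuset.cpus group1

# Or read the pseudofile directly
cat /cgroup/cpu_and_mem/group1/cpuset.cpus
```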
Getting started with .NET on RHEL 9 | Getting started with .NET on RHEL 9 .NET 9.0 Installing and running .NET 9.0 on RHEL 9 Red Hat Customer Content Services | null | https://docs.redhat.com/en/documentation/net/9.0/html/getting_started_with_.net_on_rhel_9/index |
Chapter 2. Using the configuration API | Chapter 2. Using the configuration API The configuration tool exposes 4 endpoints that can be used to build, validate, bundle and deploy a configuration. The config-tool API is documented at https://github.com/quay/config-tool/blob/master/pkg/lib/editor/API.md . In this section, you will see how to use the API to retrieve the current configuration and how to validate any changes you make. 2.1. Retrieving the default configuration If you are running the configuration tool for the first time, and do not have an existing configuration, you can retrieve the default configuration. Start the container in config mode: Use the config endpoint of the configuration API to get the default: The value returned is the default configuration in JSON format: { "config.yaml": { "AUTHENTICATION_TYPE": "Database", "AVATAR_KIND": "local", "DB_CONNECTION_ARGS": { "autorollback": true, "threadlocals": true }, "DEFAULT_TAG_EXPIRATION": "2w", "EXTERNAL_TLS_TERMINATION": false, "FEATURE_ACTION_LOG_ROTATION": false, "FEATURE_ANONYMOUS_ACCESS": true, "FEATURE_APP_SPECIFIC_TOKENS": true, .... } } 2.2. Retrieving the current configuration If you have already configured and deployed the Quay registry, stop the container and restart it in configuration mode, loading the existing configuration as a volume: Use the config endpoint of the API to get the current configuration: The value returned is the current configuration in JSON format, including database and Redis configuration data: { "config.yaml": { .... "BROWSER_API_CALLS_XHR_ONLY": false, "BUILDLOGS_REDIS": { "host": "quay-server", "password": "strongpassword", "port": 6379 }, "DATABASE_SECRET_KEY": "4b1c5663-88c6-47ac-b4a8-bb594660f08b", "DB_CONNECTION_ARGS": { "autorollback": true, "threadlocals": true }, "DB_URI": "postgresql://quayuser:quaypass@quay-server:5432/quay", "DEFAULT_TAG_EXPIRATION": "2w", .... } } 2.3. Validating configuration using the API You can validate a configuration by posting it to the config/validate endpoint: The returned value is an array containing the errors found in the configuration. If the configuration is valid, an empty array [] is returned. 2.4. Determining the required fields You can determine the required fields by posting an empty configuration structure to the config/validate endpoint: The value returned is an array indicating which fields are required: [ { "FieldGroup": "Database", "Tags": [ "DB_URI" ], "Message": "DB_URI is required." }, { "FieldGroup": "DistributedStorage", "Tags": [ "DISTRIBUTED_STORAGE_CONFIG" ], "Message": "DISTRIBUTED_STORAGE_CONFIG must contain at least one storage location." }, { "FieldGroup": "HostSettings", "Tags": [ "SERVER_HOSTNAME" ], "Message": "SERVER_HOSTNAME is required" }, { "FieldGroup": "HostSettings", "Tags": [ "SERVER_HOSTNAME" ], "Message": "SERVER_HOSTNAME must be of type Hostname" }, { "FieldGroup": "Redis", "Tags": [ "BUILDLOGS_REDIS" ], "Message": "BUILDLOGS_REDIS is required" } ] | [
"sudo podman run --rm -it --name quay_config -p 8080:8080 registry.redhat.io/quay/quay-rhel8:v3.13.3 config secret",
"curl -X GET -u quayconfig:secret http://quay-server:8080/api/v1/config | jq",
"{ \"config.yaml\": { \"AUTHENTICATION_TYPE\": \"Database\", \"AVATAR_KIND\": \"local\", \"DB_CONNECTION_ARGS\": { \"autorollback\": true, \"threadlocals\": true }, \"DEFAULT_TAG_EXPIRATION\": \"2w\", \"EXTERNAL_TLS_TERMINATION\": false, \"FEATURE_ACTION_LOG_ROTATION\": false, \"FEATURE_ANONYMOUS_ACCESS\": true, \"FEATURE_APP_SPECIFIC_TOKENS\": true, . } }",
"sudo podman run --rm -it --name quay_config -p 8080:8080 -v USDQUAY/config:/conf/stack:Z registry.redhat.io/quay/quay-rhel8:v3.13.3 config secret",
"curl -X GET -u quayconfig:secret http://quay-server:8080/api/v1/config | jq",
"{ \"config.yaml\": { . \"BROWSER_API_CALLS_XHR_ONLY\": false, \"BUILDLOGS_REDIS\": { \"host\": \"quay-server\", \"password\": \"strongpassword\", \"port\": 6379 }, \"DATABASE_SECRET_KEY\": \"4b1c5663-88c6-47ac-b4a8-bb594660f08b\", \"DB_CONNECTION_ARGS\": { \"autorollback\": true, \"threadlocals\": true }, \"DB_URI\": \"postgresql://quayuser:quaypass@quay-server:5432/quay\", \"DEFAULT_TAG_EXPIRATION\": \"2w\", . } }",
"curl -u quayconfig:secret --header 'Content-Type: application/json' --request POST --data ' { \"config.yaml\": { . \"BROWSER_API_CALLS_XHR_ONLY\": false, \"BUILDLOGS_REDIS\": { \"host\": \"quay-server\", \"password\": \"strongpassword\", \"port\": 6379 }, \"DATABASE_SECRET_KEY\": \"4b1c5663-88c6-47ac-b4a8-bb594660f08b\", \"DB_CONNECTION_ARGS\": { \"autorollback\": true, \"threadlocals\": true }, \"DB_URI\": \"postgresql://quayuser:quaypass@quay-server:5432/quay\", \"DEFAULT_TAG_EXPIRATION\": \"2w\", . } } http://quay-server:8080/api/v1/config/validate | jq",
"curl -u quayconfig:secret --header 'Content-Type: application/json' --request POST --data ' { \"config.yaml\": { } } http://quay-server:8080/api/v1/config/validate | jq",
"[ { \"FieldGroup\": \"Database\", \"Tags\": [ \"DB_URI\" ], \"Message\": \"DB_URI is required.\" }, { \"FieldGroup\": \"DistributedStorage\", \"Tags\": [ \"DISTRIBUTED_STORAGE_CONFIG\" ], \"Message\": \"DISTRIBUTED_STORAGE_CONFIG must contain at least one storage location.\" }, { \"FieldGroup\": \"HostSettings\", \"Tags\": [ \"SERVER_HOSTNAME\" ], \"Message\": \"SERVER_HOSTNAME is required\" }, { \"FieldGroup\": \"HostSettings\", \"Tags\": [ \"SERVER_HOSTNAME\" ], \"Message\": \"SERVER_HOSTNAME must be of type Hostname\" }, { \"FieldGroup\": \"Redis\", \"Tags\": [ \"BUILDLOGS_REDIS\" ], \"Message\": \"BUILDLOGS_REDIS is required\" } ]"
]
| https://docs.redhat.com/en/documentation/red_hat_quay/3/html/manage_red_hat_quay/config-using-api |
8.66. hypervkvpd | 8.66. hypervkvpd 8.66.1. RHBA-2013:1539 - hypervkvpd bug fix update Updated hypervkvpd packages that fix several bugs are now available for Red Hat Enterprise Linux 6. The hypervkvpd packages contain hypervkvpd, the guest Hyper-V Key-Value Pair (KVP) daemon. Using VMbus, hypervkvpd passes basic information to the host. The information includes guest IP address, fully qualified domain name, operating system name, and operating system release number. An IP injection functionality enables the user to change the IP address of a guest from the host via the hypervkvpd daemon. Bug Fixes BZ# 920032 Previously, the hypervkvpd service registered to two netlink multicast groups, one of which was used by the cgred service. When hypervkvpd received a netlink message, it was interpreted blindly as its own. As a consequence, hypervkvpd terminated unexpectedly with a segmentation fault. After this update, hypervkvpd now registers only to its own netlink multicast group and verifies the type of the incoming netlink message. Using hypervkvpd when the cgred service is running no longer leads to a segmentation fault. BZ# 962565 Prior to this update, the hypervkvpd init script did not check if Hyper-V driver modules were loaded into the kernel. If hypervkvpd was installed, it started automatically on system boot, even if the system was not running as a guest machine on a Hyper-V hypervisor. Verification has been added to the hypervkvpd init script to determine whether Hyper-V driver modules are loaded into the kernel. As a result, if the modules are not loaded into the kernel, hypervkvpd now does not start, but displays a message that proper driver modules are not loaded. BZ# 977861 Previously, hypervkvpd was not built with sufficiently secure compiler options, which could, consequently, make the compiled code vulnerable. The hypervkvpd daemon has been built with full read-only relocation (RELRO) and position-independent executable (PIE) flags. As a result, the compiled code is more secure and better guarded against possible buffer overflows. BZ# 983851 When using the Get-VMNetworkAdapter command to query a virtual machine network adapter, each subnet string has to be separated by a semicolon. Due to a bug in the IPv6 subnet enumeration code, the IPv6 addresses were not listed. A patch has been applied, and the IPv6 subnet enumeration now works as expected. Users of hypervkvpd are advised to upgrade to these updated packages, which fix these bugs. After updating the hypervkvpd packages, rebooting all guest machines is recommended, otherwise the Microsoft Windows server with Hyper-V might not be able to get information from these guest machines. | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/6.5_technical_notes/hypervkvpd |
Authentication | Authentication builds for Red Hat OpenShift 1.1 Understanding authentication at runtime Red Hat OpenShift Documentation Team | null | https://docs.redhat.com/en/documentation/builds_for_red_hat_openshift/1.1/html/authentication/index |
Upgrading disconnected Red Hat Satellite to 6.15 | Upgrading disconnected Red Hat Satellite to 6.15 Red Hat Satellite 6.15 Upgrade Disconnected Satellite Server and Capsule Red Hat Satellite Documentation Team | [
"satellite-maintain service stop",
"satellite-maintain service start",
"satellite-installer --foreman-proxy-dns-managed=false --foreman-proxy-dhcp-managed=false",
"rm /etc/yum.repos.d/*",
"cp /media/sat6/Satellite/media.repo /etc/yum.repos.d/satellite.repo",
"vi /etc/yum.repos.d/satellite.repo",
"[Satellite-6.15]",
"baseurl=file:///media/sat6/Satellite",
"cp /media/sat6/Maintenance/media.repo /etc/yum.repos.d/satellite-maintenance.repo",
"vi /etc/yum.repos.d/satellite-maintenance.repo",
"[Satellite-Maintenance]",
"baseurl=file:///media/sat6/Maintenance/",
"dnf module enable satellite-maintenance:el8",
"satellite-maintain upgrade list-versions",
"satellite-maintain upgrade check --target-version 6.15 --whitelist=\"repositories-validate,repositories-setup\"",
"satellite-maintain upgrade run --target-version 6.15 --whitelist=\"repositories-validate,repositories-setup\"",
"dnf needs-restarting --reboothint",
"reboot",
"satellite-maintain service restart",
"foreman-rake foreman_openscap:bulk_upload:default",
"yum clean metadata",
"satellite-maintain self-upgrade",
"grep foreman_url /etc/foreman-proxy/settings.yml",
"satellite-maintain upgrade list-versions",
"satellite-maintain upgrade check --target-version 6.15",
"satellite-maintain upgrade run --target-version 6.15",
"dnf needs-restarting --reboothint",
"reboot",
"satellite-installer --foreman-db-host newpostgres.example.com --katello-candlepin-db-host newpostgres.example.com --foreman-proxy-content-pulpcore-postgresql-host newpostgres.example.com"
]
| https://docs.redhat.com/en/documentation/red_hat_satellite/6.15/html-single/upgrading_disconnected_red_hat_satellite_to_6.15/index |
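The [Satellite-6.15] and [Satellite-Maintenance] fragments in the command list above show only the section header and baseurl lines that the procedure asks you to edit. A complete local repository file would look roughly like the sketch below; the name and gpgcheck values are illustrative assumptions and are normally carried over from the copied media.repo file:

    # /etc/yum.repos.d/satellite.repo (sketch)
    [Satellite-6.15]
    name=Red Hat Satellite 6.15          # assumed label, usually provided by media.repo
    baseurl=file:///media/sat6/Satellite
    enabled=1
    gpgcheck=1                           # assumption; keep whatever the copied media.repo defines

The satellite-maintenance.repo file follows the same pattern, with the [Satellite-Maintenance] section and the file:///media/sat6/Maintenance/ baseurl.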
Preface | Preface Thank you for your interest in Red Hat Ansible Automation Platform. Ansible Automation Platform is a commercial offering that helps teams manage complex multi-tier deployments by adding control, knowledge, and delegation to Ansible-powered environments. This guide helps you to understand the installation, migration and upgrade requirements for deploying the Ansible Automation Platform Operator on OpenShift Container Platform. | null | https://docs.redhat.com/en/documentation/red_hat_ansible_automation_platform/2.5/html/installing_on_openshift_container_platform/pr01 |
Chapter 4. Installing the Migration Toolkit for Containers in a restricted network environment | Chapter 4. Installing the Migration Toolkit for Containers in a restricted network environment You can install the Migration Toolkit for Containers (MTC) on OpenShift Container Platform 4 in a restricted network environment by performing the following procedures: Create a mirrored Operator catalog . This process creates a mapping.txt file, which contains the mapping between the registry.redhat.io image and your mirror registry image. The mapping.txt file is required for installing the legacy Migration Toolkit for Containers Operator on an OpenShift Container Platform 4.2 to 4.5 source cluster. Install the Migration Toolkit for Containers Operator on the OpenShift Container Platform 4.16 target cluster by using Operator Lifecycle Manager. By default, the MTC web console and the Migration Controller pod run on the target cluster. You can configure the Migration Controller custom resource manifest to run the MTC web console and the Migration Controller pod on a remote cluster . Install the Migration Toolkit for Containers Operator on the source cluster: OpenShift Container Platform 4.6 or later: Install the Migration Toolkit for Containers Operator by using Operator Lifecycle Manager. OpenShift Container Platform 4.2 to 4.5: Install the legacy Migration Toolkit for Containers Operator from the command line interface. Configure object storage to use as a replication repository. Note To install MTC on OpenShift Container Platform 3, see Installing the legacy Migration Toolkit for Containers Operator on OpenShift Container Platform 3 . To uninstall MTC, see Uninstalling MTC and deleting resources . 4.1. Compatibility guidelines You must install the Migration Toolkit for Containers (MTC) Operator that is compatible with your OpenShift Container Platform version. Definitions legacy platform OpenShift Container Platform 4.5 and earlier. modern platform OpenShift Container Platform 4.6 and later. legacy operator The MTC Operator designed for legacy platforms. modern operator The MTC Operator designed for modern platforms. control cluster The cluster that runs the MTC controller and GUI. remote cluster A source or destination cluster for a migration that runs Velero. The Control Cluster communicates with Remote clusters via the Velero API to drive migrations. You must use the compatible MTC version for migrating your OpenShift Container Platform clusters. For the migration to succeed both your source cluster and the destination cluster must use the same version of MTC. MTC 1.7 supports migrations from OpenShift Container Platform 3.11 to 4.9. MTC 1.8 only supports migrations from OpenShift Container Platform 4.10 and later. Table 4.1. MTC compatibility: Migrating from a legacy or a modern platform Details OpenShift Container Platform 3.11 OpenShift Container Platform 4.0 to 4.5 OpenShift Container Platform 4.6 to 4.9 OpenShift Container Platform 4.10 or later Stable MTC version MTC v.1.7. z MTC v.1.7. z MTC v.1.7. z MTC v.1.8. z Installation Legacy MTC v.1.7. z operator: Install manually with the operator.yml file. [ IMPORTANT ] This cluster cannot be the control cluster. Install with OLM, release channel release-v1.7 Install with OLM, release channel release-v1.8 Edge cases exist in which network restrictions prevent modern clusters from connecting to other clusters involved in the migration. 
For example, when migrating from an OpenShift Container Platform 3.11 cluster on premises to a modern OpenShift Container Platform cluster in the cloud, where the modern cluster cannot connect to the OpenShift Container Platform 3.11 cluster. With MTC v.1.7. z , if one of the remote clusters is unable to communicate with the control cluster because of network restrictions, use the crane tunnel-api command. With the stable MTC release, although you should always designate the most modern cluster as the control cluster, in this specific case it is possible to designate the legacy cluster as the control cluster and push workloads to the remote cluster. 4.2. Installing the Migration Toolkit for Containers Operator on OpenShift Container Platform 4.16 You install the Migration Toolkit for Containers Operator on OpenShift Container Platform 4.16 by using the Operator Lifecycle Manager. Prerequisites You must be logged in as a user with cluster-admin privileges on all clusters. You must create an Operator catalog from a mirror image in a local registry. Procedure In the OpenShift Container Platform web console, click Operators OperatorHub . Use the Filter by keyword field to find the Migration Toolkit for Containers Operator . Select the Migration Toolkit for Containers Operator and click Install . Click Install . On the Installed Operators page, the Migration Toolkit for Containers Operator appears in the openshift-migration project with the status Succeeded . Click Migration Toolkit for Containers Operator . Under Provided APIs , locate the Migration Controller tile, and click Create Instance . Click Create . Click Workloads Pods to verify that the MTC pods are running. 4.3. Installing the legacy Migration Toolkit for Containers Operator on OpenShift Container Platform 4.2 to 4.5 You can install the legacy Migration Toolkit for Containers Operator manually on OpenShift Container Platform versions 4.2 to 4.5. Prerequisites You must be logged in as a user with cluster-admin privileges on all clusters. You must have access to registry.redhat.io . You must have podman installed. You must have a Linux workstation with network access in order to download files from registry.redhat.io . You must create a mirror image of the Operator catalog. You must install the Migration Toolkit for Containers Operator from the mirrored Operator catalog on OpenShift Container Platform 4.16. Procedure Log in to registry.redhat.io with your Red Hat Customer Portal credentials: USD podman login registry.redhat.io Download the operator.yml file by entering the following command: podman cp USD(podman create registry.redhat.io/rhmtc/openshift-migration-legacy-rhel8-operator:v1.7):/operator.yml ./ Download the controller.yml file by entering the following command: podman cp USD(podman create registry.redhat.io/rhmtc/openshift-migration-legacy-rhel8-operator:v1.7):/controller.yml ./ Obtain the Operator image mapping by running the following command: USD grep openshift-migration-legacy-rhel8-operator ./mapping.txt | grep rhmtc The mapping.txt file was created when you mirrored the Operator catalog. The output shows the mapping between the registry.redhat.io image and your mirror registry image. 
Example output registry.redhat.io/rhmtc/openshift-migration-legacy-rhel8-operator@sha256:468a6126f73b1ee12085ca53a312d1f96ef5a2ca03442bcb63724af5e2614e8a=<registry.apps.example.com>/rhmtc/openshift-migration-legacy-rhel8-operator Update the image values for the ansible and operator containers and the REGISTRY value in the operator.yml file: containers: - name: ansible image: <registry.apps.example.com>/rhmtc/openshift-migration-legacy-rhel8-operator@sha256:<468a6126f73b1ee12085ca53a312d1f96ef5a2ca03442bcb63724af5e2614e8a> 1 ... - name: operator image: <registry.apps.example.com>/rhmtc/openshift-migration-legacy-rhel8-operator@sha256:<468a6126f73b1ee12085ca53a312d1f96ef5a2ca03442bcb63724af5e2614e8a> 2 ... env: - name: REGISTRY value: <registry.apps.example.com> 3 1 2 Specify your mirror registry and the sha256 value of the Operator image. 3 Specify your mirror registry. Log in to your OpenShift Container Platform source cluster. Create the Migration Toolkit for Containers Operator object: USD oc create -f operator.yml Example output namespace/openshift-migration created rolebinding.rbac.authorization.k8s.io/system:deployers created serviceaccount/migration-operator created customresourcedefinition.apiextensions.k8s.io/migrationcontrollers.migration.openshift.io created role.rbac.authorization.k8s.io/migration-operator created rolebinding.rbac.authorization.k8s.io/migration-operator created clusterrolebinding.rbac.authorization.k8s.io/migration-operator created deployment.apps/migration-operator created Error from server (AlreadyExists): error when creating "./operator.yml": rolebindings.rbac.authorization.k8s.io "system:image-builders" already exists 1 Error from server (AlreadyExists): error when creating "./operator.yml": rolebindings.rbac.authorization.k8s.io "system:image-pullers" already exists 1 You can ignore Error from server (AlreadyExists) messages. They are caused by the Migration Toolkit for Containers Operator creating resources for earlier versions of OpenShift Container Platform 4 that are provided in later releases. Create the MigrationController object: USD oc create -f controller.yml Verify that the MTC pods are running: USD oc get pods -n openshift-migration 4.4. Proxy configuration For OpenShift Container Platform 4.1 and earlier versions, you must configure proxies in the MigrationController custom resource (CR) manifest after you install the Migration Toolkit for Containers Operator because these versions do not support a cluster-wide proxy object. For OpenShift Container Platform 4.2 to 4.16, the MTC inherits the cluster-wide proxy settings. You can change the proxy parameters if you want to override the cluster-wide proxy settings. 4.4.1. Direct volume migration Direct Volume Migration (DVM) was introduced in MTC 1.4.2. DVM supports only one proxy. The source cluster cannot access the route of the target cluster if the target cluster is also behind a proxy. If you want to perform a DVM from a source cluster behind a proxy, you must configure a TCP proxy that works at the transport layer and forwards the SSL connections transparently without decrypting and re-encrypting them with their own SSL certificates. A Stunnel proxy is an example of such a proxy. 4.4.1.1. 
TCP proxy setup for DVM You can set up a direct connection between the source and the target cluster through a TCP proxy and configure the stunnel_tcp_proxy variable in the MigrationController CR to use the proxy: apiVersion: migration.openshift.io/v1alpha1 kind: MigrationController metadata: name: migration-controller namespace: openshift-migration spec: [...] stunnel_tcp_proxy: http://username:password@ip:port Direct volume migration (DVM) supports only basic authentication for the proxy. Moreover, DVM works only from behind proxies that can tunnel a TCP connection transparently. HTTP/HTTPS proxies in man-in-the-middle mode do not work. The existing cluster-wide proxies might not support this behavior. As a result, the proxy settings for DVM are intentionally kept different from the usual proxy configuration in MTC. 4.4.1.2. Why use a TCP proxy instead of an HTTP/HTTPS proxy? You can enable DVM by running Rsync between the source and the target cluster over an OpenShift route. Traffic is encrypted using Stunnel, a TCP proxy. The Stunnel running on the source cluster initiates a TLS connection with the target Stunnel and transfers data over an encrypted channel. Cluster-wide HTTP/HTTPS proxies in OpenShift are usually configured in man-in-the-middle mode where they negotiate their own TLS session with the outside servers. However, this does not work with Stunnel. Stunnel requires that its TLS session be untouched by the proxy, essentially making the proxy a transparent tunnel which simply forwards the TCP connection as-is. Therefore, you must use a TCP proxy. 4.4.1.3. Known issue Migration fails with error Upgrade request required The migration Controller uses the SPDY protocol to execute commands within remote pods. If the remote cluster is behind a proxy or a firewall that does not support the SPDY protocol, the migration controller fails to execute remote commands. The migration fails with the error message Upgrade request required . Workaround: Use a proxy that supports the SPDY protocol. In addition to supporting the SPDY protocol, the proxy or firewall also must pass the Upgrade HTTP header to the API server. The client uses this header to open a websocket connection with the API server. If the Upgrade header is blocked by the proxy or firewall, the migration fails with the error message Upgrade request required . Workaround: Ensure that the proxy forwards the Upgrade header. 4.4.2. Tuning network policies for migrations OpenShift supports restricting traffic to or from pods using NetworkPolicy or EgressFirewalls based on the network plugin used by the cluster. If any of the source namespaces involved in a migration use such mechanisms to restrict network traffic to pods, the restrictions might inadvertently stop traffic to Rsync pods during migration. Rsync pods running on both the source and the target clusters must connect to each other over an OpenShift Route. Existing NetworkPolicy or EgressNetworkPolicy objects can be configured to automatically exempt Rsync pods from these traffic restrictions. 4.4.2.1. NetworkPolicy configuration 4.4.2.1.1. Egress traffic from Rsync pods You can use the unique labels of Rsync pods to allow egress traffic to pass from them if the NetworkPolicy configuration in the source or destination namespaces blocks this type of traffic. 
The following policy allows all egress traffic from Rsync pods in the namespace: apiVersion: networking.k8s.io/v1 kind: NetworkPolicy metadata: name: allow-all-egress-from-rsync-pods spec: podSelector: matchLabels: owner: directvolumemigration app: directvolumemigration-rsync-transfer egress: - {} policyTypes: - Egress 4.4.2.1.2. Ingress traffic to Rsync pods apiVersion: networking.k8s.io/v1 kind: NetworkPolicy metadata: name: allow-all-egress-from-rsync-pods spec: podSelector: matchLabels: owner: directvolumemigration app: directvolumemigration-rsync-transfer ingress: - {} policyTypes: - Ingress 4.4.2.2. EgressNetworkPolicy configuration The EgressNetworkPolicy object or Egress Firewalls are OpenShift constructs designed to block egress traffic leaving the cluster. Unlike the NetworkPolicy object, the Egress Firewall works at a project level because it applies to all pods in the namespace. Therefore, the unique labels of Rsync pods do not exempt only Rsync pods from the restrictions. However, you can add the CIDR ranges of the source or target cluster to the Allow rule of the policy so that a direct connection can be setup between two clusters. Based on which cluster the Egress Firewall is present in, you can add the CIDR range of the other cluster to allow egress traffic between the two: apiVersion: network.openshift.io/v1 kind: EgressNetworkPolicy metadata: name: test-egress-policy namespace: <namespace> spec: egress: - to: cidrSelector: <cidr_of_source_or_target_cluster> type: Deny 4.4.2.3. Choosing alternate endpoints for data transfer By default, DVM uses an OpenShift Container Platform route as an endpoint to transfer PV data to destination clusters. You can choose another type of supported endpoint, if cluster topologies allow. For each cluster, you can configure an endpoint by setting the rsync_endpoint_type variable on the appropriate destination cluster in your MigrationController CR: apiVersion: migration.openshift.io/v1alpha1 kind: MigrationController metadata: name: migration-controller namespace: openshift-migration spec: [...] rsync_endpoint_type: [NodePort|ClusterIP|Route] 4.4.2.4. Configuring supplemental groups for Rsync pods When your PVCs use a shared storage, you can configure the access to that storage by adding supplemental groups to Rsync pod definitions in order for the pods to allow access: Table 4.2. Supplementary groups for Rsync pods Variable Type Default Description src_supplemental_groups string Not set Comma-separated list of supplemental groups for source Rsync pods target_supplemental_groups string Not set Comma-separated list of supplemental groups for target Rsync pods Example usage The MigrationController CR can be updated to set values for these supplemental groups: spec: src_supplemental_groups: "1000,2000" target_supplemental_groups: "2000,3000" 4.4.3. Configuring proxies Prerequisites You must be logged in as a user with cluster-admin privileges on all clusters. Procedure Get the MigrationController CR manifest: USD oc get migrationcontroller <migration_controller> -n openshift-migration Update the proxy parameters: apiVersion: migration.openshift.io/v1alpha1 kind: MigrationController metadata: name: <migration_controller> namespace: openshift-migration ... spec: stunnel_tcp_proxy: http://<username>:<password>@<ip>:<port> 1 noProxy: example.com 2 1 Stunnel proxy URL for direct volume migration. 2 Comma-separated list of destination domain names, domains, IP addresses, or other network CIDRs to exclude proxying. Preface a domain with . 
to match subdomains only. For example, .y.com matches x.y.com , but not y.com . Use * to bypass proxy for all destinations. If you scale up workers that are not included in the network defined by the networking.machineNetwork[].cidr field from the installation configuration, you must add them to this list to prevent connection issues. This field is ignored if neither the httpProxy nor the httpsProxy field is set. Save the manifest as migration-controller.yaml . Apply the updated manifest: USD oc replace -f migration-controller.yaml -n openshift-migration For more information, see Configuring the cluster-wide proxy . 4.5. Running Rsync as either root or non-root Important This section applies only when you are working with the OpenShift API, not the web console. OpenShift environments have the PodSecurityAdmission controller enabled by default. This controller requires cluster administrators to enforce Pod Security Standards by means of namespace labels. All workloads in the cluster are expected to run one of the following Pod Security Standard levels: Privileged , Baseline or Restricted . Every cluster has its own default policy set. To guarantee successful data transfer in all environments, MTC 1.7.5 introduced changes in Rsync pods, including running Rsync pods as non-root user by default. This ensures that data transfer is possible even for workloads that do not necessarily require higher privileges. This change was made because it is best to run workloads with the lowest level of privileges possible. Manually overriding default non-root operation for data transfer Although running Rsync pods as non-root user works in most cases, data transfer might fail when you run workloads as root user on the source side. MTC provides two ways to manually override default non-root operation for data transfer: Configure all migrations to run an Rsync pod as root on the destination cluster for all migrations. Run an Rsync pod as root on the destination cluster per migration. In both cases, you must set the following labels on the source side of any namespaces that are running workloads with higher privileges prior to migration: enforce , audit , and warn. To learn more about Pod Security Admission and setting values for labels, see Controlling pod security admission synchronization . 4.5.1. Configuring the MigrationController CR as root or non-root for all migrations By default, Rsync runs as non-root. On the destination cluster, you can configure the MigrationController CR to run Rsync as root. Procedure Configure the MigrationController CR as follows: apiVersion: migration.openshift.io/v1alpha1 kind: MigrationController metadata: name: migration-controller namespace: openshift-migration spec: [...] migration_rsync_privileged: true This configuration will apply to all future migrations. 4.5.2. Configuring the MigMigration CR as root or non-root per migration On the destination cluster, you can configure the MigMigration CR to run Rsync as root or non-root, with the following non-root options: As a specific user ID (UID) As a specific group ID (GID) Procedure To run Rsync as root, configure the MigMigration CR according to this example: apiVersion: migration.openshift.io/v1alpha1 kind: MigMigration metadata: name: migration-controller namespace: openshift-migration spec: [...] 
runAsRoot: true To run Rsync as a specific User ID (UID) or as a specific Group ID (GID), configure the MigMigration CR according to this example: apiVersion: migration.openshift.io/v1alpha1 kind: MigMigration metadata: name: migration-controller namespace: openshift-migration spec: [...] runAsUser: 10010001 runAsGroup: 3 4.6. Configuring a replication repository The Multicloud Object Gateway is the only supported option for a restricted network environment. MTC supports the file system and snapshot data copy methods for migrating data from the source cluster to the target cluster. You can select a method that is suited for your environment and is supported by your storage provider. 4.6.1. Prerequisites All clusters must have uninterrupted network access to the replication repository. If you use a proxy server with an internally hosted replication repository, you must ensure that the proxy allows access to the replication repository. 4.6.2. Retrieving Multicloud Object Gateway credentials Note Although the MCG Operator is deprecated , the MCG plugin is still available for OpenShift Data Foundation. To download the plugin, browse to Download Red Hat OpenShift Data Foundation and download the appropriate MCG plugin for your operating system. Prerequisites You must deploy OpenShift Data Foundation by using the appropriate Red Hat OpenShift Data Foundation deployment guide . 4.6.3. Additional resources Procedure Disconnected environment in the Red Hat OpenShift Data Foundation documentation. MTC workflow About data copy methods Adding a replication repository to the MTC web console 4.7. Uninstalling MTC and deleting resources You can uninstall the Migration Toolkit for Containers (MTC) and delete its resources to clean up the cluster. Note Deleting the velero CRDs removes Velero from the cluster. Prerequisites You must be logged in as a user with cluster-admin privileges. Procedure Delete the MigrationController custom resource (CR) on all clusters: USD oc delete migrationcontroller <migration_controller> Uninstall the Migration Toolkit for Containers Operator on OpenShift Container Platform 4 by using the Operator Lifecycle Manager. Delete cluster-scoped resources on all clusters by running the following commands: migration custom resource definitions (CRDs): USD oc delete USD(oc get crds -o name | grep 'migration.openshift.io') velero CRDs: USD oc delete USD(oc get crds -o name | grep 'velero') migration cluster roles: USD oc delete USD(oc get clusterroles -o name | grep 'migration.openshift.io') migration-operator cluster role: USD oc delete clusterrole migration-operator velero cluster roles: USD oc delete USD(oc get clusterroles -o name | grep 'velero') migration cluster role bindings: USD oc delete USD(oc get clusterrolebindings -o name | grep 'migration.openshift.io') migration-operator cluster role bindings: USD oc delete clusterrolebindings migration-operator velero cluster role bindings: USD oc delete USD(oc get clusterrolebindings -o name | grep 'velero') | [
"podman login registry.redhat.io",
"cp USD(podman create registry.redhat.io/rhmtc/openshift-migration-legacy-rhel8-operator:v1.7):/operator.yml ./",
"cp USD(podman create registry.redhat.io/rhmtc/openshift-migration-legacy-rhel8-operator:v1.7):/controller.yml ./",
"grep openshift-migration-legacy-rhel8-operator ./mapping.txt | grep rhmtc",
"registry.redhat.io/rhmtc/openshift-migration-legacy-rhel8-operator@sha256:468a6126f73b1ee12085ca53a312d1f96ef5a2ca03442bcb63724af5e2614e8a=<registry.apps.example.com>/rhmtc/openshift-migration-legacy-rhel8-operator",
"containers: - name: ansible image: <registry.apps.example.com>/rhmtc/openshift-migration-legacy-rhel8-operator@sha256:<468a6126f73b1ee12085ca53a312d1f96ef5a2ca03442bcb63724af5e2614e8a> 1 - name: operator image: <registry.apps.example.com>/rhmtc/openshift-migration-legacy-rhel8-operator@sha256:<468a6126f73b1ee12085ca53a312d1f96ef5a2ca03442bcb63724af5e2614e8a> 2 env: - name: REGISTRY value: <registry.apps.example.com> 3",
"oc create -f operator.yml",
"namespace/openshift-migration created rolebinding.rbac.authorization.k8s.io/system:deployers created serviceaccount/migration-operator created customresourcedefinition.apiextensions.k8s.io/migrationcontrollers.migration.openshift.io created role.rbac.authorization.k8s.io/migration-operator created rolebinding.rbac.authorization.k8s.io/migration-operator created clusterrolebinding.rbac.authorization.k8s.io/migration-operator created deployment.apps/migration-operator created Error from server (AlreadyExists): error when creating \"./operator.yml\": rolebindings.rbac.authorization.k8s.io \"system:image-builders\" already exists 1 Error from server (AlreadyExists): error when creating \"./operator.yml\": rolebindings.rbac.authorization.k8s.io \"system:image-pullers\" already exists",
"oc create -f controller.yml",
"oc get pods -n openshift-migration",
"apiVersion: migration.openshift.io/v1alpha1 kind: MigrationController metadata: name: migration-controller namespace: openshift-migration spec: [...] stunnel_tcp_proxy: http://username:password@ip:port",
"apiVersion: networking.k8s.io/v1 kind: NetworkPolicy metadata: name: allow-all-egress-from-rsync-pods spec: podSelector: matchLabels: owner: directvolumemigration app: directvolumemigration-rsync-transfer egress: - {} policyTypes: - Egress",
"apiVersion: networking.k8s.io/v1 kind: NetworkPolicy metadata: name: allow-all-egress-from-rsync-pods spec: podSelector: matchLabels: owner: directvolumemigration app: directvolumemigration-rsync-transfer ingress: - {} policyTypes: - Ingress",
"apiVersion: network.openshift.io/v1 kind: EgressNetworkPolicy metadata: name: test-egress-policy namespace: <namespace> spec: egress: - to: cidrSelector: <cidr_of_source_or_target_cluster> type: Deny",
"apiVersion: migration.openshift.io/v1alpha1 kind: MigrationController metadata: name: migration-controller namespace: openshift-migration spec: [...] rsync_endpoint_type: [NodePort|ClusterIP|Route]",
"spec: src_supplemental_groups: \"1000,2000\" target_supplemental_groups: \"2000,3000\"",
"oc get migrationcontroller <migration_controller> -n openshift-migration",
"apiVersion: migration.openshift.io/v1alpha1 kind: MigrationController metadata: name: <migration_controller> namespace: openshift-migration spec: stunnel_tcp_proxy: http://<username>:<password>@<ip>:<port> 1 noProxy: example.com 2",
"oc replace -f migration-controller.yaml -n openshift-migration",
"apiVersion: migration.openshift.io/v1alpha1 kind: MigrationController metadata: name: migration-controller namespace: openshift-migration spec: [...] migration_rsync_privileged: true",
"apiVersion: migration.openshift.io/v1alpha1 kind: MigMigration metadata: name: migration-controller namespace: openshift-migration spec: [...] runAsRoot: true",
"apiVersion: migration.openshift.io/v1alpha1 kind: MigMigration metadata: name: migration-controller namespace: openshift-migration spec: [...] runAsUser: 10010001 runAsGroup: 3",
"oc delete migrationcontroller <migration_controller>",
"oc delete USD(oc get crds -o name | grep 'migration.openshift.io')",
"oc delete USD(oc get crds -o name | grep 'velero')",
"oc delete USD(oc get clusterroles -o name | grep 'migration.openshift.io')",
"oc delete clusterrole migration-operator",
"oc delete USD(oc get clusterroles -o name | grep 'velero')",
"oc delete USD(oc get clusterrolebindings -o name | grep 'migration.openshift.io')",
"oc delete clusterrolebindings migration-operator",
"oc delete USD(oc get clusterrolebindings -o name | grep 'velero')"
]
| https://docs.redhat.com/en/documentation/openshift_container_platform/4.16/html/migration_toolkit_for_containers/installing-mtc-restricted |
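Section 4.5 above states that, before migration, any source namespace running workloads with higher privileges must carry the enforce, audit, and warn Pod Security Admission labels, but no command is shown for it. A minimal sketch, assuming the privileged profile is the level those workloads require:

    oc label namespace <source_namespace> \
        pod-security.kubernetes.io/enforce=privileged \
        pod-security.kubernetes.io/audit=privileged \
        pod-security.kubernetes.io/warn=privileged

The label keys are the standard Pod Security Admission labels; the privileged value and the namespace name are placeholders to adapt to your workloads.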
probe::tty.open | probe::tty.open Name probe::tty.open - Called when a tty is opened Synopsis Values inode_state the inode state file_name the file name file_mode the file mode file_flags the file flags inode_number the inode number inode_flags the inode flags | [
"tty.open"
]
| https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/systemtap_tapset_reference/api-tty-open |
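A minimal SystemTap script that exercises this probe point, using only the values documented above (file_name, file_mode, and inode_number); run it as root with stap:

    #!/usr/bin/stap
    # Print one line for every tty opened on the system.
    probe tty.open {
      printf("tty opened: %s mode=%d inode=%d\n", file_name, file_mode, inode_number)
    }

For example, stap -v tty_open.stp reports tty opens until interrupted with Ctrl+C; the script file name is a placeholder.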
13.4. Disabling Command-Line Access | 13.4. Disabling Command-Line Access To disable command-line access for your desktop user, you need to make configuration changes in a number of different contexts. Bear in mind that the following steps do not remove the desktop user's permissions to access a command line, but rather remove the ways that the desktop user could access the command line. Set the org.gnome.desktop.lockdown.disable-command-line GSettings key, which prevents the user from accessing the terminal or specifying a command line to be executed (the Alt + F2 command prompt). Disable switching to virtual terminals (VTs) with the Ctrl + Alt + function key shortcuts by modifying the X server configuration. Remove Terminal and any other application that provides access to the terminal from the Applications menu and Activities Overview in GNOME Shell. This is done by removing menu items for those applications. For detailed information on how to remove a menu item, see Section 12.1.2, "Removing a Menu Item for All Users" . 13.4.1. Setting the org.gnome.desktop.lockdown.disable-command-line Key Create a local database for machine-wide settings in /etc/dconf/db/local.d/00-lockdown : Override the user's setting and prevent the user from changing it in /etc/dconf/db/local.d/locks/lockdown : Update the system databases: Users must log out and back in again before the system-wide settings take effect. 13.4.2. Disabling Virtual Terminal Switching Users can normally use the Ctrl + Alt + function key shortcuts (for example Ctrl + Alt + F2 ) to switch from the GNOME Desktop and X server to a virtual terminal. You can disable access to all virtual terminals by adding a DontVTSwitch option to the Serverflags section in an X configuration file in the /etc/X11/xorg.conf.d/ directory. Procedure 13.4. Disabling Access to Virtual Terminals Create or edit an X configuration file in the /etc/X11/xorg.conf.d/ directory: Note By convention, these host-specific configuration file names start with two digits and a hyphen and always have the .conf extension. Thus, a valid file name would be /etc/X11/xorg.conf.d/10-xorg.conf . Section "Serverflags" Option "DontVTSwitch" "yes" EndSection Restart the X server for your changes to take effect. | [
"Disable command-line access disable-command-line=true",
"Lock the disabled command-line access /org/gnome/desktop/lockdown",
"dconf update",
"Section \"Serverflags\" Option \"DontVTSwitch\" \"yes\" EndSection"
]
| https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/desktop_migration_and_administration_guide/disable-command-line-access |
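Putting the two lockdown files from Section 13.4.1 together, they look roughly like the sketch below. Note that a dconf keyfile requires a [group] header above the key; the org/gnome/desktop/lockdown header shown here is the standard path for this key and is added for completeness:

    # /etc/dconf/db/local.d/00-lockdown
    [org/gnome/desktop/lockdown]
    # Disable command-line access
    disable-command-line=true

    # /etc/dconf/db/local.d/locks/lockdown
    # Lock the disabled command-line access
    /org/gnome/desktop/lockdown

Run dconf update afterwards, as in the procedure above, so the changes are compiled into the system databases.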
Chapter 1. Managing the storage cluster size | Chapter 1. Managing the storage cluster size As a storage administrator, you can manage the storage cluster size by adding or removing Ceph Monitors or OSDs as storage capacity expands or shrinks. You can manage the storage cluster size by using Ceph Ansible, or by using the command-line interface (CLI). 1.1. Prerequisites A running Red Hat Ceph Storage cluster. Root-level access to the Ceph Monitor and OSD nodes. 1.2. Ceph Monitors Ceph Monitors are lightweight processes that maintain a master copy of the storage cluster map. All Ceph clients contact a Ceph monitor and retrieve the current copy of the storage cluster map, enabling clients to bind to a pool and read and write data. Ceph Monitors use a variation of the Paxos protocol to establish consensus about maps and other critical information across the storage cluster. Due to the nature of Paxos, Ceph requires a majority of monitors running to establish a quorum, thus establishing consensus. Important Red Hat requires at least three monitors on separate hosts to receive support for a production cluster. Red Hat recommends deploying an odd number of monitors. An odd number of Ceph Monitors has a higher resiliency to failures than an even number of monitors. For example, to maintain a quorum on a two-monitor deployment, Ceph cannot tolerate any failures; with three monitors, one failure; with four monitors, one failure; with five monitors, two failures. This is why an odd number is advisable. Summarizing, Ceph needs a majority of monitors to be running and to be able to communicate with each other, two out of three, three out of four, and so on. For an initial deployment of a multi-node Ceph storage cluster, Red Hat requires three monitors, increasing the number two at a time if a valid need for more than three monitors exists. Since Ceph Monitors are lightweight, it is possible to run them on the same host as OpenStack nodes. However, Red Hat recommends running monitors on separate hosts. Important Red Hat does NOT support collocating Ceph Monitors and OSDs on the same node. Doing this can have a negative impact to storage cluster performance. Red Hat ONLY supports collocating Ceph services in containerized environments. When you remove monitors from a storage cluster, consider that Ceph Monitors use the Paxos protocol to establish a consensus about the master storage cluster map. You must have a sufficient number of Ceph Monitors to establish a quorum. Additional Resources See the Red Hat Ceph Storage Supported configurations Knowledgebase article for all the supported Ceph configurations. 1.2.1. Preparing a new Ceph Monitor node Before you prepare a new Ceph Monitor node for deployment, review the Requirements for Installing Red Hat Ceph Storage chapter in the Red Hat Ceph Storage Installation Guide . Important Deploy each new Ceph Monitor on a separate node, and all Ceph Monitor nodes in the storage cluster must run on the same hardware. Prerequisites Network connectivity. Root-level access to the new node. Procedure Add the new node to the server rack. Connect the new node to the network. Install the latest version of Red Hat Enterprise Linux 7 or Red Hat Enterprise Linux 8. 
For Red Hat Enterprise Linux 7, install ntp and configure a reliable time source: For Red Hat Enterprise Linux 8, install chrony and configure a reliable time source: If using a firewall, open TCP port 6789: Additional Resources For more information about chrony , refer to Red Hat Enterprise Linux 8 Configuring basic system settings . 1.2.2. Adding a Ceph Monitor using Ansible Red Hat recommends adding two Ceph Monitors at a time to maintain an odd number of monitors. For example, if you have three Ceph Monitors in the storage cluster, Red Hat recommends that you expand the number of monitors to five. Prerequisites Root-level access to the new nodes. An Ansible administration node. A running Red Hat Ceph Storage cluster deployed by Ansible. Procedure Add the new Ceph Monitor nodes to the /etc/ansible/hosts Ansible inventory file, under a [mons] section: Example Verify that Ansible can contact the Ceph nodes: Change directory to the Ansible configuration directory: You can add a Ceph Monitor using either of the following steps: For both bare-metal and containers deployments, run the infrastructure-playbook : As the ansible user, run either the site playbook or the site-container playbook: Bare-metal deployments: Example Container deployments: Example After the Ansible playbook has finished running, the new Ceph Monitor nodes appear in the storage cluster. Update the configuration file: Bare-metal deployments: Example Container deployments: Example Additional Resources See the Configuring Ansible's inventory location section in the {storage_product} Installation Guide for more details on the Ansible inventory configuration. 1.2.3. Adding a Ceph Monitor using the command-line interface Red Hat recommends adding two Ceph Monitors at a time to maintain an odd number of monitors. For example, if you have three Ceph Monitors in the storage cluster, Red Hat recommends that you expand the number of monitors to five. Important Red Hat recommends running only one Ceph Monitor daemon per node. Prerequisites A running Red Hat Ceph Storage cluster. Root-level access to a running Ceph Monitor node and to the new monitor nodes. Procedure Add the Red Hat Ceph Storage 4 monitor repository. Red Hat Enterprise Linux 7 Red Hat Enterprise Linux 8 Install the ceph-mon package on the new Ceph Monitor nodes: Red Hat Enterprise Linux 7 Red Hat Enterprise Linux 8 Edit the mon_host settings list in the [mon] section of the Ceph configuration file on a running node in the storage cluster. Add the IP address of the new Ceph Monitor node to the mon_host settings list: Syntax Instead of adding the new Ceph Monitor's IP address to the [mon] section of the Ceph configuration file, you can create a specific section in the file for the new monitor nodes: Syntax Note The mon_host settings list is a list of DNS-resolvable host names or IP addresses, separated by "," or ";" or " ". This list ensures that the storage cluster identifies the new Monitor node during a start or restart. Important The mon_initial_members setting lists the initial quorum group of Ceph Monitor nodes. If one member of that group fails, another node in that group becomes the initial monitor node. To ensure high availability for production storage clusters, list at least three monitor nodes in the mon_initial_members and mon_host sections of the Ceph configuration file. This prevents the storage cluster from locking up if the initial monitor node fails. 
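As a concrete illustration of the mon_initial_members and mon_host guidance above, a minimal [global] snippet for a cluster that is adding a fourth monitor might look like the following; the host names and addresses are placeholders, not values from this procedure:

    [global]
    mon_initial_members = node01,node02,node03,node04        # node04 is the monitor being added
    mon_host = 192.168.0.11,192.168.0.12,192.168.0.13,192.168.0.14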
If the Monitor nodes you are adding are replacing monitors that were part of mon_initial_members and mon_host , add the new monitors to both sections as well. To make the monitors part of the initial quorum group, add the host name to the mon_initial_members parameter in the [global] section of the Ceph configuration file. Example Copy the updated Ceph configuration file to all Ceph nodes and Ceph clients: Syntax Example Create the monitor's data directory on the new monitor nodes: Syntax Example Create temporary directories on a running Ceph Monitor node and on the new monitor nodes, and keep the files needed for this procedure in those directories. The temporary directory on each node should be different from the node's default directory. It can be removed after all the steps are completed: Syntax Example Copy the admin key from a running Ceph Monitor node to the new Ceph Monitor nodes so that you can run ceph commands: Syntax Example From a running Ceph Monitor node, retrieve the monitor keyring: Syntax Example From a running Ceph Monitor node, retrieve the monitor map: Syntax Example Copy the collected Ceph Monitor data to the new Ceph Monitor nodes: Syntax Example Prepare the data directory for the new monitors from the data you collected earlier. Specify the path to the monitor map to retrieve quorum information from the monitors, along with their `fsid`s. Specify a path to the monitor keyring: Syntax Example For storage clusters with custom names, add the following line to the /etc/sysconfig/ceph file: Syntax Example Update the owner and group permissions on the new monitor nodes: Syntax Example Enable and start the ceph-mon process on the new monitor nodes: Syntax Example Additional Resources See the Enabling the Red Hat Ceph Storage Repositories section in the Red Hat Ceph Storage Installation Guide . 1.2.4. Configuring monitor election strategy The monitor election strategy identifies the net splits and handles failures. You can configure the election monitor strategy in three different modes: classic - This is the default mode in which the lowest ranked monitor is voted based on the elector module between the two sites. disallow - This mode lets you mark monitors as disallowed, in which case they will participate in the quorum and serve clients, but cannot be an elected leader. This lets you add monitors to a list of disallowed leaders. If a monitor is in the disallowed list, it will always defer to another monitor. connectivity - This mode is mainly used to resolve network discrepancies. It evaluates connection scores provided by each monitor for its peers and elects the most connected and reliable monitor to be the leader. This mode is designed to handle net splits, which may happen if your cluster is stretched across multiple data centers or otherwise susceptible. This mode incorporates connection score ratings and elects the monitor with the best score. Red Hat recommends you to stay in the classic mode unless you require features in the other modes. Before constructing the cluster, change the election_strategy to classic , disallow , or connectivity in the following command: Syntax 1.2.5. Removing a Ceph Monitor using Ansible To remove a Ceph Monitor with Ansible, use the shrink-mon.yml playbook. Prerequisites An Ansible administration node. A running Red Hat Ceph Storage cluster deployed by Ansible. Procedure Check if the monitor is ok-to-stop : Syntax Example Change to the /usr/share/ceph-ansible/ directory. 
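For the quorum check and playbook run just described, a compressed sketch looks like the following; node03 and admin are placeholders, and the infrastructure-playbooks path and mon_to_kill variable are the ones used by the ceph-ansible shrink playbooks, noted here as assumptions rather than the reproduced Syntax block:

    ceph mon ok-to-stop node03        # confirm the remaining monitors can keep quorum
    cd /usr/share/ceph-ansible/
    ansible-playbook infrastructure-playbooks/shrink-mon.yml -e mon_to_kill=node03 -u admin -i hosts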
For bare-metal and containers deployments, run the shrink-mon.yml Ansible playbook: Syntax Replace: NODE_NAME with the short host name of the Ceph Monitor node. You can remove only one Ceph Monitor each time the playbook runs. ANSIBLE_USER_NAME with the name of the Ansible user Example Manually remove the corresponding entry from the ansible inventory host /etc/ansible/hosts . Run the ceph-ansible playbook. Bare-metal deployments : Example Container deployments : Example Ensure that the Ceph Monitor has been successfully removed: Additional Resources For more information on installing Red Hat Ceph Storage, see the Red Hat Ceph Storage Installation Guide . See the Configuring Ansible's inventory location section in the {storage_product} Installation Guide for more details on the Ansible inventory configuration. 1.2.6. Removing a Ceph Monitor using the command-line interface Removing a Ceph Monitor involves removing a ceph-mon daemon from the storage cluster and updating the storage cluster map. Prerequisites A running Red Hat Ceph Storage cluster. Root-level access to the monitor node. Procedure Check if the monitor is ok-to-stop : Syntax Example Stop the Ceph Monitor service: Syntax Example Remove the Ceph Monitor from the storage cluster: Syntax Example Remove the Ceph Monitor entry from the Ceph configuration file. The default location for the configuration file is /etc/ceph/ceph.conf . Redistribute the Ceph configuration file to all remaining Ceph nodes in the storage cluster: Syntax Example For a Containers deployment, disable and remove the Ceph Monitor service: Disable the Ceph Monitor service: Syntax Example Remove the service from systemd : Reload the systemd manager configuration: Reset the state of the failed Ceph Monitor node: Optional: Archive the Ceph Monitor data: Syntax Example Optional: Delete the Ceph Monitor data: Syntax Example 1.2.7. Removing a Ceph Monitor from an unhealthy storage cluster You can remove a ceph-mon daemon from an unhealthy storage cluster that has placement groups persistently not in active + clean state. Prerequisites A running Red Hat Ceph Storage cluster. Root-level access to the Ceph Monitor node. At least one running Ceph Monitor node. Procedure Log into a surviving Ceph Monitor node: Syntax Example Stop the ceph-mon daemon and extract a copy of the monmap file. : Syntax Example Remove the non-surviving Ceph Monitor(s): Syntax Example Inject the surviving monitor map with the removed monitor(s) into the surviving Ceph Monitor: Syntax Example Start only the surviving monitors, and verify that the monitors form a quorum: Example Optional: Archive the removed Ceph Monitor's data directory in /var/lib/ceph/mon directory. 1.3. Ceph Managers The Ceph Manager daemon ( ceph-mgr ) runs alongside monitor daemons, to provide additional monitoring and interfaces to external monitoring and management systems. The ceph-mgr daemon is required for normal operations. By default, the Ceph Manager daemon requires no additional configuration, beyond ensuring it is running. If there is no mgr daemon running, you can see a health warning to that effect, and some of the other information in the output of ceph status is missing or stale until a Ceph Manager is started. 1.3.1. Adding a Ceph Manager using Ansible Usually, the Ansible automation utility installs the Ceph Manager daemon ( ceph-mgr ) when you deploy the Red Hat Ceph Storage cluster. If the Ceph Manager service or the daemon is down, you can redeploy the ceph-mgr daemon using Ansible. 
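Before redeploying a manager, it is worth confirming whether one is actually down; the services section of the cluster status output lists the active and standby manager daemons:

    ceph -s          # look for the mgr line under services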
You can remove the manager daemon and add a new or an existing node to deploy the Ceph Manager daemon. Red Hat recommends colocating the Ceph Manager and Ceph Monitor daemons on the same node. Prerequisites A running Red Hat Ceph Storage cluster deployed by Ansible. Root or sudo access to an Ansible administration node. New or existing nodes to deploy the Ceph Manager daemons. Procedure Log in to the Ansible administration node. Navigate to the /usr/share/ceph-ansible/ directory: Example As root or with sudo access, open and edit the /usr/share/ceph-ansible/hosts inventory file and add the Ceph Manager node under the [mgrs] section: Syntax Replace CEPH_MANAGER_NODE_NAME with the host name of the node where you want to install the Ceph Manager daemon. As the ansible user, run the Ansible playbook: Bare-metal deployments: Container deployments: After the Ansible playbook has finished running, the node running the new Ceph Manager daemon appears in the storage cluster. Verification On the monitor node, check the status of the storage cluster: Syntax Example Additional Resources See the Manually installing Ceph Manager section in the Red Hat Ceph Storage Installation Guide for more details on manually adding a Ceph Manager daemon to a bare-metal storage cluster. See the Removing a Ceph Manager using Ansible section in the Red Hat Ceph Storage Operations Guide for more details. 1.3.2. Removing a Ceph Manager using Ansible You can use the shrink-mgr playbook to remove the Ceph Manager daemons. This playbook removes a Ceph Manager from your cluster. Prerequisites A running Red Hat Ceph Storage cluster deployed by Ansible. Root or sudo access to an Ansible administration node. Admin access to the Ansible administration node. Procedure As an admin user, log in to the Ansible administration node. Navigate to the /usr/share/ceph-ansible/ directory. For bare-metal and containers deployments, run the shrink-mgr.yml Ansible playbook: Syntax Replace: NODE_NAME with the short host name of the Ceph Manager node. You can remove only one Ceph Manager each time the playbook runs. ANSIBLE_USER_NAME with the name of the Ansible user Example As a root user, edit the /usr/share/ceph-ansible/hosts inventory file and remove the Ceph Manager node under the [mgrs] section: Syntax Example In this example, ceph-2 was removed from the [mgrs] list. Verification On the monitor node, check the status of the storage cluster: Syntax Example Additional Resources For more information on installing Red Hat Ceph Storage, see the Red Hat Ceph Storage Installation Guide . See the Configuring Ansible's inventory location section in the Red Hat Ceph Storage Installation Guide for more details on the Ansible inventory configuration. 1.4. Ceph MDSs The Ceph Metadata Server (MDS) node runs the MDS daemon (ceph-mds), which manages metadata related to files stored on the Ceph File System (CephFS). The MDS provides POSIX-compliant, shared file-system metadata management, including ownership, time stamps, and mode. The MDS uses RADOS (Reliable Autonomic Distributed Object Storage) to store metadata. The MDS enables CephFS to interact with the Ceph Object Store, mapping an inode to an object and the location where Ceph stores the data within a tree. Clients accessing a CephFS file system first make a request to an MDS, which provides the information needed to get file content from the correct OSDs. 1.4.1. Adding a Ceph MDS using Ansible Use the Ansible playbook to add a Ceph Metadata Server (MDS).
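The procedure that follows amounts to roughly the sketch below; node05 is a placeholder host name, and the --limit mdss form mirrors the limited playbook runs used elsewhere in this chapter, so treat it as an assumption rather than the reproduced Syntax block:

    # /usr/share/ceph-ansible/hosts (inventory sketch)
    [mdss]
    node05

    # from /usr/share/ceph-ansible/, as the ansible user:
    ansible-playbook site.yml --limit mdss -i hosts             # bare-metal
    ansible-playbook site-container.yml --limit mdss -i hosts   # containers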
Prerequisites A running Red Hat Ceph Storage cluster deployed by Ansible. Root or sudo access to an Ansible administration node. New or existing servers that can be provisioned as MDS nodes. Procedure Log in to the Ansible administration node Change to the /usr/share/ceph-ansible directory: Example As root or with sudo access, open and edit the /usr/share/ceph-ansible/hosts inventory file and add the MDS node under the [mdss] section: Syntax Replace NEW_MDS_NODE_NAME with the host name of the node where you want to install the MDS server. Alternatively, you can colocate the MDS daemon with the OSD daemon on one node by adding the same node under the [osds] and [mdss] sections. Example As the ansible user, run the Ansible playbook to provision the MDS node: Bare-metal deployments: Container deployments: After the Ansible playbook has finished running, the new Ceph MDS node appears in the storage cluster. Verification Check the status of the MDS daemons: Syntax Example Alternatively, you can use the ceph mds stat command to check if the MDS is in an active state: Syntax Example Additional Resources For more information on installing Red Hat Ceph Storage, see the Red Hat Ceph Storage Installation Guide . See the Removing a Ceph MDS using Ansible section in the Red Hat Ceph Storage Troubleshooting Guide for more details on removing an MDS using Ansible. 1.4.2. Adding a Ceph MDS using the command-line interface You can manually add a Ceph Metadata Server (MDS) using the command-line interface. Prerequisites The ceph-common package is installed. A running Red Hat Ceph Storage cluster. Root or sudo access to the MDS nodes. New or existing servers that can be provisioned as MDS nodes. Procedure Add a new MDS node by logging into the node and creating an MDS mount point: Syntax Replace MDS_ID with the ID of the MDS node that you want to add the MDS daemon to. Example If this is a new MDS node, create the authentication key if you are using Cephx authentication: Syntax Replace MDS_ID with the ID of the MDS node to deploy the MDS daemon on. Example Note Cephx authentication is enabled by default. See the Cephx authentication link in the Additional Resources section for more information about Cephx authentication. Start the MDS daemon: Syntax Replace HOST_NAME with the short name of the host to start the daemon. Example Enable the MDS service: Syntax Replace HOST_NAME with the short name of the host to enable the service. Example Verification Check the status of the MDS daemons: Syntax Example Alternatively, you can use the ceph mds stat command to check if the MDS is in an active state: Syntax Example Additional Resources For more information on installing Red Hat Ceph Storage, see the Red Hat Ceph Storage Installation Guide . For more information on Cephx authentication, see the Red Hat Ceph Storage Configuration Guide . See the Removing a Ceph MDS using the command line interface section in the Red Hat Ceph Storage Troubleshooting Guide for more details on removing an MDS using the command line interface. 1.4.3. Removing a Ceph MDS using Ansible To remove a Ceph Metadata Server (MDS) using Ansible, use the shrink-mds playbook. Note If there is no replacement MDS to take over once the MDS is removed, the file system will become unavailable to clients. If that is not desirable, consider adding an additional MDS before removing the MDS you would like to take offline. Prerequisites At least one MDS node. A running Red Hat Ceph Storage cluster deployed by Ansible. 
Root or sudo access to an Ansible administration node. Procedure Log in to the Ansible administration node. Change to the /usr/share/ceph-ansible directory: Example Run the Ansible shrink-mds.yml playbook, and when prompted, type yes to confirm shrinking the cluster: Syntax Replace ID with the ID of the MDS node you want to remove. You can remove only one Ceph MDS each time the playbook runs. Example As root or with sudo access, open and edit the /usr/share/ceph-ansible/hosts inventory file and remove the MDS node under the [mdss] section: Syntax Example In this example, node02 was removed from the [mdss] list. Verification Check the status of the MDS daemons: Syntax Example Additional Resources For more information on installing Red Hat Ceph Storage, see the Red Hat Ceph Storage Installation Guide . See the Adding a Ceph MDS using Ansible section in the Red Hat Ceph Storage Troubleshooting Guide for more details on adding an MDS using Ansible. 1.4.4. Removing a Ceph MDS using the command-line interface You can manually remove a Ceph Metadata Server (MDS) using the command-line interface. Note If there is no replacement MDS to take over once the current MDS is removed, the file system will become unavailable to clients. If that is not desirable, consider adding an MDS before removing the existing MDS. Prerequisites The ceph-common package is installed. A running Red Hat Ceph Storage cluster. Root or sudo access to the MDS nodes. Procedure Log into the Ceph MDS node that you want to remove the MDS daemon from. Stop the Ceph MDS service: Syntax Replace HOST_NAME with the short name of the host where the daemon is running. Example Disable the MDS service if you are not redeploying MDS to this node: Syntax Replace HOST_NAME with the short name of the host to disable the daemon. Example Remove the /var/lib/ceph/mds/ceph- MDS_ID directory on the MDS node: Syntax Replace MDS_ID with the ID of the MDS node that you want to remove the MDS daemon from. Example Verification Check the status of the MDS daemons: Syntax Example Additional Resources For more information on installing Red Hat Ceph Storage, see the Red Hat Ceph Storage Installation Guide . See the Adding a Ceph MDS using the command line interface section in the Red Hat Ceph Storage Troubleshooting Guide for more details on adding an MDS using the command line interface. 1.5. Ceph OSDs When a Red Hat Ceph Storage cluster is up and running, you can add OSDs to the storage cluster at runtime. A Ceph OSD generally consists of one ceph-osd daemon for one storage drive and its associated journal within a node. If a node has multiple storage drives, then map one ceph-osd daemon for each drive. Red Hat recommends checking the capacity of a cluster regularly to see if it is reaching the upper end of its storage capacity. As a storage cluster reaches its near full ratio, add one or more OSDs to expand the storage cluster's capacity. When you want to reduce the size of a Red Hat Ceph Storage cluster or replace the hardware, you can also remove an OSD at runtime. If the node has multiple storage drives, you might also need to remove one of the ceph-osd daemon for that drive. Generally, it's a good idea to check the capacity of the storage cluster to see if you are reaching the upper end of its capacity. Ensure that when you remove an OSD that the storage cluster is not at its near full ratio. Important Do not let a storage cluster reach the full ratio before adding an OSD. 
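To keep an eye on how close the cluster is to its near full and full ratios, the capacity reports are usually enough; for example:

    ceph df          # cluster-wide and per-pool usage
    ceph osd df      # per-OSD utilization and variance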
OSD failures that occur after the storage cluster reaches the near full ratio can cause the storage cluster to exceed the full ratio. Ceph blocks write access to protect the data until you resolve the storage capacity issues. Do not remove OSDs without considering the impact on the full ratio first. 1.5.1. Ceph OSD node configuration Configure Ceph OSDs and their supporting hardware to match the storage strategy for the pool(s) that will use the OSDs. Ceph prefers uniform hardware across pools for a consistent performance profile. For best performance, consider a CRUSH hierarchy with drives of the same type or size. If you add drives of dissimilar size, adjust their weights accordingly. When you add the OSD to the CRUSH map, consider the weight for the new OSD. Hard drive capacity grows approximately 40% per year, so newer OSD nodes might have larger hard drives than older nodes in the storage cluster, that is, they might have a greater weight. Before doing a new installation, review the Requirements for Installing Red Hat Ceph Storage chapter in the Installation Guide . Additional Resources See the Red Hat Ceph Storage Storage Strategies Guide for more details. 1.5.2. Mapping a container OSD ID to a drive Sometimes, it is necessary to identify which drive a containerized OSD is using. For example, if an OSD has an issue, you might need to know which drive it uses to verify the drive status. Also, for a non-containerized OSD you reference the OSD ID to start and stop it, but to start and stop a containerized OSD you reference the drive it uses. Important The examples below are running on Red Hat Enterprise Linux 8. In Red Hat Enterprise Linux 8, podman is the default service and has replaced the older docker service. If you are running on Red Hat Enterprise Linux 7, then substitute podman with docker to execute the commands given. Prerequisites A running Red Hat Ceph Storage cluster in a containerized environment. Having root access to the container node. Procedure Find a container name. For example, to identify the drive associated with osd.5 , open a terminal on the container node where osd.5 is running, and then run podman ps to list all containers: Example Use podman exec to run ceph-volume lvm list on any OSD container name from the output: Example From this output you can see osd.5 is associated with /dev/sdb . Additional Resources See Replacing a failed OSD disk for more information. 1.5.3. Adding a Ceph OSD using Ansible with the same disk topology For Ceph OSDs with the same disk topology, Ansible adds the same number of OSDs as other OSD nodes using the same device paths specified in the devices: section of the /usr/share/ceph-ansible/group_vars/osds.yml file. Note The new Ceph OSD nodes have the same configuration as the rest of the OSDs. Prerequisites A running Red Hat Ceph Storage cluster. Review the Requirements for Installing Red Hat Ceph Storage chapter in the Red Hat Ceph Storage Installation Guide . Having root access to the new nodes. The same number of OSD data drives as other OSD nodes in the storage cluster. A brief sketch of the overall flow is shown below; the detailed procedure follows.
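The sketch assumes a hypothetical new host named osd07 and the default playbook location; adapt the host name and the inventory file that your deployment uses (/etc/ansible/hosts or /usr/share/ceph-ansible/hosts).
# 1. Add the new host under the existing [osds] section of the inventory file.
# 2. Confirm connectivity and run the add-osd.yml playbook from the Ansible administration node.
ansible all -m ping
cd /usr/share/ceph-ansible
ansible-playbook infrastructure-playbooks/add-osd.yml -i hosts
# 3. Because node-exporter and ceph-crash are not deployed by osds.yml, also run site.yml (bare-metal)
#    or site-container.yml (containers) limited to the new host.
ansible-playbook site.yml -i hosts --limit osd07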
Procedure Add the Ceph OSD node(s) to the /etc/ansible/hosts file, under the [osds] section: Syntax Verify that Ansible can reach the Ceph nodes: Navigate to the Ansible configuration directory: For bare-metal and container deployments, run the add-osd.yml Ansible playbook: Note For a new OSD host, you need to run either the site.yml or the site-container.yml playbook with the --limit option, because the node-exporter and ceph-crash services are not deployed on the node by the osds.yml playbook. Example For a new OSD host, run the site.yml or site-container.yml Ansible playbook: Bare-metal deployments: Syntax Example Container deployments: Syntax Example Note When adding an OSD, if the playbook fails with PGs were not reported as active+clean , configure the following variables in the all.yml file to adjust the retries and delay: Additional Resources See the Configuring Ansible's inventory location section in the Red Hat Ceph Storage Installation Guide for more details on the Ansible inventory configuration. 1.5.4. Adding a Ceph OSD using Ansible with different disk topologies For Ceph OSDs with different disk topologies, there are two approaches for adding the new OSD node(s) to an existing storage cluster. Prerequisites A running Red Hat Ceph Storage cluster. Review the Requirements for Installing Red Hat Ceph Storage chapter in the Red Hat Ceph Storage Installation Guide . Having root access to the new nodes. Procedure First Approach Add the new Ceph OSD node(s) to the /etc/ansible/hosts file, under the [osds] section: Example Create a new file for each new Ceph OSD node added to the storage cluster, under the /etc/ansible/host_vars/ directory: Syntax Example Edit the new file, and add the devices: and dedicated_devices: sections to the file. Under each of these sections add a - , a space, and then the full path to the block device names for this OSD node: Example Verify that Ansible can reach all the Ceph nodes: Change directory to the Ansible configuration directory: For bare-metal and container deployments, run the add-osd.yml Ansible playbook: Note For a new OSD host, you need to run either the site.yml or the site-container.yml playbook with the --limit option, because the node-exporter and ceph-crash services are not deployed on the node by the osds.yml playbook. Example For a new OSD host, run the site.yml or site-container.yml Ansible playbook: Bare-metal deployments: Syntax Example Container deployments: Syntax Example Second Approach Add the new OSD node name to the /etc/ansible/hosts file, and use the devices and dedicated_devices options, specifying the different disk topology: Example Verify that Ansible can reach all the Ceph nodes: Change directory to the Ansible configuration directory: For bare-metal and container deployments, run the add-osd.yml Ansible playbook: Note For a new OSD host, you need to run either the site.yml or the site-container.yml playbook with the --limit option, because the node-exporter and ceph-crash services are not deployed on the node by the osds.yml playbook. Example For a new OSD host, run the site.yml or site-container.yml Ansible playbook: Bare-metal deployments: Syntax Example Container deployments: Syntax Example Additional Resources See the Configuring Ansible's inventory location section in the Red Hat Ceph Storage Installation Guide for more details on the Ansible inventory configuration. 1.5.5. Creating Ceph OSDs using ceph-volume The create subcommand calls the prepare subcommand, and then calls the activate subcommand. Prerequisites A running Red Hat Ceph Storage cluster.
Root-level access to the Ceph OSD nodes. Note If you prefer to have more control over the creation process, you can use the prepare and activate subcommands separately to create the OSD, instead of using create . You can use the two subcommands to gradually introduce new OSDs into a storage cluster, while avoiding having to rebalance large amounts of data. Both approaches work the same way, except that using the create subcommand causes the OSD to become up and in immediately after completion. Procedure To create a new OSD: Syntax Example Additional Resources See the Preparing Ceph OSDs using `ceph-volume` section in the Red Hat Ceph Storage Administration Guide for more details. See the Activating Ceph OSDs using `ceph-volume` section in the Red Hat Ceph Storage Administration Guide for more details. 1.5.6. Using batch mode with ceph-volume The batch subcommand automates the creation of multiple OSDs when single devices are provided. The ceph-volume command decides the best method to use to create the OSDs, based on drive type. Ceph OSD optimization depends on the available devices: If all devices are traditional hard drives, batch creates one OSD per device. If all devices are solid state drives, batch creates two OSDs per device. If there is a mix of traditional hard drives and solid state drives, batch uses the traditional hard drives for data, and creates the largest possible journal ( block.db ) on the solid state drive. Note The batch subcommand does not support the creation of a separate logical volume for the write-ahead-log ( block.wal ) device. Prerequisites A running Red Hat Ceph Storage cluster. Root-level access to the Ceph OSD nodes. Procedure To create OSDs on several drives: Syntax Example Additional Resources See the Creating Ceph OSDs using `ceph-volume` section in the Red Hat Ceph Storage Administration Guide for more details. 1.5.7. Adding a Ceph OSD using the command-line interface Here is the high-level workflow for manually adding an OSD to a Red Hat Ceph Storage: Install the ceph-osd package and create a new OSD instance. Prepare and mount the OSD data and journal drives. Create volume groups and logical volumes. Add the new OSD node to the CRUSH map. Update the owner and group permissions. Enable and start the ceph-osd daemon. Important The ceph-disk command is deprecated. The ceph-volume command is now the preferred method for deploying OSDs from the command-line interface. Currently, the ceph-volume command only supports the lvm plugin. Red Hat will provide examples throughout this guide using both commands as a reference, allowing time for storage administrators to convert any custom scripts that rely on ceph-disk to ceph-volume instead. Note For custom storage cluster names, use the --cluster CLUSTER_NAME option with the ceph and ceph-osd commands. Prerequisites A running Red Hat Ceph Storage cluster. Review the Requirements for Installing Red Hat Ceph Storage chapter in the Red Hat Ceph Storage Installation Guide . The root access to the new nodes. Optional. If you do not want the ceph-volume utility to create a volume group and logical volumes automatically, create them manually. See the Configuring and managing logical volumes guide for Red Hat Enterprise Linux 8. Procedure Enable the Red Hat Ceph Storage 4 OSD software repository. 
Red Hat Enterprise Linux 7 Red Hat Enterprise Linux 8 Create the /etc/ceph/ directory: On the new OSD node, copy the Ceph administration keyring and configuration files from one of the Ceph Monitor nodes: Syntax Example Install the ceph-osd package on the new Ceph OSD node: Red Hat Enterprise Linux 7 Red Hat Enterprise Linux 8 Prepare the OSDs. To use previously created logical volumes: Syntax To specify a raw device for ceph-volume to create logical volumes automatically: Syntax See the Preparing OSDs section for more details. Set the noup option: Activate the new OSD: Syntax Example See the Activating OSDs section for more details. Note You can prepare and activate OSDs with a single command. See the Creating OSDs section for details. Alternatively, you can specify multiple drives and create OSDs with a single command. See the Using batch mode . Add the OSD to the CRUSH map. If you specify more than one bucket, the command places the OSD into the most specific bucket out of those you specified, and it moves the bucket underneath any other buckets you specified. Syntax Example Note If you specify more than one bucket, the command places the OSD into the most specific bucket out of those you specified, and it moves the bucket underneath any other buckets you specified. Note You can also edit the CRUSH map manually. See the Editing a CRUSH map section in the Red Hat Ceph Storage Storage Strategies Guide . Important If you specify only the root bucket, then the OSD attaches directly to the root, but the CRUSH rules expect OSDs to be inside of the host bucket. Unset the noup option: Update the owner and group permissions for the newly created directories: Syntax Example If you use storage clusters with custom names, then add the following line to the appropriate file: Replace CLUSTER_NAME with the custom storage cluster name. To ensure that the new OSD is up and ready to receive data, enable and start the OSD service: Syntax Example Additional Resources See the Editing a CRUSH map section in the Red Hat Ceph Storage Storage Strategies Guide for more information. See the Red Hat Ceph Storage Administration Guide , for more information on using the ceph-volume command. 1.5.8. Adding a Ceph OSD using the command-line interface in a containerized environment You can manually add a single or multiple Ceph OSD using the command-line interface in a containerized Red Hat Ceph Storage cluster. Important Red Hat recommends the use of ceph-ansible to add Ceph OSDs unless there is an exception or a specific use case where adding Ceph OSDs manually is required. If you are not sure, contact Red Hat Support . Prerequisites A running Red Hat Ceph Storage cluster in a containerized environment. Having root access to the container node. An existing OSD node. Important The examples below are running on Red Hat Enterprise Linux 8. In Red Hat Enterprise Linux 8, podman is the default service and has replaced the older docker service. If you are running on Red Hat Enterprise Linux 7, then substitute podman with docker to execute the commands given. Procedure To create a single OSD, execute the lvm prepare command: Syntax Example The example prepares a single Bluestore Ceph OSD with data on /dev/sdh . Note To enable and start the OSD, execute the following commands: Example You can also use the following optional arguments: dmcrypt Description Enable encryption for the underlying OSD devices. block.db Description Path to a bluestore block.db logical volume or partition. 
block.wal Description Path to a bluestore block.wal logical volume or partition. To create multiple Ceph OSDs, execute the lvm batch command: Syntax Example The example prepares multiple Bluestore Ceph OSDs with data on /dev/sde and /dev/sdf . You can also use the following optional arguments: dmcrypt Description Enable encryption for the underlying OSD devices. db-devices Description Path to a bluestore block.db logical volume or partition. wal-devices Description Path to a bluestore block.wal logical volume or partition. 1.5.9. Removing a Ceph OSD using Ansible At times, you might need to scale down the capacity of a Red Hat Ceph Storage cluster. To remove an OSD from a Red Hat Ceph Storage cluster using Ansible, run the shrink-osd.yml playbook. Important Removing an OSD from the storage cluster will destroy all the data contained on that OSD. Important Before removing OSDs, verify that the cluster has enough space to re-balance. Important Do not remove OSDs simultaneously unless you are sure the placement groups are in an active+clean state and the OSDs do not contain replicas or erasure coding shards for the same objects. If unsure, contact Red Hat Support . Prerequisites A running Red Hat Ceph Storage cluster deployed by Ansible. A running Ansible administration node. Procedure Change to the /usr/share/ceph-ansible/ directory. Syntax Copy the admin keyring from /etc/ceph/ on the Ceph Monitor node to the node that contains the OSD that you want to remove. Run the Ansible playbook for either normal or containerized deployments of Ceph: Syntax Replace: ID with the ID of the OSD. To remove multiple OSDs, separate the OSD IDs with a comma. ANSIBLE_USER_NAME with the name of the Ansible user. Example Verify that the OSD has been successfully removed: Syntax Additional Resources The Red Hat Ceph Storage Installation Guide . See the Configuring Ansible's inventory location section in the Red Hat Ceph Storage Installation Guide for more details on the Ansible inventory configuration. 1.5.10. Removing a Ceph OSD using the command-line interface Removing an OSD from a storage cluster involves these steps: * Updating the cluster map. * Removing its authentication key. * Removing the OSD from the OSD map. * Removing the OSD from the ceph.conf file. If the OSD node has multiple drives, you might need to remove an OSD for each drive by repeating this procedure for each OSD that you want to remove. Prerequisites A running Red Hat Ceph Storage cluster. Enough available OSDs so that the storage cluster is not at its near full ratio. Root-level access to the OSD node. Procedure Disable and stop the OSD service: Syntax Example Once the OSD is stopped, it is down . Remove the OSD from the storage cluster: Syntax Example Important Once the OSD has been removed, Ceph starts rebalancing and copying data to the remaining OSDs in the storage cluster. Red Hat recommends waiting until the storage cluster becomes active+clean before proceeding to the next step. To observe the data migration, run the following command: Syntax Remove the OSD from the CRUSH map so that it no longer receives data. Syntax Example Note To manually remove the OSD and the bucket that contains it, you can also decompile the CRUSH map, remove the OSD from the device list, remove the device as an item in the host bucket, or remove the host bucket. If it is in the CRUSH map and you intend to remove the host, recompile the map and set it. See the instructions for decompiling a CRUSH map in the Storage Strategies Guide for details; a brief sketch of that round-trip follows.
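A minimal sketch of the manual CRUSH map round-trip mentioned in the note above; the file names are arbitrary and this is separate from the removal steps that continue below.
ceph osd getcrushmap -o crushmap.bin
crushtool -d crushmap.bin -o crushmap.txt
# Edit crushmap.txt: remove the OSD from the devices list and from its host bucket, then recompile and inject the map.
crushtool -c crushmap.txt -o crushmap.new
ceph osd setcrushmap -i crushmap.new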
Remove the OSD authentication key: Syntax Example Remove the OSD: Syntax Example Edit the storage cluster's configuration file. The default name for the file is /etc/ceph/ceph.conf . Remove the OSD entry in the file, if it exists: Example Remove the reference to the OSD in the /etc/fstab file, if the OSD was added manually. Copy the updated configuration file to the /etc/ceph/ directory of all other nodes in the storage cluster. Syntax Example 1.5.11. Replacing a BlueStore database disk using the command-line interface When replacing the BlueStore DB device, block.db , that contains the BlueStore OSD's internal metadata, Red Hat supports the re-deploying of all OSDs using Ansible and the command-line interface (CLI). A corrupt block.db file will impact all OSDs which are included in that block.db file. The procedure to replace the BlueStore block.db disk is to mark out each device in turn, wait for the data to replicate across the cluster, replace the OSD, and mark it back in again. You can retain the OSD_ID and recreate the OSD with the new block.db partition on the replaced disk. Although this is a simple procedure, it requires a lot of data migration. Note If the block.db device has multiple OSDs, then follow this procedure for each of the OSDs on the block.db device. You can run ceph-volume lvm list to see block.db to block relationships. Prerequisites A running Red Hat Ceph Storage cluster. A storage device with a partition. Root-level access to all the nodes. Procedure Check current Ceph cluster status on the monitor node: Identify the failed OSDs to replace: Stop and disable OSD service on OSD node: Syntax Example Set OSD out on the monitor node: Syntax Example Wait for the data to migrate off the OSD: Syntax Example Stop the OSD daemon on the OSD node: Syntax Example Make note of which device this OSD is using: Syntax Example Unmount mount point of the failed drive path on OSD node: Syntax Example Set the noout and norebalance flags to avoid backfilling and re-balancing: Replace the physical drive. Refer to the hardware vendor's documentation for the node. Allow the new drive to appear under the /dev/ directory and make a note of the drive path before proceeding further. Destroy OSDs on the monitor node: Syntax Example Important This step destroys the contents of the device. Ensure the data on the device is not needed and the cluster is healthy. Remove the logical volume manager on the OSD disk: Syntax Example Zap the OSD disk on OSD node: Syntax Example Recreate lvm on OSD disk: Syntax Example Create lvm on the new block.db disk: Syntax Example Recreate the OSDs on the OSD node: Syntax Example Note Red Hat recommends using the same OSD_ID as the one destroyed in the previous steps. Start and enable OSD service on OSD node: Syntax Example Check the CRUSH hierarchy to ensure OSD is in the cluster: Unset noout and norebalance: Monitor cluster status until HEALTH_OK : Additional Resources See the Installing a Red Hat Ceph Storage cluster chapter in the Red Hat Ceph Storage Installation Guide for more information. 1.5.12. Observing the data migration When you add an OSD to or remove an OSD from the CRUSH map, Ceph begins rebalancing the data by migrating placement groups to the new or existing OSD(s). Prerequisites A running Red Hat Ceph Storage cluster. Recently added or removed an OSD. Procedure To observe the data migration: Watch as the placement group states change from active+clean to active, some degraded objects , and finally active+clean when migration completes. To exit the utility, press Ctrl + C . 1.6.
Recalculating the placement groups Placement groups (PGs) define the spread of any pool data across the available OSDs. A placement group is built upon the given redundancy algorithm to be used. For a 3-way replication, the redundancy is defined to use three different OSDs. For erasure-coded pools, the number of OSDs to use is defined by the number of chunks. Note See the KnowledgeBase article How do I increase placement group (PG) count in a Ceph Cluster for additional details. When defining a pool the number of placement groups defines the grade of granularity the data is spread with across all available OSDs. The higher the number the better the equalization of capacity load can be. However, since handling the placement groups is also important in case of reconstruction of data, the number is significant to be carefully chosen upfront. To support calculation a tool is available to produce agile environments. During lifetime of a storage cluster a pool may grow above the initially anticipated limits. With the growing number of drives a recalculation is recommended. The number of placement groups per OSD should be around 100. When adding more OSDs to the storage cluster the number of PGs per OSD will lower over time. Starting with 120 drives initially in the storage cluster and setting the pg_num of the pool to 4000 will end up in 100 PGs per OSD, given with the replication factor of three. Over time, when growing to ten times the number of OSDs, the number of PGs per OSD will go down to ten only. Because small number of PGs per OSD will tend to an unevenly distributed capacity, consider adjusting the PGs per pool. Adjusting the number of placement groups can be done online. Recalculating is not only a recalculation of the PG numbers, but will involve data relocation, which will be a lengthy process. However, the data availability will be maintained at any time. Very high numbers of PGs per OSD should be avoided, because reconstruction of all PGs on a failed OSD will start at once. A high number of IOPS is required to perform reconstruction in a timely manner, which might not be available. This would lead to deep I/O queues and high latency rendering the storage cluster unusable or will result in long healing times. Additional Resources See the PG calculator for calculating the values by a given use case. See the Erasure Code Pools chapter in the Red Hat Ceph Storage Strategies Guide for more information. 1.7. Using the Ceph Manager balancer module The balancer is a module for Ceph Manager ( ceph-mgr ) that optimizes the placement of placement groups (PGs) across OSDs in order to achieve a balanced distribution, either automatically or in a supervised fashion. Modes There are currently two supported balancer modes: crush-compat : The CRUSH compat mode uses the compat weight-set feature, introduced in Ceph Luminous, to manage an alternative set of weights for devices in the CRUSH hierarchy. The normal weights should remain set to the size of the device to reflect the target amount of data that you want to store on the device. The balancer then optimizes the weight-set values, adjusting them up or down in small increments in order to achieve a distribution that matches the target distribution as closely as possible. Because PG placement is a pseudorandom process, there is a natural amount of variation in the placement; by optimizing the weights, the balancer counter-acts that natural variation. This mode is fully backwards compatible with older clients. 
When an OSDMap and CRUSH map are shared with older clients, the balancer presents the optimized weights as the real weights. The primary restriction of this mode is that the balancer cannot handle multiple CRUSH hierarchies with different placement rules if the subtrees of the hierarchy share any OSDs. Because this configuration makes managing space utilization on the shared OSDs difficult, it is generally not recommended. As such, this restriction is normally not an issue. upmap : Starting with Luminous, the OSDMap can store explicit mappings for individual OSDs as exceptions to the normal CRUSH placement calculation. These upmap entries provide fine-grained control over the PG mapping. This CRUSH mode will optimize the placement of individual PGs in order to achieve a balanced distribution. In most cases, this distribution is "perfect", with an equal number of PGs on each OSD +/-1 PG, as they might not divide evenly. Important To allow use of this feature, you must tell the cluster that it only needs to support luminous or later clients with the following command: This command fails if any pre-luminous clients or daemons are connected to the monitors. Due to a known issue, kernel CephFS clients report themselves as jewel clients. To work around this issue, use the --yes-i-really-mean-it flag: You can check what client versions are in use with: Prerequisites A running Red Hat Ceph Storage cluster. Procedure Make sure that the balancer module is on: Example If the balancer module is not listed in the always_on or enabled modules, enable it: Syntax Turn on the balancer module: The default mode is crush-compat . The mode can be changed with: or Status The current status of the balancer can be checked at any time with: Automatic balancing By default, when turning on the balancer module, automatic balancing is used: The balancer can be turned back off again with: This will use the crush-compat mode, which is backward compatible with older clients and will make small changes to the data distribution over time to ensure that OSDs are equally utilized. Throttling No adjustments will be made to the PG distribution if the cluster is degraded, for example, if an OSD has failed and the system has not yet healed itself. When the cluster is healthy, the balancer throttles its changes such that the percentage of PGs that are misplaced, or need to be moved, is below a threshold of 5% by default. This percentage can be adjusted using the target_max_misplaced_ratio setting. For example, to increase the threshold to 7%: Example For automatic balancing: Set the number of seconds to sleep in between runs of the automatic balancer: Example Set the time of day to begin automatic balancing in HHMM format: Example Set the time of day to finish automatic balancing in HHMM format: Example Restrict automatic balancing to this day of the week or later. Uses the same conventions as crontab, 0 is Sunday, 1 is Monday, and so on: Example Restrict automatic balancing to this day of the week or earlier. This uses the same conventions as crontab, 0 is Sunday, 1 is Monday, and so on: Example Define the pool IDs to which the automatic balancing is limited. The default for this is an empty string, meaning all pools are balanced. The numeric pool IDs can be gotten with the ceph osd pool ls detail command: Example Supervised optimization The balancer operation is broken into a few distinct phases: Building a plan . 
Evaluating the quality of the data distribution, either for the current PG distribution, or the PG distribution that would result after executing a plan . Executing the plan . To evaluate and score the current distribution: To evaluate the distribution for a single pool: Syntax Example To see greater detail for the evaluation: To generate a plan using the currently configured mode: Syntax Replace PLAN_NAME with a custom plan name. Example To see the contents of a plan: Syntax Example To discard old plans: Syntax Example To see currently recorded plans use the status command: To calculate the quality of the distribution that would result after executing a plan: Syntax Example To execute the plan: Syntax Example Note Only execute the plan if is expected to improve the distribution. After execution, the plan will be discarded. 1.8. Using upmap to manually rebalance data on OSDs As a storage administrator, you can manually rebalance data on OSDs by moving selected placement groups (PGs) to specific OSDs. To perform manual rebalancing, turn off the Ceph Manager balancer module and use the upmap mode to move the PGs. Prerequisites A running Red Hat storage cluster. Root-level access to all nodes in the storage cluster. Procedure Make sure that the balancer module is on: Example If the balancer module is not listed in the always_on or enabled modules, enable it: Syntax Set the balancer mode to upmap : Syntax Turn off the balancer module: Syntax Check balancer status: Example Set the norebalance flag for the OSDs: Syntax Use the ceph pg dump pgs_brief command to list the pools in your storage cluster and the space each consumes. Use grep to search for remapped pools. Example Move the PGs to the OSDs where you want them to reside. For example, to move PG 7.ac from OSDs 8 and 3 to OSDs 3 and 37: Example Note Repeat this step to move each of the remapped PGs, one at a time. Use the ceph pg dump pgs_brief command again to check that the PGs move to the active+clean state: Example The time it takes for the PGs to move to active+clean depends on the numbers of PGs and OSDs. In addition, the number of objects misplaced depends on the value set for mgr target_max_misplaced_ratio . A higher value set for target_max_misplaced_ratio results in a greater number of misplaced objects; thus, it takes a longer time for all PGs to become active+clean . Unset the norebalance flag: Syntax Turn the balancer module back on: Syntax Once you enable the balancer module, it slowly moves the PGs back to their intended OSDs according to the CRUSH rules for the storage cluster. The balancing process might take some time, but completes eventually. 1.9. Using the Ceph Manager alerts module You can use the Ceph Manager alerts module to send simple alert messages about the Red Hat Ceph Storage cluster's health by email. Note This module is not intended to be a robust monitoring solution. The fact that it is run as part of the Ceph cluster itself is fundamentally limiting in that a failure of the ceph-mgr daemon prevents alerts from being sent. This module can, however, be useful for standalone clusters that exist in environments where existing monitoring infrastructure does not exist. Prerequisites A running Red Hat Ceph Storage cluster. Root-level access to the Ceph Monitor node. Procedure Enable the alerts module: Example Ensure the alerts module is enabled: Example Configure the Simple Mail Transfer Protocol (SMTP): Syntax Example Optional: By default, the alerts module uses SSL and port 465. 
To change that, set the smtp_ssl to false : Syntax Example Authenticate to the SMTP server: Syntax Example Optional: By default, the SMTP From name is Ceph . To change that, set the smtp_from_name parameter: Syntax Example Optional: By default, the alerts module checks the storage cluster's health every minute, and sends a message when there is a change in the cluster health status. To change the frequency, set the interval parameter: Syntax Example In this example, the interval is set to 5 minutes. Optional: Send an alert immediately: Example Additional Resources See the Health messages of a Ceph cluster section in the Red Hat Ceph Storage Troubleshooting Guide for more information on Ceph health messages. 1.10. Using the Ceph manager crash module Using the Ceph manager crash module, you can collect information about daemon crashdumps and store it in the Red Hat Ceph Storage cluster for further analysis. By default, daemon crashdumps are dumped in /var/lib/ceph/crash . You can configure crashdumps with the option crash dir . Crash directories are named by time, date, and a randomly-generated UUID, and contain a metadata file meta and a recent log file, with a crash_id that is the same. You can use ceph-crash.service to submit these crashes automatically and persist them in the Ceph Monitors. The ceph-crash.service watches the crashdump directory and uploads new crash dumps with ceph crash post . The RECENT_CRASH health message is one of the most common health messages in a Ceph cluster. This health message means that one or more Ceph daemons have crashed recently, and the crash has not yet been archived or acknowledged by the administrator. This might indicate a software bug, a hardware problem like a failing disk, or some other problem. The option mgr/crash/warn_recent_interval controls the time period of what recent means, which is two weeks by default. You can disable the warnings by running the following command: Example The option mgr/crash/retain_interval controls the period for which you want to retain the crash reports before they are automatically purged. The default for this option is one year. Prerequisites A running Red Hat Ceph Storage cluster. Procedure Ensure the crash module is enabled: Example Save a crash dump: The metadata file is a JSON blob stored in the crash dir as meta . You can invoke the ceph command with the -i - option, which reads from stdin. Example List the timestamp or the UUID crash IDs for all the new and archived crash info: Example List the timestamp or the UUID crash IDs for all the new crash information: Example List the summary of saved crash information grouped by age: Example View the details of the saved crash: Syntax Example Remove saved crashes older than KEEP days: Here, KEEP must be an integer. Syntax Example Archive a crash report so that it is no longer considered for the RECENT_CRASH health check and does not appear in the crash ls-new output. It appears in the crash ls output. Syntax Example Archive all crash reports: Example Remove the crash dump: Syntax Example Additional Resources See the Health messages of a Ceph cluster section in the Red Hat Ceph Storage Troubleshooting Guide for more information on Ceph health messages. 1.11. Migrating RBD mirroring daemons For two-way Block device (RBD) mirroring configured using the command-line interface in a bare-metal storage cluster, the cluster does not migrate RBD mirroring.
Migrate RBD mirror daemons from CLI to Ceph-Ansible prior to upgrading the storage cluster or converting the cluster to containerized. Prerequisites A running Red Hat Ceph Storage non-containerized, bare-metal, cluster. Access to the Ansible administration node. An ansible user account. Sudo access to the ansible user account. Procedure Create a user on the Ceph client node: Syntax Example Change the username in the auth file in /etc/ceph directory: Example Import the auth file to add relevant permissions: Syntax Example Check the service name of the RBD mirror node: Example Add the rbd-mirror node to the /etc/ansible/hosts file: Example 1.12. Additional Resources See the Red Hat Ceph Storage Installation Guide for details on installing the Red Hat Ceph Storage product. See the Placement Groups (PGs) chapter in the Red Hat Ceph Storage Strategies Guide for more information. See the Red Hat Enterprise Linux 8 Configuring and Managing Logical Volumes guide for more details. | [
"yum install ntp",
"dnf install chrony",
"firewall-cmd --zone=public --add-port=6789/tcp firewall-cmd --zone=public --add-port=6789/tcp --permanent",
"[mons] monitor01 monitor02 monitor03 NEW_MONITOR_NODE_NAME NEW_MONITOR_NODE_NAME",
"ansible all -m ping",
"cd /usr/share/ceph-ansible",
"ansible-playbook -vvvv -i hosts infrastructure-playbooks/add-mon.yml",
"[ansible@admin ceph-ansible]USD ansible-playbook -vvvv -i hosts site.yml --limit mons",
"[ansible@admin ceph-ansible]USD ansible-playbook -vvvv -i hosts site-container.yml --limit mons",
"[user@admin ceph-ansible]USD ansible-playbook -vvvv -i hosts site.yml --tags ceph_update_config",
"[user@admin ceph-ansible]USD ansible-playbook -vvvv -i hosts site-container.yml --tags ceph_update_config",
"subscription-manager repos --enable=rhel-7-server-rhceph-4-mon-rpms",
"subscription-manager repos --enable=rhceph-4-mon-for-rhel-8-x86_64-rpms",
"yum install ceph-mon",
"dnf install ceph-mon",
"[mon] mon_host = MONITOR_IP : PORT MONITOR_IP : PORT ... NEW_MONITOR_IP : PORT",
"[mon. MONITOR_ID ] host = MONITOR_ID mon_addr = MONITOR_IP",
"[global] mon_initial_members = node1 node2 node3 node4 node5 [mon] mon_host = 192.168.0.1:6789 192.168.0.2:6789 192.168.0.3:6789 192.168.0.4:6789 192.168.0.5:6789 [mon.node4] host = node4 mon_addr = 192.168.0.4 [mon.node5] host = node5 mon_addr = 192.168.0.5",
"scp /etc/ceph/ CLUSTER_NAME .conf TARGET_NODE_NAME :/etc/ceph",
"scp /etc/ceph/ceph.conf node4:/etc/ceph",
"mkdir /var/lib/ceph/mon/ CLUSTER_NAME - MONITOR_ID",
"mkdir /var/lib/ceph/mon/ceph-node4",
"mkdir TEMP_DIRECTORY_PATH_NAME",
"mkdir /tmp/ceph",
"scp /etc/ceph/ CLUSTER_NAME .client.admin.keyring TARGET_NODE_NAME :/etc/ceph",
"scp /etc/ceph/ceph.client.admin.keyring node4:/etc/ceph",
"ceph auth get mon. -o TEMP_DIRECTORY_PATH_NAME / KEY_FILE_NAME",
"ceph auth get mon. -o /tmp/ceph/ceph_keyring.out",
"ceph mon getmap -o TEMP_DIRECTORY_PATH_NAME / MONITOR_MAP_FILE",
"ceph mon getmap -o /tmp/ceph/ceph_mon_map.out",
"scp /tmp/ceph TARGET_NODE_NAME :/tmp/ceph",
"scp /tmp/ceph node4:/tmp/ceph",
"ceph-mon -i MONITOR_ID --mkfs --monmap TEMP_DIRECTORY_PATH_NAME / MONITOR_MAP_FILE --keyring TEMP_DIRECTORY_PATH_NAME / KEY_FILE_NAME",
"ceph-mon -i node4 --mkfs --monmap /tmp/ceph/ceph_mon_map.out --keyring /tmp/ceph/ceph_keyring.out",
"echo \"CLUSTER= CUSTOM_CLUSTER_NAME \" >> /etc/sysconfig/ceph",
"echo \"CLUSTER=example\" >> /etc/sysconfig/ceph",
"chown -R OWNER : GROUP DIRECTORY_PATH",
"chown -R ceph:ceph /var/lib/ceph/mon chown -R ceph:ceph /var/log/ceph chown -R ceph:ceph /var/run/ceph chown -R ceph:ceph /etc/ceph",
"systemctl enable ceph-mon.target systemctl enable ceph-mon@ MONITOR_ID systemctl start ceph-mon@ MONITOR_ID",
"systemctl enable ceph-mon.target systemctl enable ceph-mon@node4 systemctl start ceph-mon@node4",
"ceph mon set election_strategy {classic|disallow|connectivity}",
"ceph mon ok-to-stop MONITOR_ID",
"ceph mon ok-to-stop node03",
"[user@admin ~]USD cd /usr/share/ceph-ansible",
"ansible-playbook infrastructure-playbooks/shrink-mon.yml -e mon_to_kill= NODE_NAME -u ANSIBLE_USER_NAME -i hosts",
"[user@admin ceph-ansible]USD ansible-playbook infrastructure-playbooks/shrink-mon.yml -e mon_to_kill=node03 -u user -i hosts",
"[user@admin ceph-ansible]USD ansible-playbook site.yml --tags ceph_update_config -i hosts",
"[user@admin ceph-ansible]USD ansible-playbook site-container.yml --tags ceph_update_config -i hosts",
"ceph -s",
"ceph mon ok-to-stop HOSTNAME",
"ceph mon ok-to-stop node03",
"systemctl stop ceph-mon@ MONITOR_ID",
"systemctl stop ceph-mon@node3",
"ceph mon remove MONITOR_ID",
"ceph mon remove node3",
"scp /etc/ceph/ CLUSTER_NAME .conf USER_NAME @ TARGET_NODE_NAME :/etc/ceph/",
"scp /etc/ceph/ceph.conf root@node3:/etc/ceph/",
"systemctl disable ceph-mon@ MONITOR_ID",
"systemctl disable ceph-mon@node3",
"rm /etc/systemd/system/[email protected]",
"systemctl daemon-reload",
"systemctl reset-failed",
"mv /var/lib/ceph/mon/ CLUSTER_NAME - MONITOR_ID /var/lib/ceph/mon/removed- CLUSTER_NAME - MONITOR_ID",
"mv /var/lib/ceph/mon/ceph-node3 /var/lib/ceph/mon/removed-ceph-node3",
"rm -r /var/lib/ceph/mon/ CLUSTER_NAME - MONITOR_ID",
"rm -r /var/lib/ceph/mon/ceph-node3",
"ssh root@ MONITOR_NODE_NAME",
"ssh root@mon2",
"systemctl stop ceph-mon@ MONITOR_ID ceph-mon -i SHORT_HOSTNAME --extract-monmap TEMP_PATH",
"systemctl stop ceph-mon@mon1 ceph-mon -i mon1 --extract-monmap /tmp/monmap",
"monmaptool TEMPORARY_PATH --rm _MONITOR_ID",
"monmaptool /tmp/monmap --rm mon1",
"ceph-mon -i SHORT_HOSTNAME --inject-monmap TEMP_PATH",
"ceph-mon -i mon2 --inject-monmap /tmp/monmap",
"ceph -s",
"[ansible@admin ~]USD cd /usr/share/ceph-ansible/",
"[mgrs] CEPH_MANAGER_NODE_NAME CEPH_MANAGER_NODE_NAME",
"[ansible@admin ceph-ansible]USD ansible-playbook site.yml --limit mgrs -i hosts",
"[ansible@admin ceph-ansible]USD ansible-playbook site-container.yml --limit mgrs -i hosts",
"ceph -s",
"ceph -s mgr: ceph-3(active, since 2h), standbys: ceph-1, ceph-2",
"[admin@admin ~]USD cd /usr/share/ceph-ansible/",
"ansible-playbook infrastructure-playbooks/shrink-mgr.yml -e mgr_to_kill= NODE_NAME -u ANSIBLE_USER_NAME -i hosts",
"[admin@admin ceph-ansible]USD ansible-playbook infrastructure-playbooks/shrink-mgr.yml -e mgr_to_kill=ceph-2 -u admin -i hosts",
"[mgrs] CEPH_MANAGER_NODE_NAME CEPH_MANAGER_NODE_NAME",
"[mgrs] ceph-1 ceph-3",
"ceph -s",
"ceph -s mgr: ceph-3(active, since 112s), standbys: ceph-1",
"[ansible@admin ~]USD cd /usr/share/ceph-ansible",
"[mdss] MDS_NODE_NAME NEW_MDS_NODE_NAME",
"[mdss] node01 node03",
"[ansible@admin ceph-ansible]USD ansible-playbook site.yml --limit mdss -i hosts",
"[ansible@admin ceph-ansible]USD ansible-playbook site-container.yml --limit mdss -i hosts",
"ceph fs dump",
"[ansible@admin ceph-ansible]USD ceph fs dump [mds.node01 {0:115304} state up:active seq 5 addr [v2:172.25.250.10:6800/695510951,v1:172.25.250.10:6801/695510951]] Standby daemons: [mds.node03 {-1:144437} state up:standby seq 2 addr [v2:172.25.250.11:6800/172950087,v1:172.25.250.11:6801/172950087]]",
"ceph mds stat",
"[ansible@admin ceph-ansible]USD ceph mds stat cephfs:1 {0=node01=up:active} 1 up:standby",
"sudo mkdir /var/lib/ceph/mds/ceph- MDS_ID",
"[admin@node03 ~]USD sudo mkdir /var/lib/ceph/mds/ceph-node03",
"sudo ceph auth get-or-create mds. MDS_ID mon 'profile mds' mgr 'profile mds' mds 'allow *' osd 'allow *' > /var/lib/ceph/mds/ceph- MDS_ID /keyring",
"[admin@node03 ~]USD sudo ceph auth get-or-create mds.node03 mon 'profile mds' mgr 'profile mds' mds 'allow *' osd 'allow *' > /var/lib/ceph/mds/ceph-node03/keyring",
"sudo systemctl start ceph-mds@ HOST_NAME",
"[admin@node03 ~]USD sudo systemctl start ceph-mds@node03",
"systemctl enable ceph-mds@ HOST_NAME",
"[admin@node03 ~]USD sudo systemctl enable ceph-mds@node03",
"ceph fs dump",
"[admin@mon]USD ceph fs dump [mds.node01 {0:115304} state up:active seq 5 addr [v2:172.25.250.10:6800/695510951,v1:172.25.250.10:6801/695510951]] Standby daemons: [mds.node03 {-1:144437} state up:standby seq 2 addr [v2:172.25.250.11:6800/172950087,v1:172.25.250.11:6801/172950087]]",
"ceph mds stat",
"[ansible@admin ceph-ansible]USD ceph mds stat cephfs:1 {0=node01=up:active} 1 up:standby",
"[ansible@admin ~]USD cd /usr/share/ceph-ansible",
"ansible-playbook infrastructure-playbooks/shrink-mds.yml -e mds_to_kill= ID -i hosts",
"[ansible @admin ceph-ansible]USD ansible-playbook infrastructure-playbooks/shrink-mds.yml -e mds_to_kill=node02 -i hosts",
"[mdss] MDS_NODE_NAME MDS_NODE_NAME",
"[mdss] node01 node03",
"ceph fs dump",
"[ansible@admin ceph-ansible]USD ceph fs dump [mds.node01 {0:115304} state up:active seq 5 addr [v2:172.25.250.10:6800/695510951,v1:172.25.250.10:6801/695510951]] Standby daemons: [mds.node03 {-1:144437} state up:standby seq 2 addr [v2:172.25.250.11:6800/172950087,v1:172.25.250.11:6801/172950087]]",
"sudo systemctl stop ceph-mds@ HOST_NAME",
"[admin@node02 ~]USD sudo systemctl stop ceph-mds@node02",
"sudo systemctl disable ceph-mds@ HOST_NAME",
"[admin@node02 ~]USD sudo systemctl disable ceph-mds@node02",
"sudo rm -fr /var/lib/ceph/mds/ceph- MDS_ID",
"[admin@node02 ~]USD sudo rm -fr /var/lib/ceph/mds/ceph-node02",
"ceph fs dump",
"[ansible@admin ceph-ansible]USD ceph fs dump [mds.node01 {0:115304} state up:active seq 5 addr [v2:172.25.250.10:6800/695510951,v1:172.25.250.10:6801/695510951]] Standby daemons: [mds.node03 {-1:144437} state up:standby seq 2 addr [v2:172.25.250.11:6800/172950087,v1:172.25.250.11:6801/172950087]]",
"podman ps CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES 3a866f927b74 registry.redhat.io/rhceph/rhceph-4-rhel8:latest \"/entrypoint.sh\" About an hour ago Up About an hour ceph-osd-ceph3-sdd 4e242d932c32 registry.redhat.io/rhceph/rhceph-4-rhel8:latest \"/entrypoint.sh\" About an hour ago Up About an hour ceph-osd-ceph3-sdc 91f3d4829079 registry.redhat.io/rhceph/rhceph-4-rhel8:latest \"/entrypoint.sh\" 22 hours ago Up 22 hours ceph-osd-ceph3-sdb 73dfe4021a49 registry.redhat.io/rhceph/rhceph-4-rhel8:latest \"/entrypoint.sh\" 7 days ago Up 7 days ceph-osd-ceph3-sdf 90f6d756af39 registry.redhat.io/rhceph/rhceph-4-rhel8:latest \"/entrypoint.sh\" 7 days ago Up 7 days ceph-osd-ceph3-sde e66d6e33b306 registry.redhat.io/rhceph/rhceph-4-rhel8:latest \"/entrypoint.sh\" 7 days ago Up 7 days ceph-mgr-ceph3 733f37aafd23 registry.redhat.io/rhceph/rhceph-4-rhel8:latest \"/entrypoint.sh\" 7 days ago Up 7 days ceph-mon-ceph3",
"podman exec ceph-osd-ceph3-sdb ceph-volume lvm list ====== osd.5 ======= [journal] /dev/journals/journal1 journal uuid C65n7d-B1gy-cqX3-vZKY-ZoE0-IEYM-HnIJzs osd id 1 cluster fsid ce454d91-d748-4751-a318-ff7f7aa18ffd type journal osd fsid 661b24f8-e062-482b-8110-826ffe7f13fa data uuid SlEgHe-jX1H-QBQk-Sce0-RUls-8KlY-g8HgcZ journal device /dev/journals/journal1 data device /dev/test_group/data-lv2 devices /dev/sda [data] /dev/test_group/data-lv2 journal uuid C65n7d-B1gy-cqX3-vZKY-ZoE0-IEYM-HnIJzs osd id 1 cluster fsid ce454d91-d748-4751-a318-ff7f7aa18ffd type data osd fsid 661b24f8-e062-482b-8110-826ffe7f13fa data uuid SlEgHe-jX1H-QBQk-Sce0-RUls-8KlY-g8HgcZ journal device /dev/journals/journal1 data device /dev/test_group/data-lv2 devices /dev/sdb",
"[osds] osd06 NEW_OSD_NODE_NAME",
"[user@admin ~]USD ansible all -m ping",
"[user@admin ~]USD cd /usr/share/ceph-ansible",
"[user@admin ceph-ansible]USD ansible-playbook infrastructure-playbooks/add-osd.yml -i hosts",
"ansible-playbook site.yml -i hosts --limit NEW_OSD_NODE_NAME",
"[user@admin ceph-ansible]USD ansible-playbook site.yml -i hosts --limit node03",
"ansible-playbook site-container.yml -i hosts --limit NEW_OSD_NODE_NAME",
"[user@admin ceph-ansible]USD ansible-playbook site-container.yml -i hosts --limit node03",
"OSD handler checks handler_health_osd_check_retries: 50 handler_health_osd_check_delay: 30",
"[osds] osd06 NEW_OSD_NODE_NAME",
"touch /etc/ansible/host_vars/ NEW_OSD_NODE_NAME",
"touch /etc/ansible/host_vars/osd07",
"devices: - /dev/sdc - /dev/sdd - /dev/sde - /dev/sdf dedicated_devices: - /dev/sda - /dev/sda - /dev/sdb - /dev/sdb",
"[user@admin ~]USD ansible all -m ping",
"[user@admin ~]USD cd /usr/share/ceph-ansible",
"[user@admin ceph-ansible]USD ansible-playbook infrastructure-playbooks/add-osd.yml -i hosts",
"ansible-playbook site.yml -i hosts --limit NEW_OSD_NODE_NAME",
"[user@admin ceph-ansible]USD ansible-playbook site.yml -i hosts --limit node03",
"ansible-playbook site-container.yml -i hosts --limit NEW_OSD_NODE_NAME",
"[user@admin ceph-ansible]USD ansible-playbook site-container.yml -i hosts --limit node03",
"[osds] osd07 devices=\"['/dev/sdc', '/dev/sdd', '/dev/sde', '/dev/sdf']\" dedicated_devices=\"['/dev/sda', '/dev/sda', '/dev/sdb', '/dev/sdb']\"",
"[user@admin ~]USD ansible all -m ping",
"[user@admin ~]USD cd /usr/share/ceph-ansible",
"[user@admin ceph-ansible]USD ansible-playbook infrastructure-playbooks/add-osd.yml -i hosts",
"ansible-playbook site.yml -i hosts --limit NEW_OSD_NODE_NAME",
"[user@admin ceph-ansible]USD ansible-playbook site.yml -i hosts --limit node03",
"ansible-playbook site-container.yml -i hosts --limit NEW_OSD_NODE_NAME",
"[user@admin ceph-ansible]USD ansible-playbook site-container.yml -i hosts --limit node03",
"ceph-volume lvm create --bluestore --data VOLUME_GROUP / LOGICAL_VOLUME",
"ceph-volume lvm create --bluestore --data example_vg/data_lv",
"ceph-volume lvm batch --bluestore PATH_TO_DEVICE [ PATH_TO_DEVICE ]",
"ceph-volume lvm batch --bluestore /dev/sda /dev/sdb /dev/nvme0n1",
"subscription-manager repos --enable=rhel-7-server-rhceph-4-osd-rpms",
"subscription-manager repos --enable=rhceph-4-osd-for-rhel-8-x86_64-rpms",
"mkdir /etc/ceph",
"scp USER_NAME @ MONITOR_HOST_NAME :/etc/ceph/ CLUSTER_NAME .client.admin.keyring /etc/ceph scp USER_NAME @ MONITOR_HOST_NAME :/etc/ceph/ CLUSTER_NAME .conf /etc/ceph",
"scp root@node1:/etc/ceph/ceph.client.admin.keyring /etc/ceph/ scp root@node1:/etc/ceph/ceph.conf /etc/ceph/",
"yum install ceph-osd",
"dnf install ceph-osd",
"ceph-volume lvm prepare --bluestore --data VOLUME_GROUP / LOGICAL_VOLUME",
"ceph-volume lvm prepare --bluestore --data /PATH_TO_DEVICE",
"ceph osd set noup",
"ceph-volume lvm activate --bluestore OSD_ID OSD_FSID",
"ceph-volume lvm activate --bluestore 4 6cc43680-4f6e-4feb-92ff-9c7ba204120e",
"ceph osd crush add OSD_ID WEIGHT [ BUCKET_TYPE = BUCKET_NAME ...]",
"ceph osd crush add 4 1 host=node4",
"ceph osd unset noup",
"chown -R OWNER : GROUP PATH_TO_DIRECTORY",
"chown -R ceph:ceph /var/lib/ceph/osd chown -R ceph:ceph /var/log/ceph chown -R ceph:ceph /var/run/ceph chown -R ceph:ceph /etc/ceph",
"echo \"CLUSTER= CLUSTER_NAME \" >> /etc/sysconfig/ceph",
"systemctl enable ceph-osd@ OSD_ID systemctl start ceph-osd@ OSD_ID",
"systemctl enable ceph-osd@4 systemctl start ceph-osd@4",
"run --rm --net=host --privileged=true --pid=host --ipc=host -v /dev:/dev -v /etc/localtime:/etc/localtime:ro -v /var/lib/ceph:/var/lib/ceph:z -v /etc/ceph:/etc/ceph:z -v /var/run/ceph:/var/run/ceph:z -v /var/run/udev/:/var/run/udev/ -v /var/log/ceph:/var/log/ceph:z -v /run/lvm/:/run/lvm/ --entrypoint=ceph-volume PATH_TO_IMAGE --cluster CLUSTER_NAME lvm prepare --bluestore --data PATH_TO_DEVICE --no-systemd",
"podman run --rm --net=host --privileged=true --pid=host --ipc=host -v /dev:/dev -v /etc/localtime:/etc/localtime:ro -v /var/lib/ceph:/var/lib/ceph:z -v /etc/ceph:/etc/ceph:z -v /var/run/ceph:/var/run/ceph:z -v /var/run/udev/:/var/run/udev/ -v /var/log/ceph:/var/log/ceph:z -v /run/lvm/:/run/lvm/ --entrypoint=ceph-volume registry.redhat.io/rhceph/rhceph-4-rhel8:latest --cluster ceph lvm prepare --bluestore --data /dev/sdh --no-systemd",
"systemctl enable ceph-osd@4 systemctl start ceph-osd@4",
"run --rm --net=host --privileged=true --pid=host --ipc=host -v /dev:/dev -v /etc/localtime:/etc/localtime:ro -v /var/lib/ceph:/var/lib/ceph:z -v /etc/ceph:/etc/ceph:z -v /var/run/ceph:/var/run/ceph:z -v /var/run/udev/:/var/run/udev/ -v /var/log/ceph:/var/log/ceph:z -v /run/lvm/:/run/lvm/ --entrypoint=ceph-volume PATH_TO_IMAGE --cluster CLUSTER_NAME lvm batch --bluestore --yes --prepare _PATH_TO_DEVICE PATH_TO_DEVICE --no-systemd",
"podman run --rm --net=host --privileged=true --pid=host --ipc=host -v /dev:/dev -v /etc/localtime:/etc/localtime:ro -v /var/lib/ceph:/var/lib/ceph:z -v /etc/ceph:/etc/ceph:z -v /var/run/ceph:/var/run/ceph:z -v /var/run/udev/:/var/run/udev/ -v /var/log/ceph:/var/log/ceph:z -v /run/lvm/:/run/lvm/ --entrypoint=ceph-volume registry.redhat.io/rhceph/rhceph-4-rhel8:latest --cluster ceph lvm batch --bluestore --yes --prepare /dev/sde /dev/sdf --no-systemd",
"[user@admin ~]USD cd /usr/share/ceph-ansible",
"ansible-playbook infrastructure-playbooks/shrink-osd.yml -e osd_to_kill= ID -u ANSIBLE_USER_NAME -i hosts",
"[user@admin ceph-ansible]USD ansible-playbook infrastructure-playbooks/shrink-osd.yml -e osd_to_kill=1 -u user -i hosts",
"ceph osd tree",
"systemctl disable ceph-osd@ OSD_ID systemctl stop ceph-osd@ OSD_ID",
"systemctl disable ceph-osd@4 systemctl stop ceph-osd@4",
"ceph osd out OSD_ID",
"ceph osd out 4",
"ceph -w",
"ceph osd crush remove OSD_NAME",
"ceph osd crush remove osd.4",
"ceph auth del osd. OSD_ID",
"ceph auth del osd.4",
"ceph osd rm OSD_ID",
"ceph osd rm 4",
"[osd.4] host = _HOST_NAME_",
"scp /etc/ceph/ CLUSTER_NAME .conf USER_NAME @ HOST_NAME :/etc/ceph/",
"scp /etc/ceph/ceph.conf root@node4:/etc/ceph/",
"ceph status ceph df",
"ceph osd tree | grep -i down",
"systemctl disable ceph-osd@ OSD_ID systemctl stop ceph-osd@ OSD_ID",
"systemctl stop ceph-osd@1 systemctl disable ceph-osd@1",
"ceph osd out OSD_ID",
"ceph osd out 1",
"while ! ceph osd safe-to-destroy OSD_ID ; do sleep 60 ; done",
"while ! ceph osd safe-to-destroy 1 ; do sleep 60 ; done",
"systemctl kill ceph-osd@ OSD_ID",
"systemctl kill ceph-osd@1",
"mount | grep /var/lib/ceph/osd/ceph- OSD_ID",
"mount | grep /var/lib/ceph/osd/ceph-1",
"umount /var/lib/ceph/osd/ CLUSTER_NAME - OSD_ID",
"umount /var/lib/ceph/osd/ceph-1",
"ceph osd set noout ceph osd set norebalance",
"ceph osd destroy OSD_ID --yes-i-really-mean-it",
"ceph osd destroy 1 --yes-i-really-mean-it",
"lvremove /dev/ VOLUME_GROUP / LOGICAL_VOLUME vgremove VOLUME_GROUP pvremove /dev/ DEVICE",
"lvremove /dev/data-vg1/data-lv1 vgremove data-vg1 pvremove /dev/sdb",
"ceph-volume lvm zap DEVICE",
"ceph-volume lvm zap /dev/sdb",
"pvcreate /dev/ DEVICE vgcreate VOLUME_GROUP /dev/ DEVICE lvcreate -l SIZE -n LOGICAL_VOLUME VOLUME_GROUP",
"pvcreate /dev/sdb vgcreate data-vg1 /dev/sdb lvcreate -l 100%FREE -n data-lv1 data-vg1",
"pvcreate /dev/ DEVICE vgcreate VOLUME_GROUP_DATABASE /dev/ DEVICE lvcreate -Ll SIZE -n LOGICAL_VOLUME_DATABASE VOLUME_GROUP_DATABASE",
"pvcreate /dev/sdb vgcreate db-vg1 /dev/sdb lvcreate -l 100%FREE -n lv-db1 db-vg1",
"ceph-volume lvm create --bluestore --osd-id OSD_ID --data VOLUME_GROUP / LOGICAL_VOLUME --block.db VOLUME_GROUP_DATABASE / LOGICAL_VOLUME_DATABASE",
"ceph-volume lvm create --bluestore --osd-id 1 --data data-vg1/data-lv1 --block.db db-vg1/db-lv1",
"systemctl start ceph-osd@ OSD_ID systemctl enable ceph-osd@ OSD_ID",
"systemctl start ceph-osd@1 systemctl enable ceph-osd@1",
"ceph osd tree",
"ceph osd unset noout ceph osd unset norebalance",
"watch -n2 ceph -s",
"ceph -w",
"ceph osd set-require-min-compat-client luminous",
"ceph osd set-require-min-compat-client luminous --yes-i-really-mean-it",
"ceph features",
"ceph mgr module ls | more { \"always_on_modules\": [ \"balancer\", \"crash\", \"devicehealth\", \"orchestrator_cli\", \"progress\", \"rbd_support\", \"status\", \"volumes\" ], \"enabled_modules\": [ \"dashboard\", \"pg_autoscaler\", \"prometheus\" ],",
"ceph mgr module enable balancer",
"ceph balancer on",
"ceph balancer mode upmap",
"ceph balancer mode crush-compat",
"ceph balancer status",
"ceph balancer on",
"ceph balancer off",
"ceph config set mgr target_max_misplaced_ratio .07",
"ceph config set mgr mgr/balancer/sleep_interval 60",
"ceph config set mgr mgr/balancer/begin_time 0000",
"ceph config set mgr mgr/balancer/end_time 2359",
"ceph config set mgr mgr/balancer/begin_weekday 0",
"ceph config set mgr mgr/balancer/end_weekday 6",
"ceph config set mgr mgr/balancer/pool_ids 1,2,3",
"ceph balancer eval",
"ceph balancer eval POOL_NAME",
"ceph balancer eval rbd",
"ceph balancer eval-verbose",
"ceph balancer optimize PLAN_NAME",
"ceph balancer optimize rbd_123",
"ceph balancer show PLAN_NAME",
"ceph balancer show rbd_123",
"ceph balancer rm PLAN_NAME",
"ceph balancer rm rbd_123",
"ceph balancer status",
"ceph balancer eval PLAN_NAME",
"ceph balancer eval rbd_123",
"ceph balancer execute PLAN_NAME",
"ceph balancer execute rbd_123",
"ceph mgr module ls | more { \"always_on_modules\": [ \"balancer\", \"crash\", \"devicehealth\", \"orchestrator_cli\", \"progress\", \"rbd_support\", \"status\", \"volumes\" ], \"enabled_modules\": [ \"dashboard\", \"pg_autoscaler\", \"prometheus\" ],",
"ceph mgr module enable balancer",
"ceph balancer mode upmap",
"ceph balancer off",
"ceph balancer status { \"plans\": [], \"active\": false, \"last_optimize_started\": \"\", \"last_optimize_duration\": \"\", \"optimize_result\": \"\", \"mode\": \"upmap\" }",
"ceph osd set norebalance",
"ceph pg dump pgs_brief PG_STAT STATE UP UP_PRIMARY ACTING ACTING_PRIMARY dumped pgs_brief 7.270 active+remapped+backfilling [8,48,61] 8 [46,48,61] 46 7.1e7 active+remapped+backfilling [73,64,74] 73 [18,64,74] 18 7.1c1 active+remapped+backfilling [29,14,8] 29 [29,14,24] 29 7.17f active+remapped+backfilling [73,71,50] 73 [50,71,69] 50 7.16c active+remapped+backfilling [66,8,4] 66 [66,4,57] 66 7.13d active+remapped+backfilling [73,27,56] 73 [27,56,35] 27 7.130 active+remapped+backfilling [53,47,73] 53 [53,47,72] 53 9.e0 active+remapped+backfilling [8,75,14] 8 [14,75,58] 14 7.db active+remapped+backfilling [10,57,60] 10 [10,60,50] 10 9.7 active+remapped+backfilling [26,69,38] 26 [26,38,41] 26 7.4a active+remapped+backfilling [73,10,76] 73 [10,76,29] 10 9.9a active+remapped+backfilling [20,15,73] 20 [20,15,29] 20 7.ac active+remapped+backfilling [8,74,3] 8 [3,74,37] 3 9.c2 active+remapped+backfilling [57,75,7] 57 [4,75,7] 4 7.34d active+remapped+backfilling [23,46,73] 23 [23,46,56] 23 7.36a active+remapped+backfilling [40,32,8] 40 [40,32,44] 40",
"PG_STAT STATE UP UP_PRIMARY ACTING ACTING_PRIMARY dumped pgs_brief 7.ac active+remapped+backfilling [8,74,3] 8 [3,74,37] 3 ceph osd pg-upmap-items 7.ac 8 3 3 37 7.ac active+clean [3,74,37] 8 [3,74,37] 3",
"ceph pg dump pgs_brief PG_STAT STATE UP UP_PRIMARY ACTING ACTING_PRIMARY dumped pgs_brief 7.270 active+clean [8,48,61] 8 [46,48,61] 46 7.1e7 active+clean [73,64,74] 73 [18,64,74] 18 7.1c1 active+clean [29,14,8] 29 [29,14,24] 29 7.17f active+clean [73,71,50] 73 [50,71,69] 50 7.16c active+clean [66,8,4] 66 [66,4,57] 66 7.13d active+clean [73,27,56] 73 [27,56,35] 27 7.130 active+clean [53,47,73] 53 [53,47,72] 53 9.e0 active+clean [8,75,14] 8 [14,75,58] 14 7.db active+clean [10,57,60] 10 [10,60,50] 10 9.7 active+clean [26,69,38] 26 [26,38,41] 26 7.4a active+clean [73,10,76] 73 [10,76,29] 10 9.9a active+clean [20,15,73] 20 [20,15,29] 20 7.ac active+clean [3,74,37] 8 [3,74,37] 3 9.c2 active+clean [57,75,7] 57 [4,75,7] 4 7.34d active+clean [23,46,73] 23 [23,46,56] 23 7.36a active+clean [40,32,8] 40 [40,32,44] 40",
"ceph osd unset norebalance",
"ceph balancer on",
"ceph mgr module enable alerts",
"ceph mgr module ls | more { \"always_on_modules\": [ \"balancer\", \"crash\", \"devicehealth\", \"orchestrator_cli\", \"progress\", \"rbd_support\", \"status\", \"volumes\" ], \"enabled_modules\": [ \"alerts\", \"dashboard\", \"pg_autoscaler\", \"nfs\", \"prometheus\", \"restful\" ]",
"ceph config set mgr mgr/alerts/smtp_host SMTP_SERVER ceph config set mgr mgr/alerts/smtp_destination RECEIVER_EMAIL_ADDRESS ceph config set mgr mgr/alerts/smtp_sender SENDER_EMAIL_ADDRESS",
"ceph config set mgr mgr/alerts/smtp_host smtp.example.com ceph config set mgr mgr/alerts/smtp_destination [email protected] ceph config set mgr mgr/alerts/smtp_sender [email protected]",
"ceph config set mgr mgr/alerts/smtp_ssl false ceph config set mgr mgr/alerts/smtp_port PORT_NUMBER",
"ceph config set mgr mgr/alerts/smtp_ssl false ceph config set mgr mgr/alerts/smtp_port 587",
"ceph config set mgr mgr/alerts/smtp_user USERNAME ceph config set mgr mgr/alerts/smtp_password PASSWORD",
"ceph config set mgr mgr/alerts/smtp_user admin1234 ceph config set mgr mgr/alerts/smtp_password admin1234",
"ceph config set mgr mgr/alerts/smtp_from_name CLUSTER_NAME",
"ceph config set mgr mgr/alerts/smtp_from_name 'Ceph Cluster Test'",
"ceph config set mgr mgr/alerts/interval INTERVAL",
"ceph config set mgr mgr/alerts/interval \"5m\"",
"ceph alerts send",
"ceph config set mgr/crash/warn_recent_interval 0",
"ceph mgr module ls | more { \"always_on_modules\": [ \"balancer\", \"crash\", \"devicehealth\", \"orchestrator_cli\", \"progress\", \"rbd_support\", \"status\", \"volumes\" ], \"enabled_modules\": [ \"dashboard\", \"pg_autoscaler\", \"prometheus\" ]",
"ceph crash post -i meta",
"ceph crash ls",
"ceph crash ls-new",
"ceph crash ls-new",
"ceph crash stat 8 crashes recorded 8 older than 1 days old: 2021-05-20T08:30:14.533316Z_4ea88673-8db6-4959-a8c6-0eea22d305c2 2021-05-20T08:30:14.590789Z_30a8bb92-2147-4e0f-a58b-a12c2c73d4f5 2021-05-20T08:34:42.278648Z_6a91a778-bce6-4ef3-a3fb-84c4276c8297 2021-05-20T08:34:42.801268Z_e5f25c74-c381-46b1-bee3-63d891f9fc2d 2021-05-20T08:34:42.803141Z_96adfc59-be3a-4a38-9981-e71ad3d55e47 2021-05-20T08:34:42.830416Z_e45ed474-550c-44b3-b9bb-283e3f4cc1fe 2021-05-24T19:58:42.549073Z_b2382865-ea89-4be2-b46f-9a59af7b7a2d 2021-05-24T19:58:44.315282Z_1847afbc-f8a9-45da-94e8-5aef0738954e",
"ceph crash info CRASH_ID",
"ceph crash info 2021-05-24T19:58:42.549073Z_b2382865-ea89-4be2-b46f-9a59af7b7a2d { \"assert_condition\": \"session_map.sessions.empty()\", \"assert_file\": \"/builddir/build/BUILD/ceph-16.1.0-486-g324d7073/src/mon/Monitor.cc\", \"assert_func\": \"virtual Monitor::~Monitor()\", \"assert_line\": 287, \"assert_msg\": \"/builddir/build/BUILD/ceph-16.1.0-486-g324d7073/src/mon/Monitor.cc: In function 'virtual Monitor::~Monitor()' thread 7f67a1aeb700 time 2021-05-24T19:58:42.545485+0000\\n/builddir/build/BUILD/ceph-16.1.0-486-g324d7073/src/mon/Monitor.cc: 287: FAILED ceph_assert(session_map.sessions.empty())\\n\", \"assert_thread_name\": \"ceph-mon\", \"backtrace\": [ \"/lib64/libpthread.so.0(+0x12b30) [0x7f679678bb30]\", \"gsignal()\", \"abort()\", \"(ceph::__ceph_assert_fail(char const*, char const*, int, char const*)+0x1a9) [0x7f6798c8d37b]\", \"/usr/lib64/ceph/libceph-common.so.2(+0x276544) [0x7f6798c8d544]\", \"(Monitor::~Monitor()+0xe30) [0x561152ed3c80]\", \"(Monitor::~Monitor()+0xd) [0x561152ed3cdd]\", \"main()\", \"__libc_start_main()\", \"_start()\" ], \"ceph_version\": \"14.1.0-486.el8cp\", \"crash_id\": \"2021-05-24T19:58:42.549073Z_b2382865-ea89-4be2-b46f-9a59af7b7a2d\", \"entity_name\": \"mon.ceph-adm4\", \"os_id\": \"rhel\", \"os_name\": \"Red Hat Enterprise Linux\", \"os_version\": \"8.3 (Ootpa)\", \"os_version_id\": \"8.3\", \"process_name\": \"ceph-mon\", \"stack_sig\": \"957c21d558d0cba4cee9e8aaf9227b3b1b09738b8a4d2c9f4dc26d9233b0d511\", \"timestamp\": \"2021-05-24T19:58:42.549073Z\", \"utsname_hostname\": \"host02\", \"utsname_machine\": \"x86_64\", \"utsname_release\": \"4.18.0-240.15.1.el8_3.x86_64\", \"utsname_sysname\": \"Linux\", \"utsname_version\": \"#1 SMP Wed Feb 3 03:12:15 EST 2021\" }",
"ceph crash prune KEEP",
"ceph crash prune 60",
"ceph crash archive CRASH_ID",
"ceph crash archive 2021-05-24T19:58:42.549073Z_b2382865-ea89-4be2-b46f-9a59af7b7a2d",
"ceph crash archive-all",
"ceph crash rm CRASH_ID",
"ceph crash rm 2021-05-24T19:58:42.549073Z_b2382865-ea89-4be2-b46f-9a59af7b7a2d",
"ceph auth get client.PRIMARY_CLUSTER_NAME -o /etc/ceph/ceph.PRIMARY_CLUSTER_NAME.keyring",
"ceph auth get client.rbd-mirror.site-a -o /etc/ceph/ceph.client.rbd-mirror.site-a.keyring",
"[client.rbd-mirror.rbd-client-site-a] key = AQCbKbVg+E7POBAA7COSZCodvOrg2LWIFc9+3g== caps mds = \"allow *\" caps mgr = \"allow *\" caps mon = \"allow *\" caps osd = \"allow *\"",
"ceph auth import -i PATH_TO_KEYRING",
"ceph auth import -i /etc/ceph/ceph.client.rbd-mirror.rbd-client-site-a.keyring",
"systemctl list-units --all systemctl stop [email protected] systemctl disable [email protected] systemctl reset-failed [email protected] systemctl start [email protected] systemctl enable [email protected] systemctl status [email protected]",
"[rbdmirrors] ceph.client.rbd-mirror.rbd-client-site-a"
]
| https://docs.redhat.com/en/documentation/red_hat_ceph_storage/4/html/operations_guide/managing-the-storage-cluster-size |
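Taken together, the balancer commands in the list above form a short manual workflow. The following sketch strings them together with comments; the plan name myplan is an arbitrary placeholder, and whether automatic balancing must be disabled before building a manual plan can vary by release:

# Select the upmap mode and check the module status
ceph balancer mode upmap
ceph balancer status

# For a manual run, turn automatic balancing off and score the current PG distribution (lower is better)
ceph balancer off
ceph balancer eval

# Build an optimization plan, inspect it, and score the cluster as if the plan had been executed
ceph balancer optimize myplan
ceph balancer show myplan
ceph balancer eval myplan

# Apply the plan, then re-enable automatic balancing if desired
ceph balancer execute myplan
ceph balancer on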
6.2. Web UI: Using the Topology Graph to Manage Replication Topology | 6.2. Web UI: Using the Topology Graph to Manage Replication Topology Accessing the Topology Graph The topology graph in the web UI shows the relationships between the servers in the domain: Select IPA Server Topology Topology Graph . If you make any changes to the topology that are not immediately reflected in the graph, click Refresh . Customizing the Topology View You can move individual topology nodes by dragging the mouse: Figure 6.3. Moving Topology Graph Nodes You can zoom in and zoom out the topology graph using the mouse wheel: Figure 6.4. Zooming the Topology Graph You can move the canvas of the topology graph by holding the left mouse button: Figure 6.5. Moving the Topology Graph Canvas Interpreting the Topology Graph Servers joined in a domain replication agreement are connected by an orange arrow. Servers joined in a CA replication agreement are connected by a blue arrow. Topology graph example: recommended topology Figure 6.6, "Recommended Topology Example" shows one of the possible recommended topologies for four servers: each server is connected to at least two other servers, and more than one server is a CA master. Figure 6.6. Recommended Topology Example Topology graph example: discouraged topology In Figure 6.7, "Discouraged Topology Example: Single Point of Failure" , server1 is a single point of failure. All the other servers have replication agreements with this server, but not with any of the other servers. Therefore, if server1 fails, all the other servers will become isolated. Avoid creating topologies like this. Figure 6.7. Discouraged Topology Example: Single Point of Failure For details on topology recommendations, see Section 4.2, "Deployment Considerations for Replicas" . 6.2.1. Setting up Replication Between Two Servers In the topology graph, hover your mouse over one of the server nodes. Figure 6.8. Domain or CA Options Click on the domain or the ca part of the circle depending on what type of topology segment you want to create. A new arrow representing the new replication agreement appears under your mouse pointer. Move your mouse to the other server node, and click on it. Figure 6.9. Creating a New Segment In the Add Topology Segment window, click Add to confirm the properties of the new segment. IdM creates a new topology segment between the two servers, which joins them in a replication agreement. The topology graph now shows the updated replication topology: Figure 6.10. New Segment Created 6.2.2. Stopping Replication Between Two Servers Click on an arrow representing the replication agreement you want to remove. This highlights the arrow. Figure 6.11. Topology Segment Highlighted Click Delete . In the Confirmation window, click OK . IdM removes the topology segment between the two servers, which deletes their replication agreement. The topology graph now shows the updated replication topology: Figure 6.12. Topology Segment Deleted | null | https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/linux_domain_identity_authentication_and_policy_guide/managing-topology-graph-ui |
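The topology segments shown in the graph above can also be managed from the command line with the ipa topologysegment-* commands. This is a hedged sketch rather than a full procedure: the server host names and the segment name are placeholders, and the first argument (domain or ca) selects which replication suffix the segment belongs to.

# Create a domain replication segment (and therefore a replication agreement) between two servers
ipa topologysegment-add domain segment-server1-to-server2 --leftnode=server1.example.com --rightnode=server2.example.com

# List the existing segments for the domain suffix
ipa topologysegment-find domain

# Delete a segment, which removes the corresponding replication agreement
ipa topologysegment-del domain segment-server1-to-server2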
Chapter 3. Distribution of content in RHEL 8 | Chapter 3. Distribution of content in RHEL 8 3.1. Installation Red Hat Enterprise Linux 8 is installed using ISO images. Two types of ISO image are available for the AMD64, Intel 64-bit, 64-bit ARM, IBM Power Systems, and IBM Z architectures: Binary DVD ISO: A full installation image that contains the BaseOS and AppStream repositories and allows you to complete the installation without additional repositories. Note The Binary DVD ISO image is larger than 4.7 GB, and as a result, it might not fit on a single-layer DVD. A dual-layer DVD or USB key is recommended when using the Binary DVD ISO image to create bootable installation media. You can also use the Image Builder tool to create customized RHEL images. For more information about Image Builder, see the Composing a customized RHEL system image document. Boot ISO: A minimal boot ISO image that is used to boot into the installation program. This option requires access to the BaseOS and AppStream repositories to install software packages. The repositories are part of the Binary DVD ISO image. See the Interactively installing RHEL from installation media document for instructions on downloading ISO images, creating installation media, and completing a RHEL installation. For automated Kickstart installations and other advanced topics, see the Automatically installing RHEL document. 3.2. Repositories Red Hat Enterprise Linux 8 is distributed through two main repositories: BaseOS AppStream Both repositories are required for a basic RHEL installation, and are available with all RHEL subscriptions. Content in the BaseOS repository is intended to provide the core set of the underlying OS functionality that provides the foundation for all installations. This content is available in the RPM format and is subject to support terms similar to those in releases of RHEL. For a list of packages distributed through BaseOS, see the Package manifest . Content in the Application Stream repository includes additional user space applications, runtime languages, and databases in support of the varied workloads and use cases. Application Streams are available in the familiar RPM format, as an extension to the RPM format called modules , or as Software Collections. For a list of packages available in AppStream, see the Package manifest . In addition, the CodeReady Linux Builder repository is available with all RHEL subscriptions. It provides additional packages for use by developers. Packages included in the CodeReady Linux Builder repository are unsupported. For more information about RHEL 8 repositories, see the Package manifest . 3.3. Application Streams Red Hat Enterprise Linux 8 introduces the concept of Application Streams. Multiple versions of user space components are now delivered and updated more frequently than the core operating system packages. This provides greater flexibility to customize Red Hat Enterprise Linux without impacting the underlying stability of the platform or specific deployments. Components made available as Application Streams can be packaged as modules or RPM packages and are delivered through the AppStream repository in RHEL 8. Each Application Stream component has a given life cycle, either the same as RHEL 8 or shorter. For details, see Red Hat Enterprise Linux Life Cycle . Modules are collections of packages representing a logical unit: an application, a language stack, a database, or a set of tools. These packages are built, tested, and released together. 
Module streams represent versions of the Application Stream components. For example, several streams (versions) of the PostgreSQL database server are available in the postgresql module with the default postgresql:10 stream. Only one module stream can be installed on the system. Different versions can be used in separate containers. Detailed module commands are described in the Installing, managing, and removing user-space components document. For a list of modules available in AppStream, see the Package manifest . 3.4. Package management with YUM/DNF On Red Hat Enterprise Linux 8, software installation is handled by the YUM tool, which is based on the DNF technology. We deliberately adhere to usage of the yum term for consistency with previous major versions of RHEL. However, if you type dnf instead of yum, the command works as expected because yum is an alias to dnf for compatibility. For more details, see the following documentation: Installing, managing, and removing user-space components Considerations in adopting RHEL 8 | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/8/html/8.0_release_notes/Distribution-of-content-in-RHEL-8
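As a concrete illustration of the module streams and yum usage described above, the following sketch lists and installs a PostgreSQL stream and enables the CodeReady Linux Builder repository. The stream number and repository ID shown are the common x86_64 values; substitute the ones that match your system:

# Show the streams available for a module and which one is the default
yum module list postgresql

# Install a specific stream (only one stream of a module can be installed at a time)
yum module install postgresql:10

# Enable the unsupported CodeReady Linux Builder repository for additional developer packages
subscription-manager repos --enable codeready-builder-for-rhel-8-x86_64-rpms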
Chapter 2. Understanding API compatibility guidelines | Chapter 2. Understanding API compatibility guidelines Important This guidance does not cover layered OpenShift Container Platform offerings. 2.1. API compatibility guidelines Red Hat recommends that application developers adopt the following principles in order to improve compatibility with OpenShift Container Platform: Use APIs and components with support tiers that match the application's need. Build applications using the published client libraries where possible. Applications are only guaranteed to run correctly if they execute in an environment that is as new as the environment it was built to execute against. An application that was built for OpenShift Container Platform 4.14 is not guaranteed to function properly on OpenShift Container Platform 4.13. Do not design applications that rely on configuration files provided by system packages or other components. These files can change between versions unless the upstream community is explicitly committed to preserving them. Where appropriate, depend on any Red Hat provided interface abstraction over those configuration files in order to maintain forward compatibility. Direct file system modification of configuration files is discouraged, and users are strongly encouraged to integrate with an Operator provided API where available to avoid dual-writer conflicts. Do not depend on API fields prefixed with unsupported<FieldName> or annotations that are not explicitly mentioned in product documentation. Do not depend on components with shorter compatibility guarantees than your application. Do not perform direct storage operations on the etcd server. All etcd access must be performed via the api-server or through documented backup and restore procedures. Red Hat recommends that application developers follow the compatibility guidelines defined by Red Hat Enterprise Linux (RHEL). OpenShift Container Platform strongly recommends the following guidelines when building an application or hosting an application on the platform: Do not depend on a specific Linux kernel or OpenShift Container Platform version. Avoid reading from proc , sys , and debug file systems, or any other pseudo file system. Avoid using ioctls to directly interact with hardware. Avoid direct interaction with cgroups in order to not conflict with OpenShift Container Platform host-agents that provide the container execution environment. Note During the lifecycle of a release, Red Hat makes commercially reasonable efforts to maintain API and application operating environment (AOE) compatibility across all minor releases and z-stream releases. If necessary, Red Hat might make exceptions to this compatibility goal for critical impact security or other significant issues. 2.2. API compatibility exceptions The following are exceptions to compatibility in OpenShift Container Platform: RHEL CoreOS file system modifications not made with a supported Operator No assurances are made at this time that a modification made to the host operating file system is preserved across minor releases except for where that modification is made through the public interface exposed via a supported Operator, such as the Machine Config Operator or Node Tuning Operator. 
Modifications to cluster infrastructure in cloud or virtualized environments No assurances are made at this time that a modification to the cloud hosting environment that supports the cluster is preserved except for where that modification is made through a public interface exposed in the product or is documented as a supported configuration. Cluster infrastructure providers are responsible for preserving their cloud or virtualized infrastructure except for where they delegate that authority to the product through an API. Functional defaults between an upgraded cluster and a new installation No assurances are made at this time that a new installation of a product minor release will have the same functional defaults as a version of the product that was installed with a prior minor release and upgraded to the equivalent version. For example, future versions of the product may provision cloud infrastructure with different defaults than prior minor versions. In addition, different default security choices may be made in future versions of the product than those made in past versions of the product. Past versions of the product will forward upgrade, but preserve legacy choices where appropriate specifically to maintain backwards compatibility. Usage of API fields that have the prefix "unsupported" or undocumented annotations Select APIs in the product expose fields with the prefix unsupported<FieldName> . No assurances are made at this time that usage of this field is supported across releases or within a release. Product support can request a customer to specify a value in this field when debugging specific problems, but its usage is not supported outside of that interaction. Usage of annotations on objects that are not explicitly documented are not assured support across minor releases. API availability per product installation topology The OpenShift distribution will continue to evolve its supported installation topology, and not all APIs in one install topology will necessarily be included in another. For example, certain topologies may restrict read/write access to particular APIs if they are in conflict with the product installation topology or not include a particular API at all if not pertinent to that topology. APIs that exist in a given topology will be supported in accordance with the compatibility tiers defined above. 2.3. API compatibility common terminology 2.3.1. Application Programming Interface (API) An API is a public interface implemented by a software program that enables it to interact with other software. In OpenShift Container Platform, the API is served from a centralized API server and is used as the hub for all system interaction. 2.3.2. Application Operating Environment (AOE) An AOE is the integrated environment that executes the end-user application program. The AOE is a containerized environment that provides isolation from the host operating system (OS). At a minimum, AOE allows the application to run in an isolated manner from the host OS libraries and binaries, but still share the same OS kernel as all other containers on the host. The AOE is enforced at runtime and it describes the interface between an application and its operating environment. It includes intersection points between the platform, operating system and environment, with the user application including projection of downward API, DNS, resource accounting, device access, platform workload identity, isolation among containers, isolation between containers and host OS. 
The AOE does not include components that might vary by installation, such as Container Network Interface (CNI) plugin selection or extensions to the product such as admission hooks. Components that integrate with the cluster at a level below the container environment might be subjected to additional variation between versions. 2.3.3. Compatibility in a virtualized environment Virtual environments emulate bare-metal environments such that unprivileged applications that run on bare-metal environments will run, unmodified, in corresponding virtual environments. Virtual environments present simplified abstracted views of physical resources, so some differences might exist. 2.3.4. Compatibility in a cloud environment OpenShift Container Platform might choose to offer integration points with a hosting cloud environment via cloud provider specific integrations. The compatibility of these integration points is specific to the guarantee provided by the native cloud vendor and its intersection with the OpenShift Container Platform compatibility window. Where OpenShift Container Platform provides an integration with a cloud environment natively as part of the default installation, Red Hat develops against stable cloud API endpoints to provide commercially reasonable support with forward-looking compatibility that includes stable deprecation policies. Example areas of integration between the cloud provider and OpenShift Container Platform include, but are not limited to, dynamic volume provisioning, service load balancer integration, pod workload identity, dynamic management of compute, and infrastructure provisioned as part of initial installation. 2.3.5. Major, minor, and z-stream releases A Red Hat major release represents a significant step in the development of a product. Minor releases appear more frequently within the scope of a major release and represent deprecation boundaries that might impact future application compatibility. A z-stream release is an update to a minor release which provides a stream of continuous fixes to an associated minor release. API and AOE compatibility is never broken in a z-stream release except when this policy is explicitly overridden in order to respond to an unforeseen security impact. For example, in the release 4.13.2: 4 is the major release version 13 is the minor release version 2 is the z-stream release version 2.3.6. Extended user support (EUS) A minor release in an OpenShift Container Platform major release that has an extended support window for critical bug fixes. Users are able to migrate between EUS releases by incrementally adopting minor versions between EUS releases. It is important to note that the deprecation policy is defined across minor releases and not EUS releases. As a result, an EUS user might have to respond to a deprecation when migrating to a future EUS while sequentially upgrading through each minor release. 2.3.7. Developer Preview An optional product capability that is not officially supported by Red Hat, but is intended to provide a mechanism to explore early phase technology. By default, Developer Preview functionality is opt-in, and subject to removal at any time. Enabling a Developer Preview feature might render a cluster unsupportable, depending on the scope of the feature. If you are a Red Hat customer or partner and have feedback about these developer preview versions, file an issue by using the OpenShift Bugs tracker . Do not use the formal Red Hat support service ticket process.
You can read more about support handling in the following knowledge article . 2.3.8. Technology Preview An optional product capability that provides early access to upcoming product innovations to test functionality and provide feedback during the development process. The feature is not fully supported, might not be functionally complete, and is not intended for production use. Usage of a Technology Preview function requires explicit opt-in. Learn more about the Technology Preview Features Support Scope . | null | https://docs.redhat.com/en/documentation/openshift_container_platform/4.17/html/api_overview/compatibility-guidelines |
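One practical way to act on the compatibility guidelines above is to check which API groups, versions, and fields a cluster actually serves before depending on them. A small sketch using standard oc commands; the deployment resource is only an example:

# List the API groups and versions served by the cluster
oc api-versions

# List resources together with the API group each belongs to
oc api-resources

# Inspect the documented fields of a resource at a specific API version before relying on them
oc explain deployment --api-version=apps/v1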
Operator Guide | Operator Guide Red Hat build of Keycloak 24.0 Red Hat Customer Content Services | null | https://docs.redhat.com/en/documentation/red_hat_build_of_keycloak/24.0/html/operator_guide/index |
Chapter 7. References | Chapter 7. References 7.1. Red Hat Configuring and managing high availability clusters Support Policies for RHEL High Availability Clusters Support Policies for RHEL High Availability Clusters - Fencing/STONITH Support Policies for RHEL High Availability Clusters - Management of SAP S/4HANA Support Policies for RHEL High Availability Clusters - Management of SAP NetWeaver in a Cluster Red Hat HA Solutions for SAP HANA, S/4HANA and NetWeaver based SAP Applications How to enable the SAP HA Interface for SAP ABAP application server instances managed by the RHEL HA Add-On? How to manage standalone SAP Web Dispatcher instances using the RHEL HA Add-On The Systemd-Based SAP Startup Framework 7.2. SAP SAP Note 1552925 - Linux: High Availability Cluster Solutions SAP Note 1693245 - SAP HA Script Connector Library SAP Note 1908655 - Support details for Red Hat Enterprise Linux HA Add-On SAP Note 2630416 - Support for Standalone Enqueue Server 2 SAP Note 2641322 - Installation of ENSA2 and update from ENSA1 to ENSA2 when using the Red Hat HA solutions for SAP SAP Note 2772999 - Red Hat Enterprise Linux 8.x: Installation and Configuration Standalone Enqueue Server | SAP Help Portal Setting up Enqueue Replication Server Fail over | SAP Blogs High Availability with the Standalone Enqueue Server Evolution of ENSA2 and ERS2... | SAP Blogs | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux_for_sap_solutions/8/html/configuring_ha_clusters_to_manage_sap_netweaver_or_sap_s4hana_application_server_instances_using_the_rhel_ha_add-on/asmb_ref_v8-configuring-clusters-to-manage |
Planning your environment | Planning your environment OpenShift Dedicated 4 An overview of planning for Dedicated 4 Red Hat OpenShift Documentation Team | [
"ocm login --token <token> 1",
"{ \"Version\": \"2012-10-17\", \"Statement\": [ { \"Effect\": \"Allow\", \"Action\": [ \"ec2:*\" ], \"Resource\": [ \"*\" ] }, { \"Effect\": \"Allow\", \"Action\": [ \"autoscaling:*\" ], \"Resource\": [ \"*\" ] }, { \"Effect\": \"Allow\", \"Action\": [ \"s3:*\" ], \"Resource\": [ \"*\" ] }, { \"Effect\": \"Allow\", \"Action\": [ \"iam:*\" ], \"Resource\": [ \"*\" ] }, { \"Effect\": \"Allow\", \"Action\": [ \"elasticloadbalancing:*\" ], \"Resource\": [ \"*\" ] }, { \"Effect\": \"Allow\", \"Action\": [ \"cloudwatch:*\" ], \"Resource\": [ \"*\" ] }, { \"Effect\": \"Allow\", \"Action\": [ \"events:*\" ], \"Resource\": [ \"*\" ] }, { \"Effect\": \"Allow\", \"Action\": [ \"logs:*\" ], \"Resource\": [ \"*\" ] }, { \"Effect\": \"Allow\", \"Action\": [ \"support:*\" ], \"Resource\": [ \"*\" ] }, { \"Effect\": \"Allow\", \"Action\": [ \"kms:*\" ], \"Resource\": [ \"*\" ] }, { \"Effect\": \"Allow\", \"Action\": [ \"sts:*\" ], \"Resource\": [ \"*\" ] }, { \"Effect\": \"Allow\", \"Action\": [ \"tag:*\" ], \"Resource\": [ \"*\" ] }, { \"Effect\": \"Allow\", \"Action\": [ \"route53:*\" ], \"Resource\": [ \"*\" ] }, { \"Effect\": \"Allow\", \"Action\": [ \"servicequotas:ListServices\", \"servicequotas:GetRequestedServiceQuotaChange\", \"servicequotas:GetServiceQuota\", \"servicequotas:RequestServiceQuotaIncrease\", \"servicequotas:ListServiceQuotas\" ], \"Resource\": [ \"*\" ] } ] }",
"{ \"Version\": \"2012-10-17\", \"Statement\": [ { \"Action\": \"*\", \"Resource\": \"*\", \"Effect\": \"Allow\" } ] }",
"{ \"Version\": \"2012-10-17\", \"Statement\": [ { \"Effect\": \"Allow\", \"Action\": [ \"ec2:AttachVpnGateway\", \"ec2:DescribeVpnConnections\", \"ec2:AcceptVpcPeeringConnection\", \"ec2:DeleteVpcPeeringConnection\", \"ec2:DescribeVpcPeeringConnections\", \"ec2:CreateVpnConnectionRoute\", \"ec2:RejectVpcPeeringConnection\", \"ec2:DetachVpnGateway\", \"ec2:DeleteVpnConnectionRoute\", \"ec2:DeleteVpnGateway\", \"ec2:DescribeVpcs\", \"ec2:CreateVpnGateway\", \"ec2:ModifyVpcPeeringConnectionOptions\", \"ec2:DeleteVpnConnection\", \"ec2:CreateVpcPeeringConnection\", \"ec2:DescribeVpnGateways\", \"ec2:CreateVpnConnection\", \"ec2:DescribeRouteTables\", \"ec2:CreateTags\", \"ec2:CreateRoute\", \"directconnect:*\" ], \"Resource\": \"*\" } ] }",
"{ \"Version\": \"2012-10-17\", \"Statement\": [ { \"Effect\": \"Allow\", \"Action\": [ \"aws-portal:ViewAccount\", \"aws-portal:ViewBilling\" ], \"Resource\": \"*\" } ] }"
]
| https://docs.redhat.com/en/documentation/openshift_dedicated/4/html-single/planning_your_environment/index |
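One common way to put the IAM policy documents in the list above to use is to create them as managed policies and attach them to the IAM user that OpenShift Cluster Manager uses. This is a hedged sketch: the policy name, file names, account ID, and user name are placeholders, not prescribed values.

# Create a managed policy from one of the JSON documents above, saved locally as a file
aws iam create-policy --policy-name osd-admin-policy --policy-document file://osd-admin-policy.json

# Attach the policy to the IAM user used for the customer cloud subscription
aws iam attach-user-policy --user-name osdCcsAdmin --policy-arn arn:aws:iam::123456789012:policy/osd-admin-policy

# Then log in to OpenShift Cluster Manager with your API token
ocm login --token <token>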
Chapter 1. Introduction | Chapter 1. Introduction Before using this guide to configure JBoss EAP, it is assumed that the latest version of JBoss EAP has been downloaded and installed. For installation instructions, see the JBoss EAP Installation Guide . Important Since the installation location of JBoss EAP will vary between host machines, this guide refers to the installation location as EAP_HOME . The actual location of the JBoss EAP installation should be used instead of EAP_HOME when performing administrative tasks. | null | https://docs.redhat.com/en/documentation/red_hat_jboss_enterprise_application_platform/7.4/html/configuration_guide/introduction |
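For example, if JBoss EAP were installed under /opt/jboss-eap-7.4 (an assumed path; substitute your actual installation directory wherever the documentation writes EAP_HOME), starting and connecting to a standalone server would look like this:

# EAP_HOME is a documentation placeholder, not an environment variable set by the installer
EAP_HOME=/opt/jboss-eap-7.4

# Start the server with the default standalone configuration
$EAP_HOME/bin/standalone.sh

# In another terminal, connect to the running server with the management CLI
$EAP_HOME/bin/jboss-cli.sh --connect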
Chapter 5. View OpenShift Data Foundation Topology | Chapter 5. View OpenShift Data Foundation Topology The topology shows the mapped visualization of the OpenShift Data Foundation storage cluster at various abstraction levels and also lets you interact with these layers. The view also shows how the various elements compose the Storage cluster altogether. Procedure On the OpenShift Web Console, navigate to Storage Data Foundation Topology . The view shows the storage cluster and the zones inside it. You can see the nodes depicted by circular entities within the zones, which are indicated by dotted lines. The label of each item or resource contains basic information such as status and health or indication for alerts. Choose a node to view node details on the right-hand panel. You can also access resources or deployments within a node by clicking on the search/preview decorator icon. To view deployment details Click the preview decorator on a node. A modal window appears above the node that displays all of the deployments associated with that node along with their statuses. Click the Back to main view button in the modal's upper left corner to close it and return to the main view. Select a specific deployment to see more information about it. All relevant data is shown in the side panel. Click the Resources tab to view the pod information. This tab provides a deeper understanding of the problems and offers granularity that aids in better troubleshooting. Click the pod links to view the pod information page on OpenShift Container Platform. The link opens in a new window. | null | https://docs.redhat.com/en/documentation/red_hat_openshift_data_foundation/4.17/html/deploying_openshift_data_foundation_using_amazon_web_services/viewing-odf-topology_mcg-verify
Chapter 94. MavenArtifact schema reference | Chapter 94. MavenArtifact schema reference Used in: Plugin The type property is a discriminator that distinguishes use of the MavenArtifact type from JarArtifact , TgzArtifact , ZipArtifact , OtherArtifact . It must have the value maven for the type MavenArtifact . Property Property type Description repository string Maven repository to download the artifact from. Applicable to the maven artifact type only. group string Maven group id. Applicable to the maven artifact type only. artifact string Maven artifact id. Applicable to the maven artifact type only. version string Maven version number. Applicable to the maven artifact type only. insecure boolean By default, connections using TLS are verified to check they are secure. The server certificate used must be valid, trusted, and contain the server name. By setting this option to true , all TLS verification is disabled and the artifacts will be downloaded, even when the server is considered insecure. type string Must be maven . | null | https://docs.redhat.com/en/documentation/red_hat_streams_for_apache_kafka/2.7/html/streams_for_apache_kafka_api_reference/type-mavenartifact-reference |
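The following sketch shows how these properties fit together in a plugin artifact of type maven inside a KafkaConnect build section. The connector coordinates, image names, and cluster addresses are illustrative placeholders only, and a push secret for the output registry is assumed to exist:

# Apply a KafkaConnect resource whose build pulls a connector plugin from a Maven repository
oc apply -f - <<'EOF'
apiVersion: kafka.strimzi.io/v1beta2
kind: KafkaConnect
metadata:
  name: my-connect
spec:
  replicas: 1
  bootstrapServers: my-cluster-kafka-bootstrap:9092
  build:
    output:
      type: docker
      image: my-registry.example.com/my-org/my-connect:latest
      pushSecret: my-registry-credentials
    plugins:
      - name: my-plugin
        artifacts:
          - type: maven
            repository: https://repo1.maven.org/maven2
            group: org.apache.camel.kafkaconnector
            artifact: camel-http-kafka-connector
            version: 3.18.2
EOF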
Chapter 2. Serving models on the single-model serving platform | Chapter 2. Serving models on the single-model serving platform For deploying large models such as large language models (LLMs), Red Hat OpenShift AI includes a single model serving platform that is based on the KServe component. Because each model is deployed from its own model server, the single model serving platform helps you to deploy, monitor, scale, and maintain large models that require increased resources. 2.1. About the single-model serving platform For deploying large models such as large language models (LLMs), OpenShift AI includes a single-model serving platform that is based on the KServe component. Because each model is deployed on its own model server, the single-model serving platform helps you to deploy, monitor, scale, and maintain large models that require increased resources. 2.2. Components KServe : A Kubernetes custom resource definition (CRD) that orchestrates model serving for all types of models. KServe includes model-serving runtimes that implement the loading of given types of model servers. KServe also handles the lifecycle of the deployment object, storage access, and networking setup. Red Hat OpenShift Serverless : A cloud-native development model that allows for serverless deployments of models. OpenShift Serverless is based on the open source Knative project. Red Hat OpenShift Service Mesh : A service mesh networking layer that manages traffic flows and enforces access policies. OpenShift Service Mesh is based on the open source Istio project. 2.3. Installation options To install the single-model serving platform, you have the following options: Automated installation If you have not already created a ServiceMeshControlPlane or KNativeServing resource on your OpenShift cluster, you can configure the Red Hat OpenShift AI Operator to install KServe and configure its dependencies. For more information about automated installation, see Configuring automated installation of KServe . Manual installation If you have already created a ServiceMeshControlPlane or KNativeServing resource on your OpenShift cluster, you cannot configure the Red Hat OpenShift AI Operator to install KServe and configure its dependencies. In this situation, you must install KServe manually. For more information about manual installation, see Manually installing KServe . 2.4. Authorization You can add Authorino as an authorization provider for the single-model serving platform. Adding an authorization provider allows you to enable token authentication for models that you deploy on the platform, which ensures that only authorized parties can make inference requests to the models. To add Authorino as an authorization provider on the single-model serving platform, you have the following options: If automated installation of the single-model serving platform is possible on your cluster, you can include Authorino as part of the automated installation process. If you need to manually install the single-model serving platform, you must also manually configure Authorino. For guidance on choosing an installation option for the single-model serving platform, see Installation options . 2.5. Monitoring You can configure monitoring for the single-model serving platform and use Prometheus to scrape metrics for each of the pre-installed model-serving runtimes. 2.6. Model-serving runtimes You can serve models on the single-model serving platform by using model-serving runtimes. 
The configuration of a model-serving runtime is defined by the ServingRuntime and InferenceService custom resource definitions (CRDs). 2.6.1. ServingRuntime The ServingRuntime CRD creates a serving runtime, an environment for deploying and managing a model. It creates the templates for pods that dynamically load and unload models of various formats and also exposes a service endpoint for inferencing requests. A ServingRuntime definition, such as the vLLM ServingRuntime for KServe, includes various flags, environment variables, and command-line arguments. In particular, it specifies: the recommended accelerator to use with the runtime; the name with which the serving runtime is displayed; the endpoint and port used by Prometheus to scrape metrics for monitoring; the path where the model files are stored in the runtime container; the model name, passed through the {{.Name}} template variable into the runtime container specification (the {{.Name}} variable maps to the spec.predictor.name field in the InferenceService metadata object); the entrypoint command that starts the runtime container; the runtime container image used by the serving runtime, which differs depending on the type of accelerator used; whether the runtime is used for single-model serving; and the model formats supported by the runtime. 2.6.2. InferenceService The InferenceService CRD creates a server or inference service that processes inference queries, passes them to the model, and then returns the inference output. The inference service also performs the following actions: Specifies the location and format of the model. Specifies the serving runtime used to serve the model. Enables the passthrough route for gRPC or REST inference. Defines HTTP or gRPC endpoints for the deployed model. When you deploy a model (for example, a granite model) with the vLLM runtime, OpenShift AI generates a corresponding InferenceService YAML configuration file. Additional resources Serving Runtimes 2.7. Supported model-serving runtimes OpenShift AI includes several preinstalled model-serving runtimes. You can use preinstalled model-serving runtimes to start serving models without modifying or defining the runtime yourself. You can also add a custom runtime to support a model. For help adding a custom runtime, see Adding a custom model-serving runtime for the single-model serving platform . Table 2.1.
Model-serving runtimes Name Description Exported model format Caikit Text Generation Inference Server (Caikit-TGIS) ServingRuntime for KServe (1) A composite runtime for serving models in the Caikit format Caikit Text Generation Caikit Standalone ServingRuntime for KServe (2) A runtime for serving models in the Caikit embeddings format for embeddings tasks Caikit Embeddings OpenVINO Model Server A scalable, high-performance runtime for serving models that are optimized for Intel architectures PyTorch, TensorFlow, OpenVINO IR, PaddlePaddle, MXNet, Caffe, Kaldi Text Generation Inference Server (TGIS) Standalone ServingRuntime for KServe (3) A runtime for serving TGI-enabled models PyTorch Model Formats vLLM ServingRuntime for KServe A high-throughput and memory-efficient inference and serving runtime for large language models Supported models vLLM ServingRuntime with Gaudi accelerators support for KServe A high-throughput and memory-efficient inference and serving runtime that supports Intel Gaudi accelerators Supported models vLLM ROCm ServingRuntime for KServe A high-throughput and memory-efficient inference and serving runtime that supports AMD GPU accelerators Supported models The composite Caikit-TGIS runtime is based on Caikit and Text Generation Inference Server (TGIS) . To use this runtime, you must convert your models to Caikit format. For an example, see Converting Hugging Face Hub models to Caikit format in the caikit-tgis-serving repository. The Caikit Standalone runtime is based on Caikit NLP . To use this runtime, you must convert your models to the Caikit embeddings format. For an example, see Tests for text embedding module . Text Generation Inference Server (TGIS) is based on an early fork of Hugging Face TGI . Red Hat will continue to develop the standalone TGIS runtime to support TGI models. If a model is incompatible in the current version of OpenShift AI, support might be added in a future version. In the meantime, you can also add your own custom runtime to support a TGI model. For more information, see Adding a custom model-serving runtime for the single-model serving platform . Table 2.2. Deployment requirements Name Default protocol Additonal protocol Model mesh support Single node OpenShift support Deployment mode Caikit Text Generation Inference Server (Caikit-TGIS) ServingRuntime for KServe REST gRPC No Yes Raw and serverless Caikit Standalone ServingRuntime for KServe REST gRPC No Yes Raw and serverless OpenVINO Model Server REST None Yes Yes Raw and serverless Text Generation Inference Server (TGIS) Standalone ServingRuntime for KServe gRPC None No Yes Raw and serverless vLLM ServingRuntime for KServe REST None No Yes Raw and serverless vLLM ServingRuntime with Gaudi accelerators support for KServe REST None No Yes Raw and serverless vLLM ROCm ServingRuntime for KServe REST None No Yes Raw and serverless Additional resources Inference endpoints 2.8. Tested and verified model-serving runtimes Tested and verified runtimes are community versions of model-serving runtimes that have been tested and verified against specific versions of OpenShift AI. Red Hat tests the current version of a tested and verified runtime each time there is a new version of OpenShift AI. If a new version of a tested and verified runtime is released in the middle of an OpenShift AI release cycle, it will be tested and verified in an upcoming release. A list of the tested and verified runtimes and compatible versions is available in the OpenShift AI release notes . 
Note Tested and verified runtimes are not directly supported by Red Hat. You are responsible for ensuring that you are licensed to use any tested and verified runtimes that you add, and for correctly configuring and maintaining them. For more information, see Tested and verified runtimes in OpenShift AI . Table 2.3. Model-serving runtimes Name Description Exported model format NVIDIA Triton Inference Server An open-source inference-serving software for fast and scalable AI in applications. TensorRT, TensorFlow, PyTorch, ONNX, OpenVINO, Python, RAPIDS FIL, and more Table 2.4. Deployment requirements Name Default protocol Additonal protocol Model mesh support Single node OpenShift support Deployment mode NVIDIA Triton Inference Server gRPC REST Yes Yes Raw and serverless Additional resources Inference endpoints 2.9. Inference endpoints These examples show how to use inference endpoints to query the model. Note If you enabled token authentication when deploying the model, add the Authorization header and specify a token value. 2.9.1. Caikit TGIS ServingRuntime for KServe :443/api/v1/task/text-generation :443/api/v1/task/server-streaming-text-generation Example command 2.9.2. Caikit Standalone ServingRuntime for KServe If you are serving multiple models, you can query /info/models or :443 caikit.runtime.info.InfoService/GetModelsInfo to view a list of served models. REST endpoints /api/v1/task/embedding /api/v1/task/embedding-tasks /api/v1/task/sentence-similarity /api/v1/task/sentence-similarity-tasks /api/v1/task/rerank /api/v1/task/rerank-tasks /info/models /info/version /info/runtime gRPC endpoints :443 caikit.runtime.Nlp.NlpService/EmbeddingTaskPredict :443 caikit.runtime.Nlp.NlpService/EmbeddingTasksPredict :443 caikit.runtime.Nlp.NlpService/SentenceSimilarityTaskPredict :443 caikit.runtime.Nlp.NlpService/SentenceSimilarityTasksPredict :443 caikit.runtime.Nlp.NlpService/RerankTaskPredict :443 caikit.runtime.Nlp.NlpService/RerankTasksPredict :443 caikit.runtime.info.InfoService/GetModelsInfo :443 caikit.runtime.info.InfoService/GetRuntimeInfo Note By default, the Caikit Standalone Runtime exposes REST endpoints. To use gRPC protocol, manually deploy a custom Caikit Standalone ServingRuntime. For more information, see Adding a custom model-serving runtime for the single-model serving platform . An example manifest is available in the caikit-tgis-serving GitHub repository . REST gRPC 2.9.3. TGIS Standalone ServingRuntime for KServe :443 fmaas.GenerationService/Generate :443 fmaas.GenerationService/GenerateStream Note To query the endpoint for the TGIS standalone runtime, you must also download the files in the proto directory of the OpenShift AI text-generation-inference repository. Example command 2.9.4. OpenVINO Model Server /v2/models/<model-name>/infer Example command 2.9.5. vLLM ServingRuntime for KServe :443/version :443/docs :443/v1/models :443/v1/chat/completions :443/v1/completions :443/v1/embeddings :443/tokenize :443/detokenize Note The vLLM runtime is compatible with the OpenAI REST API. For a list of models that the vLLM runtime supports, see Supported models . To use the embeddings inference endpoint in vLLM, you must use an embeddings model that the vLLM supports. You cannot use the embeddings endpoint with generative models. For more information, see Supported embeddings models in vLLM . As of vLLM v0.5.5, you must provide a chat template while querying a model using the /v1/chat/completions endpoint. 
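For example, a chat completion request to the /v1/chat/completions endpoint listed above might look like the following sketch; the host, model name, and token are placeholders:

# Query the OpenAI-compatible chat endpoint of a deployed model
curl https://<inference-endpoint-host>:443/v1/chat/completions \
  -H "Authorization: Bearer <token>" \
  -H "Content-Type: application/json" \
  -d '{
        "model": "<model-name>",
        "messages": [{"role": "user", "content": "What is KServe?"}]
      }'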
If your model does not include a predefined chat template, you can use the chat-template command-line parameter to specify a chat template in your custom vLLM runtime, as shown in the example. Replace <CHAT_TEMPLATE> with the path to your template. You can use the chat templates that are available as .jinja files here or with the vLLM image under /app/data/template . For more information, see Chat templates . As indicated by the paths shown, the single-model serving platform uses the HTTPS port of your OpenShift router (usually port 443) to serve external API requests. Example command 2.9.6. vLLM ServingRuntime with Gaudi accelerators support for KServe See vLLM ServingRuntime for KServe . 2.9.7. vLLM ROCm ServingRuntime for KServe See vLLM ServingRuntime for KServe . 2.9.8. NVIDIA Triton Inference Server REST endpoints v2/models/[/versions/<model_version>]/infer v2/models/<model_name>[/versions/<model_version>] v2/health/ready v2/health/live v2/models/<model_name>[/versions/]/ready v2 Note ModelMesh does not support the following REST endpoints: v2/health/live v2/health/ready v2/models/<model_name>[/versions/]/ready Example command gRPC endpoints :443 inference.GRPCInferenceService/ModelInfer :443 inference.GRPCInferenceService/ModelReady :443 inference.GRPCInferenceService/ModelMetadata :443 inference.GRPCInferenceService/ServerReady :443 inference.GRPCInferenceService/ServerLive :443 inference.GRPCInferenceService/ServerMetadata Example command 2.9.9. Additional resources Text Generation Inference Server (TGIS) Caikit API documentation Caikit NLP GitHub project OpenVINO KServe-compatible REST API documentation OpenAI API documentation Open Inference Protocol Supported model-serving runtimes . 2.10. About KServe deployment modes By default, you can deploy models on the single-model serving platform with KServe by using Red Hat OpenShift Serverless , which is a cloud-native development model that allows for serverless deployments of models. OpenShift Serverless is based on the open source Knative project. In addition, serverless mode is dependent on the Red Hat OpenShift Serverless Operator. Alternatively, you can use raw deployment mode, which is not dependent on the Red Hat OpenShift Serverless Operator. With raw deployment mode, you can deploy models with Kubernetes resources, such as Deployment , Service , Ingress , and Horizontal Pod Autoscaler . Important Deploying a machine learning model using KServe raw deployment mode is a Limited Availability feature. Limited Availability means that you can install and receive support for the feature only with specific approval from the Red Hat AI Business Unit. Without such approval, the feature is unsupported. In addition, this feature is only supported on Self-Managed deployments of single node OpenShift. There are both advantages and disadvantages to using each of these deployment modes: 2.10.1. Serverless mode Advantages: Enables autoscaling based on request volume: Resources scale up automatically when receiving incoming requests. Optimizes resource usage and maintains performance during peak times. Supports scale down to and from zero using Knative: Allows resources to scale down completely when there are no incoming requests. Saves costs by not running idle resources. Disadvantages: Has customization limitations: Serverless is limited to Knative, such as when mounting multiple volumes. Dependency on Knative for scaling: Introduces additional complexity in setup and management compared to traditional scaling methods. 2.10.2. 
Raw deployment mode Advantages: Enables deployment with Kubernetes resources, such as Deployment , Service , Ingress , and Horizontal Pod Autoscaler : Provides full control over Kubernetes resources, allowing for detailed customization and configuration of deployment settings. Unlocks Knative limitations, such as being unable to mount multiple volumes: Beneficial for applications requiring complex configurations or multiple storage mounts. Disadvantages: Does not support automatic scaling: Does not support automatic scaling down to zero resources when idle. Might result in higher costs during periods of low traffic. Requires manual management of scaling. 2.11. Deploying models on single node OpenShift using KServe raw deployment mode You can deploy a machine learning model by using KServe raw deployment mode on single node OpenShift. Raw deployment mode offers several advantages over Knative, such as the ability to mount multiple volumes. Important Deploying a machine learning model using KServe raw deployment mode on single node OpenShift is a Limited Availability feature. Limited Availability means that you can install and receive support for the feature only with specific approval from the Red Hat AI Business Unit. Without such approval, the feature is unsupported. Prerequisites You have logged in to Red Hat OpenShift AI. You have cluster administrator privileges for your OpenShift cluster. You have created an OpenShift cluster that has a node with at least 4 CPUs and 16 GB memory. You have installed the Red Hat OpenShift AI (RHOAI) Operator. You have installed the OpenShift command-line interface (CLI). For more information about installing the OpenShift command-line interface (CLI), see Getting started with the OpenShift CLI . You have installed KServe. You have access to S3-compatible object storage. For the model that you want to deploy, you know the associated folder path in your S3-compatible object storage bucket. To use the Caikit-TGIS runtime, you have converted your model to Caikit format. For an example, see Converting Hugging Face Hub models to Caikit format in the caikit-tgis-serving repository. If you want to use graphics processing units (GPUs) with your model server, you have enabled GPU support in OpenShift AI. If you use NVIDIA GPUs, see Enabling NVIDIA GPUs . If you use AMD GPUs, see AMD GPU integration . To use the vLLM runtime, you have enabled GPU support in OpenShift AI and have installed and configured the Node Feature Discovery operator on your cluster. For more information, see Installing the Node Feature Discovery operator and Enabling NVIDIA GPUs . Procedure Open a command-line terminal and log in to your OpenShift cluster as cluster administrator: By default, OpenShift uses a service mesh for network traffic management. Because KServe raw deployment mode does not require a service mesh, disable Red Hat OpenShift Service Mesh: Enter the following command to disable Red Hat OpenShift Service Mesh: In the YAML editor, change the value of managementState for the serviceMesh component to Removed as shown: Save the changes. Create a project: For information about creating projects, see Working with projects . Create a data science cluster: In the Red Hat OpenShift web console Administrator view, click Operators Installed Operators and then click the Red Hat OpenShift AI Operator. Click the Data Science Cluster tab. Click the Create DataScienceCluster button. In the Configure via field, click the YAML view radio button. 
In the spec.components section of the YAML editor, configure the kserve component as shown: Click Create . Create a secret file: At your command-line terminal, create a YAML file to contain your secret and add the following YAML code: Important If you are deploying a machine learning model in a disconnected deployment, add serving.kserve.io/s3-verifyssl: '0' to the metadata.annotations section. Save the file with the file name secret.yaml . Apply the secret.yaml file: Create a service account: Create a YAML file to contain your service account and add the following YAML code: For information about service accounts, see Understanding and creating service accounts . Save the file with the file name serviceAccount.yaml . Apply the serviceAccount.yaml file: Create a YAML file for the serving runtime to define the container image that will serve your model predictions. Here is an example using the OpenVino Model Server: If you are using the OpenVINO Model Server example above, ensure that you insert the correct values required for any placeholders in the YAML code. Save the file with an appropriate file name. Apply the file containing your serving run time: Create an InferenceService custom resource (CR). Create a YAML file to contain the InferenceService CR. Using the OpenVINO Model Server example used previously, here is the corresponding YAML code: In your YAML code, ensure the following values are set correctly: serving.kserve.io/deploymentMode must contain the value RawDeployment . modelFormat must contain the value for your model format, such as onnx . storageUri must contain the value for your model s3 storage directory, for example s3://<bucket_name>/<model_directory_path> . runtime must contain the value for the name of your serving runtime, for example, ovms-runtime . Save the file with an appropriate file name. Apply the file containing your InferenceService CR: Verify that all pods are running in your cluster: Example output: After you verify that all pods are running, forward the service port to your local machine: Ensure that you replace <namespace> , <pod-name> , <local_port> , <remote_port> (this is the model server port, for example, 8888 ) with values appropriate to your deployment. Verification Use your preferred client library or tool to send requests to the localhost inference URL. 2.12. Deploying models by using the single-model serving platform On the single-model serving platform, each model is deployed on its own model server. This helps you to deploy, monitor, scale, and maintain large models that require increased resources. Important If you want to use the single-model serving platform to deploy a model from S3-compatible storage that uses a self-signed SSL certificate, you must install a certificate authority (CA) bundle on your OpenShift cluster. For more information, see Working with certificates (OpenShift AI Self-Managed) or Working with certificates (OpenShift AI Self-Managed in a disconnected environment). 2.12.1. Enabling the single-model serving platform When you have installed KServe, you can use the Red Hat OpenShift AI dashboard to enable the single-model serving platform. You can also use the dashboard to enable model-serving runtimes for the platform. Prerequisites You have logged in to OpenShift AI as a user with OpenShift AI administrator privileges. You have installed KServe. 
Your cluster administrator has not edited the OpenShift AI dashboard configuration to disable the ability to select the single-model serving platform, which uses the KServe component. For more information, see Dashboard configuration options . Procedure Enable the single-model serving platform as follows: In the left menu, click Settings Cluster settings . Locate the Model serving platforms section. To enable the single-model serving platform for projects, select the Single-model serving platform checkbox. Click Save changes . Enable preinstalled runtimes for the single-model serving platform as follows: In the left menu of the OpenShift AI dashboard, click Settings Serving runtimes . The Serving runtimes page shows preinstalled runtimes and any custom runtimes that you have added. For more information about preinstalled runtimes, see Supported runtimes . Set the runtime that you want to use to Enabled . The single-model serving platform is now available for model deployments. 2.12.2. Adding a custom model-serving runtime for the single-model serving platform A model-serving runtime adds support for a specified set of model frameworks and the model formats supported by those frameworks. You can use the pre-installed runtimes that are included with OpenShift AI. You can also add your own custom runtimes if the default runtimes do not meet your needs. For example, if the TGIS runtime does not support a model format that is supported by Hugging Face Text Generation Inference (TGI) , you can create a custom runtime to add support for the model. As an administrator, you can use the OpenShift AI interface to add and enable a custom model-serving runtime. You can then choose the custom runtime when you deploy a model on the single-model serving platform. Note Red Hat does not provide support for custom runtimes. You are responsible for ensuring that you are licensed to use any custom runtimes that you add, and for correctly configuring and maintaining them. Prerequisites You have logged in to OpenShift AI as a user with OpenShift AI administrator privileges. You have built your custom runtime and added the image to a container image repository such as Quay . Procedure From the OpenShift AI dashboard, click Settings > Serving runtimes . The Serving runtimes page opens and shows the model-serving runtimes that are already installed and enabled. To add a custom runtime, choose one of the following options: To start with an existing runtime (for example, TGIS Standalone ServingRuntime for KServe ), click the action menu (...) to the existing runtime and then click Duplicate . To add a new custom runtime, click Add serving runtime . In the Select the model serving platforms this runtime supports list, select Single-model serving platform . In the Select the API protocol this runtime supports list, select REST or gRPC . Optional: If you started a new runtime (rather than duplicating an existing one), add your code by choosing one of the following options: Upload a YAML file Click Upload files . In the file browser, select a YAML file on your computer. The embedded YAML editor opens and shows the contents of the file that you uploaded. Enter YAML code directly in the editor Click Start from scratch . Enter or paste YAML code directly in the embedded editor. Note In many cases, creating a custom runtime will require adding new or custom parameters to the env section of the ServingRuntime specification. Click Add . The Serving runtimes page opens and shows the updated list of runtimes that are installed. 
Observe that the custom runtime that you added is automatically enabled. The API protocol that you specified when creating the runtime is shown. Optional: To edit your custom runtime, click the action menu (...) and select Edit . Verification The custom model-serving runtime that you added is shown in an enabled state on the Serving runtimes page. 2.12.3. Adding a tested and verified model-serving runtime for the single-model serving platform In addition to preinstalled and custom model-serving runtimes, you can also use Red Hat tested and verified model-serving runtimes such as the NVIDIA Triton Inference Server to support your needs. For more information about Red Hat tested and verified runtimes, see Tested and verified runtimes for Red Hat OpenShift AI . You can use the Red Hat OpenShift AI dashboard to add and enable the NVIDIA Triton Inference Server runtime for the single-model serving platform. You can then choose the runtime when you deploy a model on the single-model serving platform. Prerequisites You have logged in to OpenShift AI as a user with OpenShift AI administrator privileges. Procedure From the OpenShift AI dashboard, click Settings > Serving runtimes . The Serving runtimes page opens and shows the model-serving runtimes that are already installed and enabled. Click Add serving runtime . In the Select the model serving platforms this runtime supports list, select Single-model serving platform . In the Select the API protocol this runtime supports list, select REST or gRPC . Click Start from scratch . If you selected the REST API protocol, enter or paste the following YAML code directly in the embedded editor. If you selected the gRPC API protocol, enter or paste the following YAML code directly in the embedded editor. In the metadata.name field, make sure that the name of the runtime that you are adding does not match the name of a runtime that you have already added. Optional: To use a custom display name for the runtime that you are adding, add a metadata.annotations.openshift.io/display-name field and specify a value, as shown in the following example: Note If you do not configure a custom display name for your runtime, OpenShift AI shows the value of the metadata.name field. Click Create . The Serving runtimes page opens and shows the updated list of runtimes that are installed. Observe that the runtime that you added is automatically enabled. The API protocol that you specified when creating the runtime is shown. Optional: To edit the runtime, click the action menu (...) and select Edit . Verification The model-serving runtime that you added is shown in an enabled state on the Serving runtimes page. Additional resources Tested and verified model-serving runtimes 2.12.4. Deploying models on the single-model serving platform When you have enabled the single-model serving platform, you can enable a preinstalled or custom model-serving runtime and start to deploy models on the platform. Note Text Generation Inference Server (TGIS) is based on an early fork of Hugging Face TGI . Red Hat will continue to develop the standalone TGIS runtime to support TGI models. If a model does not work in the current version of OpenShift AI, support might be added in a future version. In the meantime, you can also add your own custom runtime to support a TGI model. For more information, see Adding a custom model-serving runtime for the single-model serving platform . Prerequisites You have logged in to Red Hat OpenShift AI.
If you are using OpenShift AI groups, you are part of the user group or admin group (for example, rhoai-users or rhoai-admins ) in OpenShift. You have installed KServe. You have enabled the single-model serving platform. To enable token authentication and external model routes for deployed models, you have added Authorino as an authorization provider. For more information, see Adding an authorization provider for the single-model serving platform . You have created a data science project. You have access to S3-compatible object storage. For the model that you want to deploy, you know the associated folder path in your S3-compatible object storage bucket. To use the Caikit-TGIS runtime, you have converted your model to Caikit format. For an example, see Converting Hugging Face Hub models to Caikit format in the caikit-tgis-serving repository. If you want to use graphics processing units (GPUs) with your model server, you have enabled GPU support in OpenShift AI. If you use NVIDIA GPUs, see Enabling NVIDIA GPUs . If you use AMD GPUs, see AMD GPU integration . To use the vLLM runtime, you have enabled GPU support in OpenShift AI and have installed and configured the Node Feature Discovery operator on your cluster. For more information, see Installing the Node Feature Discovery operator and Enabling NVIDIA GPUs To use the vLLM ServingRuntime with Gaudi accelerators support for KServe runtime, you have enabled support for hybrid processing units (HPUs) in OpenShift AI. This includes installing the Intel Gaudi AI accelerator operator and configuring an accelerator profile. For more information, see Setting up Gaudi for OpenShift and Working with accelerators . To use the vLLM ROCm ServingRuntime for KServe runtime, you have enabled support for AMD graphic processing units (GPUs) in OpenShift AI. This includes installing the AMD GPU operator and configuring an accelerator profile. For more information, see Deploying the AMD GPU operator on OpenShift and Working with accelerators . Note In OpenShift AI 2.18, Red Hat supports NVIDIA GPU, Intel Gaudi, and AMD GPU accelerators for model serving. To deploy RHEL AI models: You have enabled the vLLM ServingRuntime for KServe runtime. You have downloaded the model from the Red Hat container registry and uploaded it to S3-compatible object storage. Procedure In the left menu, click Data Science Projects . The Data Science Projects page opens. Click the name of the project that you want to deploy a model in. A project details page opens. Click the Models tab. Perform one of the following actions: If you see a Single-model serving platform tile, click Deploy model on the tile. If you do not see any tiles, click the Deploy model button. The Deploy model dialog opens. In the Model deployment name field, enter a unique name for the model that you are deploying. In the Serving runtime field, select an enabled runtime. From the Model framework (name - version) list, select a value. In the Number of model server replicas to deploy field, specify a value. From the Model server size list, select a value. The following options are only available if you have enabled accelerator support on your cluster and created an accelerator profile: From the Accelerator list, select an accelerator. If you selected an accelerator in the preceding step, specify the number of accelerators to use in the Number of accelerators field. 
Optional: In the Model route section, select the Make deployed models available through an external route checkbox to make your deployed models available to external clients. To require token authentication for inference requests to the deployed model, perform the following actions: Select Require token authentication . In the Service account name field, enter the service account name that the token will be generated for. To add an additional service account, click Add a service account and enter another service account name. To specify the location of your model, perform one of the following sets of actions: To use an existing connection Select Existing connection . From the Name list, select a connection that you previously defined. In the Path field, enter the folder path that contains the model in your specified data source. Important The OpenVINO Model Server runtime has specific requirements for how you specify the model path. For more information, see known issue RHOAIENG-3025 in the OpenShift AI release notes. To use a new connection To define a new connection that your model can access, select New connection . In the Add connection modal, select a Connection type . The S3 compatible object storage and URI options are pre-installed connection types. Additional options might be available if your OpenShift AI administrator added them. The Add connection form opens with fields specific to the connection type that you selected. Fill in the connection detail fields. Important If your connection type is an S3-compatible object storage, you must provide the folder path that contains your data file. The OpenVINO Model Server runtime has specific requirements for how you specify the model path. For more information, see known issue RHOAIENG-3025 in the OpenShift AI release notes. (Optional) Customize the runtime parameters in the Configuration parameters section: Modify the values in Additional serving runtime arguments to define how the deployed model behaves. Modify the values in Additional environment variables to define variables in the model's environment. The Configuration parameters section shows predefined serving runtime parameters, if any are available. Note Do not modify the port or model serving runtime arguments, because they require specific values to be set. Overwriting these parameters can cause the deployment to fail. Click Deploy . Verification Confirm that the deployed model is shown on the Models tab for the project, and on the Model Serving page of the dashboard with a checkmark in the Status column. 2.12.5. Setting a timeout for KServe When deploying large models or using node autoscaling with KServe, the operation may time out before a model is deployed because the default progress-deadline that KNative Serving sets is 10 minutes. If a pod using KNative Serving takes longer than 10 minutes to deploy, the pod might be automatically marked as failed. This can happen if you are deploying large models that take longer than 10 minutes to pull from S3-compatible object storage or if you are using node autoscaling to reduce the consumption of GPU nodes. To resolve this issue, you can set a custom progress-deadline in the KServe InferenceService for your application. Prerequisites You have namespace edit access for your OpenShift cluster. Procedure Log in to the OpenShift console as a cluster administrator. Select the project where you have deployed the model. In the Administrator perspective, click Home Search . From the Resources dropdown menu, search for InferenceService . 
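The InferenceService that you open contains a predictor section similar to the following sketch; the service name, model format, runtime, and the 30m timeout are example values only, and the next step describes the annotation to modify.

```yaml
# Sketch of an InferenceService with a custom progress deadline (example values).
apiVersion: serving.kserve.io/v1beta1
kind: InferenceService
metadata:
  name: my-inference-service                         # placeholder name
spec:
  predictor:
    annotations:
      serving.knative.dev/progress-deadline: 30m     # raise this above the 10-minute default
    model:
      modelFormat:
        name: onnx                                   # placeholder model format
      runtime: kserve-ovms                           # placeholder runtime name
      storageUri: s3://<bucket>/<model_path>         # placeholder model location
```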
Under spec.predictor.annotations , modify the serving.knative.dev/progress-deadline with the new timeout: Note Ensure that you set the progress-deadline on the spec.predictor.annotations level, so that the KServe InferenceService can copy the progress-deadline back to the KNative Service object. 2.12.6. Customizing the parameters of a deployed model-serving runtime You might need additional parameters beyond the default ones to deploy specific models or to enhance an existing model deployment. In such cases, you can modify the parameters of an existing runtime to suit your deployment needs. Note Customizing the parameters of a runtime only affects the selected model deployment. Prerequisites You have logged in to OpenShift AI as a user with OpenShift AI administrator privileges. You have deployed a model on the single-model serving platform. Procedure From the OpenShift AI dashboard, click Model Serving in the left menu. The Deployed models page opens. Click the action menu (...) to the name of the model you want to customize and select Edit . The Configuration parameters section shows predefined serving runtime parameters, if any are available. Customize the runtime parameters in the Configuration parameters section: Modify the values in Additional serving runtime arguments to define how the deployed model behaves. Modify the values in Additional environment variables to define variables in the model's environment. Note Do not modify the port or model serving runtime arguments, because they require specific values to be set. Overwriting these parameters can cause the deployment to fail. After you are done customizing the runtime parameters, click Redeploy to save and deploy the model with your changes. Verification Confirm that the deployed model is shown on the Models tab for the project, and on the Model Serving page of the dashboard with a checkmark in the Status column. Confirm that the arguments and variables that you set appear in spec.predictor.model.args and spec.predictor.model.env by one of the following methods: Checking the InferenceService YAML from the OpenShift Console. Using the following command in the OpenShift CLI: 2.12.7. Customizable model serving runtime parameters You can modify the parameters of an existing model serving runtime to suit your deployment needs. For more information about parameters for each of the supported serving runtimes, see the following table: Serving runtime Resource NVIDIA Triton Inference Server NVIDIA Triton Inference Server: Model Parameters Caikit Text Generation Inference Server (Caikit-TGIS) ServingRuntime for KServe Caikit NLP: Configuration TGIS: Model configuration Caikit Standalone ServingRuntime for KServe Caikit NLP: Configuration OpenVINO Model Server OpenVINO Model Server Features: Dynamic Input Parameters Text Generation Inference Server (TGIS) Standalone ServingRuntime for KServe TGIS: Model configuration vLLM ServingRuntime for KServe vLLM: Engine Arguments OpenAI Compatible Server Additional resources Customizing the parameters of a deployed model serving runtime 2.12.8. Using OCI containers for model storage As an alternative to storing a model in an S3 bucket or URI, you can upload models to Open Container Initiative (OCI) containers. Using OCI containers for model storage can help you: Reduce startup times by avoiding downloading the same model multiple times. Reduce disk space usage by reducing the number of models downloaded locally. Improve model performance by allowing pre-fetched images. 
Using OCI containers for model storage involves the following tasks: Storing a model in an OCI image Deploying a model from an OCI image Important Using OCI containers for model storage is currently available in Red Hat OpenShift AI 2.18 as a Technology Preview feature. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope . 2.12.8.1. Storing a model in an OCI image You can store a model in an OCI image. The following procedure uses the example of storing a MobileNet v2-7 model in ONNX format. Prerequisites You have a model in the ONNX format. The example in this procedure uses the MobileNet v2-7 model in ONNX format. You have installed the Podman tool. Procedure In a terminal window on your local machine, create a temporary directory for storing both the model and the support files that you need to create the OCI image: Create a models folder inside the temporary directory: Note This example command specifies the subdirectory 1 because OpenVINO requires numbered subdirectories for model versioning. If you are not using OpenVINO, you do not need to create the 1 subdirectory to use OCI container images. Download the model and support files: Use the tree command to confirm that the model files are located in the directory structure as expected: The tree command should return a directory structure similar to the following example: Create a Docker file named Containerfile : Note Specify a base image that provides a shell. In the following example, ubi9-micro is the base container image. You cannot specify an empty image that does not provide a shell, such as scratch , because KServe uses the shell to ensure the model files are accessible to the model server. Change the ownership of the copied model files and grant read permissions to the root group to ensure that the model server can access the files. OpenShift runs containers with a random user ID and the root group ID. Use podman build commands to create the OCI container image and upload it to a registry. The following commands use Quay as the registry. Note If your repository is private, ensure that you are authenticated to the registry before uploading your container image. 2.12.8.2. Deploying a model stored in an OCI image You can deploy a model that is stored in an OCI image. The following procedure uses the example of deploying a MobileNet v2-7 model in ONNX format, stored in an OCI image on an OpenVINO model server. Note By default in KServe, models are exposed outside the cluster and not protected with authentication. Prerequisites You have stored a model in an OCI image as described in Storing a model in an OCI image . If you want to deploy a model that is stored in a private OCI repository, you must configure an image pull secret. For more information about creating an image pull secret, see Using image pull secrets . You are logged in to your OpenShift cluster. 
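Before you begin the procedure, the following sketch shows the general shape of an InferenceService that loads a model from a public OCI image; the registry path, runtime name, and model format are placeholders that you adapt in the steps that follow.

```yaml
# Sketch only: an InferenceService whose storageUri points at an OCI image.
apiVersion: serving.kserve.io/v1beta1
kind: InferenceService
metadata:
  name: sample-isvc-using-oci                        # placeholder name
spec:
  predictor:
    model:
      runtime: kserve-ovms                           # ServingRuntime created in the procedure
      modelFormat:
        name: onnx
      storageUri: oci://quay.io/<user_name>/<repository_name>:<tag_name>
```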
Procedure Create a project to deploy the model: Use the OpenShift AI Applications project kserve-ovms template to create a ServingRuntime resource and configure the OpenVINO model server in the new project: Verify that the ServingRuntime named kserve-ovms is created: The command should return output similar to the following: Create an InferenceService YAML resource, depending on whether the model is stored from a private or a public OCI repository: For a model stored in a public OCI repository, create an InferenceService YAML file with the following values, replacing <user_name> , <repository_name> , and <tag_name> with values specific to your environment: For a model stored in a private OCI repository, create an InferenceService YAML file that specifies your pull secret in the spec.predictor.imagePullSecrets field, as shown in the following example: After you create the InferenceService resource, KServe deploys the model stored in the OCI image referred to by the storageUri field. Verification Check the status of the deployment: The command should return output that includes information, such as the URL of the deployed model and its readiness state. 2.12.9. Using accelerators with vLLM OpenShift AI includes support for NVIDIA, AMD and Intel Gaudi accelerators. OpenShift AI also includes preinstalled model-serving runtimes that provide accelerator support. 2.12.9.1. NVIDIA GPUs You can serve models with NVIDIA graphics processing units (GPUs) by using the vLLM ServingRuntime for KServe runtime. To use the runtime, you must enable GPU support in OpenShift AI. This includes installing and configuring the Node Feature Discovery operator on your cluster. For more information, see Installing the Node Feature Discovery operator and Enabling NVIDIA GPUs . 2.12.9.2. Intel Gaudi accelerators You can serve models with Intel Gaudi accelerators by using the vLLM ServingRuntime with Gaudi accelerators support for KServe runtime. To use the runtime, you must enable hybrid processing support (HPU) support in OpenShift AI. This includes installing the Intel Gaudi AI accelerator operator and configuring an accelerator profile. For more information, see Setting up Gaudi for OpenShift and Working with accelerator profiles . For information about recommended vLLM parameters, environment variables, supported configurations and more, see vLLM with Intel(R) Gaudi(R) AI Accelerators . Note Warm-up is a model initialization and performance optimization step that is useful for reducing cold-start delays and first-inference latency. Depending on the model size, warm-up can lead to longer model loading times. While highly recommended in production environments to avoid performance limitations, you can choose to skip warm-up for non-production environments to reduce model loading times and accelerate model development and testing cycles. To skip warm-up, follow the steps described in Customizing the parameters of a deployed model-serving runtime to add the following environment variable in the Configuration parameters section of your model deployment: 2.12.9.3. AMD GPUs You can serve models with AMD GPUs by using the vLLM ROCm ServingRuntime for KServe runtime. To use the runtime, you must enable support for AMD graphic processing units (GPUs) in OpenShift AI. This includes installing the AMD GPU operator and configuring an accelerator profile. For more information, see Deploying the AMD GPU operator on OpenShift and Working with accelerator profiles . Additional resources Supported model-serving runtimes 2.12.10. 
Customizing the vLLM model-serving runtime In certain cases, you may need to add additional flags or environment variables to the vLLM ServingRuntime for KServe runtime to deploy a family of LLMs. The following procedure describes customizing the vLLM model-serving runtime to deploy a Llama, Granite or Mistral model. Prerequisites You have logged in to OpenShift AI as a user with OpenShift AI administrator privileges. For Llama model deployment, you have downloaded a meta-llama-3 model to your object storage. For Granite model deployment, you have downloaded a granite-7b-instruct or granite-20B-code-instruct model to your object storage. For Mistral model deployment, you have downloaded a mistral-7B-Instruct-v0.3 model to your object storage. You have enabled the vLLM ServingRuntime for KServe runtime. You have enabled GPU support in OpenShift AI and have installed and configured the Node Feature Discovery operator on your cluster. For more information, see Installing the Node Feature Discovery operator and Enabling NVIDIA GPUs Procedure Follow the steps to deploy a model as described in Deploying models on the single-model serving platform . In the Serving runtime field, select vLLM ServingRuntime for KServe . If you are deploying a meta-llama-3 model, add the following arguments under Additional serving runtime arguments in the Configuration parameters section: 1 Sets the backend to multiprocessing for distributed model workers 2 Sets the maximum context length of the model to 6144 tokens If you are deploying a granite-7B-instruct model, add the following arguments under Additional serving runtime arguments in the Configuration parameters section: 1 Sets the backend to multiprocessing for distributed model workers If you are deploying a granite-20B-code-instruct model, add the following arguments under Additional serving runtime arguments in the Configuration parameters section: 1 Sets the backend to multiprocessing for distributed model workers 2 Distributes inference across 4 GPUs in a single node 3 Sets the maximum context length of the model to 6448 tokens If you are deploying a mistral-7B-Instruct-v0.3 model, add the following arguments under Additional serving runtime arguments in the Configuration parameters section: 1 Sets the backend to multiprocessing for distributed model workers 2 Sets the maximum context length of the model to 15344 tokens Click Deploy . Verification Confirm that the deployed model is shown on the Models tab for the project, and on the Model Serving page of the dashboard with a checkmark in the Status column. For granite models, use the following example command to verify API requests to your deployed model: Additional resources vLLM: Engine Arguments 2.13. Deploying models by using multiple GPU nodes Deploy models across multiple GPU nodes to handle large models, such as large language models (LLMs). This procedure shows you how to serve models on Red Hat OpenShift AI across multiple GPU nodes using the vLLM serving framework. Multi-node inferencing uses the vllm-multinode-runtime custom runtime. The vllm-multinode-runtime runtime uses the same image as the VLLM ServingRuntime for KServe runtime and also includes information necessary for multi-GPU inferencing. Important Deploying models by using multiple GPU nodes is currently available in Red Hat OpenShift AI as a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. 
Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope Prerequisites You have cluster administrator privileges for your OpenShift cluster. You have downloaded and installed the OpenShift command-line interface (CLI). For more information, see Installing the OpenShift CLI . You have enabled the operators for your GPU type, such as Node Feature Discovery Operator, NVIDIA GPU Operator. For more information about enabling accelerators, see Enabling accelerators . You are using an NVIDIA GPU ( nvidia.com/gpu ). You have specified the GPU type through either the ServingRuntime or InferenceService . If the GPU type specified in the ServingRuntime differs from what is set in the InferenceService , both GPU types are assigned to the resource and can cause errors. You have enabled KServe on your cluster. You have only one head pod in your setup. Do not adjust the replica count using the min_replicas or max_replicas settings in the InferenceService . Creating additional head pods can cause them to be excluded from the Ray cluster. You have a persistent volume claim (PVC) set up and configured for ReadWriteMany (RWX) access mode. Procedure In a terminal window, if you are not already logged in to your OpenShift cluster as a cluster administrator, log in to the OpenShift CLI as shown in the following example: Select or create a namespace for deploying the model. For example, you can create the kserve-demo namespace by running the following command: From the namespace where you would like to deploy the model, create a PVC for model storage. Create a storage class using Filesystem volumeMode . Use this storage class for your PVC. The storage size must be larger than the size of the model files on disk. For example: Note If you have already configured a PVC, you can skip this step. To download the model to the PVC, modify the sample YAML provided: 1 The chmod operation is permitted only if your pod is running as root. Remove`chmod -R 777` from the arguments if you are not running the pod as root. 2 7 Specify the path to the model. 3 The value for containers.image , located in your InferenceService . To access this value, run the following command: oc get configmap inferenceservice-config -n redhat-ods-operator -oyaml | grep kserve-storage-initializer: 4 The access key ID to your S3 bucket. 5 The secret access key to your S3 bucket. 6 The name of your S3 bucket. 8 The endpoint to your S3 bucket. 9 The region for your S3 bucket if using an AWS S3 bucket. If using other S3-compatible storage, such as ODF or Minio, you can remove the AWS_DEFAULT_REGION environment variable. 10 If you encounter SSL errors, change S3_VERIFY_SSL to false . Create the vllm-multinode-runtime custom runtime: Deploy the model using the following InferenceService configuration: The following configuration can be added to the InferenceService : workerSpec.tensorParallelSize : Determines how many GPUs are used per node. The GPU type count in both the head and worker node deployment resources is updated automatically. Ensure that the value of workerSpec.tensorParallelSize is at least 1 . workerSpec.pipelineParallelSize : Determines how many nodes are used to balance the model in deployment. 
This variable represents the total number of nodes, including both the head and worker nodes. Ensure that the value of workerSpec.pipelineParallelSize is at least 2 . Do not modify this value in production environments. Note You may need to specify additional arguments, depending on your environment and model size. Verification To confirm that you have set up your environment to deploy models on multiple GPU nodes, check the GPU resource status, the InferenceService status, the ray cluster status, and send a request to the model. Check the GPU resource status: Retrieve the pod names for the head and worker nodes: Sample response Confirm that the model loaded properly by checking the values of <1> and <2>. If the model did not load, the value of these fields is 0MiB . Verify the status of your InferenceService using the following command: NOTE: In the Technology Preview, you can only use port forwarding for inferencing. Sample response Send a request to the model to confirm that the model is available for inference: 2.14. Making inference requests to models deployed on the single-model serving platform When you deploy a model by using the single-model serving platform, the model is available as a service that you can access using API requests. This enables you to return predictions based on data inputs. To use API requests to interact with your deployed model, you must know the inference endpoint for the model. In addition, if you secured your inference endpoint by enabling token authentication, you must know how to access your authentication token so that you can specify this in your inference requests. 2.14.1. Accessing the authentication token for a deployed model If you secured your model inference endpoint by enabling token authentication, you must know how to access your authentication token so that you can specify it in your inference requests. Prerequisites You have logged in to Red Hat OpenShift AI. If you are using OpenShift AI groups, you are part of the user group or admin group (for example, rhoai-users or rhoai-admins ) in OpenShift. You have deployed a model by using the single-model serving platform. Procedure From the OpenShift AI dashboard, click Data Science Projects . The Data Science Projects page opens. Click the name of the project that contains your deployed model. A project details page opens. Click the Models tab. In the Models and model servers list, expand the section for your model. Your authentication token is shown in the Token authentication section, in the Token secret field. Optional: To copy the authentication token for use in an inference request, click the Copy button ( ) to the token value. 2.14.2. Accessing the inference endpoint for a deployed model To make inference requests to your deployed model, you must know how to access the inference endpoint that is available. For a list of paths to use with the supported runtimes and example commands, see Inference endpoints . Prerequisites You have logged in to Red Hat OpenShift AI. If you are using OpenShift AI groups, you are part of the user group or admin group (for example, rhoai-users or rhoai-admins ) in OpenShift. You have deployed a model by using the single-model serving platform. If you enabled token authentication for your deployed model, you have the associated token value. Procedure From the OpenShift AI dashboard, click Model Serving . The inference endpoint for the model is shown in the Inference endpoint field. 
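As an illustration, after you add the appropriate path for your runtime (described in the next step), a request to a model served with an OpenAI-compatible runtime such as vLLM might look like the following sketch; the host, deployment name, prompt, and token are placeholders.

```bash
# Hedged example: send a completion request to a vLLM-served model with token authentication.
curl -s https://<inference_endpoint>/v1/completions \
  -H "Authorization: Bearer <token>" \
  -H "Content-Type: application/json" \
  -d '{"model": "<model_deployment_name>", "prompt": "What is OpenShift AI?", "max_tokens": 64}'
```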
Depending on what action you want to perform with the model (and if the model supports that action), copy the inference endpoint and then add a path to the end of the URL. Use the endpoint to make API requests to your deployed model. Additional resources Text Generation Inference Server (TGIS) Caikit API documentation Caikit NLP GitHub project OpenVINO KServe-compatible REST API documentation OpenAI API documentation 2.15. Configuring monitoring for the single-model serving platform The single-model serving platform includes metrics for supported runtimes of the KServe component. KServe does not generate its own metrics and relies on the underlying model-serving runtimes to provide them. The set of available metrics for a deployed model depends on its model-serving runtime. In addition to runtime metrics for KServe, you can also configure monitoring for OpenShift Service Mesh. The OpenShift Service Mesh metrics help you to understand dependencies and traffic flow between components in the mesh. Prerequisites You have cluster administrator privileges for your OpenShift cluster. You have created OpenShift Service Mesh and Knative Serving instances and installed KServe. You have downloaded and installed the OpenShift command-line interface (CLI). See Installing the OpenShift CLI . You are familiar with creating a config map for monitoring a user-defined workflow. You will perform similar steps in this procedure. You are familiar with enabling monitoring for user-defined projects in OpenShift. You will perform similar steps in this procedure. You have assigned the monitoring-rules-view role to users that will monitor metrics. Procedure In a terminal window, if you are not already logged in to your OpenShift cluster as a cluster administrator, log in to the OpenShift CLI as shown in the following example: Define a ConfigMap object in a YAML file called uwm-cm-conf.yaml with the following contents: The user-workload-monitoring-config object configures the components that monitor user-defined projects. Observe that the retention time is set to the recommended value of 15 days. Apply the configuration to create the user-workload-monitoring-config object. Define another ConfigMap object in a YAML file called uwm-cm-enable.yaml with the following contents: The cluster-monitoring-config object enables monitoring for user-defined projects. Apply the configuration to create the cluster-monitoring-config object. Create ServiceMonitor and PodMonitor objects to monitor metrics in the service mesh control plane as follows: Create an istiod-monitor.yaml YAML file with the following contents: Deploy the ServiceMonitor CR in the specified istio-system namespace. You see the following output: Create an istio-proxies-monitor.yaml YAML file with the following contents: Deploy the PodMonitor CR in the specified istio-system namespace. You see the following output: 2.16. Viewing model-serving runtime metrics for the single-model serving platform When a cluster administrator has configured monitoring for the single-model serving platform, non-admin users can use the OpenShift web console to view model-serving runtime metrics for the KServe component. Prerequisites A cluster administrator has configured monitoring for the single-model serving platform. You have been assigned the monitoring-rules-view role. For more information, see Granting users permission to configure monitoring for user-defined projects . You are familiar with how to monitor project metrics in the OpenShift web console. 
For more information, see Monitoring your project metrics . Procedure Log in to the OpenShift web console. Switch to the Developer perspective. In the left menu, click Observe . As described in Monitoring your project metrics , use the web console to run queries for model-serving runtime metrics. You can also run queries for metrics that are related to OpenShift Service Mesh. Some examples are shown. The following query displays the number of successful inference requests over a period of time for a model deployed with the vLLM runtime: Note Certain vLLM metrics are available only after an inference request is processed by a deployed model. To generate and view these metrics, you must first make an inference request to the model. The following query displays the number of successful inference requests over a period of time for a model deployed with the standalone TGIS runtime: The following query displays the number of successful inference requests over a period of time for a model deployed with the Caikit Standalone runtime: The following query displays the number of successful inference requests over a period of time for a model deployed with the OpenVINO Model Server runtime: Additional resources OVMS metrics TGIS metrics vLLM metrics 2.17. Monitoring model performance In the single-model serving platform, you can view performance metrics for a specific model that is deployed on the platform. 2.17.1. Viewing performance metrics for a deployed model You can monitor the following metrics for a specific model that is deployed on the single-model serving platform: Number of requests - The number of requests that have failed or succeeded for a specific model. Average response time (ms) - The average time it takes a specific model to respond to requests. CPU utilization (%) - The percentage of the CPU limit per model replica that is currently utilized by a specific model. Memory utilization (%) - The percentage of the memory limit per model replica that is utilized by a specific model. You can specify a time range and a refresh interval for these metrics to help you determine, for example, when the peak usage hours are and how the model is performing at a specified time. Prerequisites You have installed Red Hat OpenShift AI. A cluster admin has enabled user workload monitoring (UWM) for user-defined projects on your OpenShift cluster. For more information, see Enabling monitoring for user-defined projects and Configuring monitoring for the single-model serving platform . You have logged in to Red Hat OpenShift AI. If you are using OpenShift AI groups, you are part of the user group or admin group (for example, rhoai-users or rhoai-admins ) in OpenShift. The following dashboard configuration options are set to the default values as shown: For more information, see Dashboard configuration options . You have deployed a model on the single-model serving platform by using a preinstalled runtime. Note Metrics are only supported for models deployed by using a preinstalled model-serving runtime or a custom runtime that is duplicated from a preinstalled runtime. Procedure From the OpenShift AI dashboard navigation menu, click Data Science Projects . The Data Science Projects page opens. Click the name of the project that contains the data science models that you want to monitor. In the project details page, click the Models tab. Select the model that you are interested in. On the Endpoint performance tab, set the following options: Time range - Specifies how long to track the metrics. 
You can select one of these values: 1 hour, 24 hours, 7 days, and 30 days. Refresh interval - Specifies how frequently the graphs on the metrics page are refreshed (to show the latest data). You can select one of these values: 15 seconds, 30 seconds, 1 minute, 5 minutes, 15 minutes, 30 minutes, 1 hour, 2 hours, and 1 day. Scroll down to view data graphs for number of requests, average response time, CPU utilization, and memory utilization. Verification The Endpoint performance tab shows graphs of metrics for the model. 2.17.2. Deploying a Grafana metrics dashboard You can deploy a Grafana metrics dashboard for User Workload Monitoring (UWM) to monitor performance and resource usage metrics for models deployed on the single-model serving platform. You can create a Kustomize overlay, similar to this example . Use the overlay to deploy preconfigured metrics dashboards for models deployed with OpenVino Model Server (OVMS) and vLLM. Prerequisites You have cluster admin privileges for your OpenShift cluster. A cluster admin has enabled user workload monitoring (UWM) for user-defined projects on your OpenShift cluster. For more information, see Enabling monitoring for user-defined projects and Configuring monitoring for the single-model serving platform . You have installed the OpenShift command-line interface (CLI). For more information, see Installing the OpenShift CLI . You have created an overlay to deploy a Grafana instance, similar to this example . Note To view GPU metrics, you must enable the NVIDIA GPU monitoring dashboard as described in Enabling the GPU monitoring dashboard . The GPU monitoring dashboard provides a comprehensive view of GPU utilization, memory usage, and other metrics for your GPU nodes. Procedure In a terminal window, log in to the OpenShift CLI as a cluster administrator. If you have not already created the overlay to install the Grafana operator and metrics dashboards, refer to the RHOAI UWM repository to create it. Install the Grafana instance and metrics dashboards on your OpenShift cluster with the overlay that you created. Replace <overlay-name> with the name of your overlay. Retrieve the URL of the Grafana instance. Replace <namespace> with the namespace that contains the Grafana instance. You see output similar to the following example. Use the URL to access the Grafana instance: Verification You can access the preconfigured dashboards available for KServe, vLLM and OVMS on the Grafana instance. 2.18. Optimizing model-serving runtimes You can optionally enhance the preinstalled model-serving runtimes available in OpenShift AI to leverage additional benefits and capabilities, such as optimized inferencing, reduced latency, and fine-tuned resource allocation. 2.18.1. Enabling speculative decoding and multi-modal inferencing You can configure the vLLM ServingRuntime for KServe runtime to use speculative decoding, a parallel processing technique to optimize inferencing time for large language models (LLMs). You can also configure the runtime to support inferencing for vision-language models (VLMs). VLMs are a subset of multi-modal models that integrate both visual and textual data. The following procedure describes customizing the vLLM ServingRuntime for KServe runtime for speculative decoding and multi-modal inferencing. Prerequisites You have logged in to OpenShift AI as a user with OpenShift AI administrator privileges. 
If you are using the vLLM model-serving runtime for speculative decoding with a draft model, you have stored the original model and the speculative model in the same folder within your S3-compatible object storage. Procedure Follow the steps to deploy a model as described in Deploying models on the single-model serving platform . In the Serving runtime field, select the vLLM ServingRuntime for KServe runtime. To configure the vLLM model-serving runtime for speculative decoding by matching n-grams in the prompt, add the following arguments under Additional serving runtime arguments in the Configuration parameters section: Replace <NUM_SPECULATIVE_TOKENS> and <NGRAM_PROMPT_LOOKUP_MAX> with your own values. Note Inferencing throughput varies depending on the model used for speculating with n-grams. To configure the vLLM model-serving runtime for speculative decoding with a draft model, add the following arguments under Additional serving runtime arguments in the Configuration parameters section: Replace <path_to_speculative_model> and <path_to_original_model> with the paths to the speculative model and original model on your S3-compatible object storage. Replace <NUM_SPECULATIVE_TOKENS> with your own value. To configure the vLLM model-serving runtime for multi-modal inferencing, add the following arguments under Additional serving runtime arguments in the Configuration parameters section: Note Only use the --trust-remote-code argument with models from trusted sources. Click Deploy . Verification If you have configured the vLLM model-serving runtime for speculative decoding, use the following example command to verify API requests to your deployed model: If you have configured the vLLM model-serving runtime for multi-modal inferencing, use the following example command to verify API requests to the vision-language model (VLM) that you have deployed: Additional resources vLLM: Engine Arguments OpenAI Compatible Server 2.19. Performance optimization and tuning 2.19.1. Determining GPU requirements for LLM-powered applications There are several factors to consider when choosing GPUs for applications powered by a Large Language Model (LLM) hosted on OpenShift AI. The following guidelines help you determine the hardware requirements for your application, depending on the size and expected usage of your model. Estimating memory needs : A general rule of thumb is that a model with N parameters in 16-bit precision requires approximately 2N bytes of GPU memory. For example, an 8-billion-parameter model requires around 16GB of GPU memory, while a 70-billion-parameter model requires around 140GB. Quantization : To reduce memory requirements and potentially improve throughput, you can use quantization to load or run the model at lower-precision formats such as INT8, FP8, or INT4. This reduces the memory footprint at the expense of a slight reduction in model accuracy. Note The vLLM ServingRuntime for KServe model-serving runtime supports several quantization methods. For more information about supported implementations and compatible hardware, see Supported hardware for quantization kernels . Additional memory for key-value cache : In addition to model weights, GPU memory is also needed to store the attention key-value (KV) cache, which increases with the number of requests and the sequence length of each request. This can impact performance in real-time applications, especially for larger models.
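As a quick back-of-the-envelope check of the guidance above, the following sketch applies the 2 bytes-per-parameter rule; it covers only model weights, so add headroom for the KV cache and runtime overhead.

```bash
# Rough sizing sketch: FP16 weights at ~2 bytes per parameter (weights only).
PARAMS_IN_BILLIONS=8
WEIGHT_MEMORY_GB=$((PARAMS_IN_BILLIONS * 2))   # ~16 GB for an 8B model, ~140 GB for a 70B model
echo "Estimated weight memory: ${WEIGHT_MEMORY_GB} GB (excludes KV cache)"
```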
Recommended GPU configurations : Small Models (1B-8B parameters) : For models in the range, a GPU with 24GB of memory is generally sufficient to support a small number of concurrent users. Medium Models (10B-34B parameters) : Models under 20B parameters require at least 48GB of GPU memory. Models that are between 20B - 34B parameters require at least 80GB or more of memory in a single GPU. Large Models (70B parameters) : Models in this range may need to be distributed across multiple GPUs by using tensor parallelism techniques. Tensor parallelism allows the model to span multiple GPUs, improving inter-token latency and increasing the maximum batch size by freeing up additional memory for KV cache. Tensor parallelism works best when GPUs have fast interconnects such as an NVLink. Very Large Models (405B parameters) : For extremely large models, quantization is recommended to reduce memory demands. You can also distribute the model using pipeline parallelism across multiple GPUs, or even across two servers. This approach allows you to scale beyond the memory limitations of a single server, but requires careful management of inter-server communication for optimal performance. For best results, start with smaller models and then scale up to larger models as required, using techniques such as parallelism and quantization to meet your performance and memory requirements. Additional resources Distributed serving 2.19.2. Performance considerations for text-summarization and retrieval-augmented generation (RAG) applications There are additional factors that need to be taken into consideration for text-summarization and RAG applications, as well as for LLM-powered services that process large documents uploaded by users. Longer Input Sequences : The input sequence length can be significantly longer than in a typical chat application, if each user query includes a large prompt or a large amount of context such as an uploaded document. The longer input sequence length increases the prefill time , the time the model takes to process the initial input sequence before generating a response, which can then lead to a higher Time-to-First-Token (TTFT) . A longer TTFT may impact the responsiveness of the application. Minimize this latency for optimal user experience. KV Cache Usage : Longer sequences require more GPU memory for the key-value (KV) cache . The KV cache stores intermediate attention data to improve model performance during generation. A high KV cache utilization per request requires a hardware setup with sufficient GPU memory. This is particularly crucial if multiple users are querying the model concurrently, as each request adds to the total memory load. Optimal Hardware Configuration : To maintain responsiveness and avoid memory bottlenecks, select a GPU configuration with sufficient memory. For instance, instead of running an 8B model on a single 24GB GPU, deploying it on a larger GPU (e.g., 48GB or 80GB) or across multiple GPUs can improve performance by providing more memory headroom for the KV cache and reducing inter-token latency. Multi-GPU setups with tensor parallelism can also help manage memory demands and improve efficiency for larger input sequences. In summary, to ensure optimal responsiveness and scalability for document-based applications, you must prioritize hardware with high GPU memory capacity and also consider multi-GPU configurations to handle the increased memory requirements of long input sequences and KV caching. 2.19.3. 
Inference performance metrics Latency , throughput and cost per million tokens are key metrics to consider when evaluating the response generation efficiency of a model during inferencing. These metrics provide a comprehensive view of a model's inference performance and can help balance speed, efficiency, and cost for different use cases. 2.19.3.1. Latency Latency is critical for interactive or real-time use cases, and is measured using the following metrics: Time-to-First-Token (TTFT) : The delay in milliseconds between the initial request and the generation of the first token. This metric is important for streaming responses. Inter-Token Latency (ITL) : The time taken in milliseconds to generate each subsequent token after the first, also relevant for streaming. Time-Per-Output-Token (TPOT) : For non-streaming requests, the average time taken in milliseconds to generate each token in an output sequence. 2.19.3.2. Throughput Throughput measures the overall efficiency of a model server and is expressed with the following metrics: Tokens per Second (TPS) : The total number of tokens generated per second across all active requests. Requests per Second (RPS) : The number of requests processed per second. RPS, like response time, is sensitive to sequence length. 2.19.3.3. Cost per million tokens Cost per Million Tokens measures the cost-effectiveness of a model's inference, indicating the expense incurred per million tokens generated. This metric helps to assess both the economic feasibility and scalability of deploying the model. 2.19.4. Resolving CUDA out-of-memory errors In certain cases, depending on the model and hardware accelerator used, the TGIS memory auto-tuning algorithm might underestimate the amount of GPU memory needed to process long sequences. This miscalculation can lead to Compute Unified Architecture (CUDA) out-of-memory (OOM) error responses from the model server. In such cases, you must update or add additional parameters in the TGIS model-serving runtime, as described in the following procedure. Prerequisites You have logged in to OpenShift AI as a user with OpenShift AI administrator privileges. Procedure From the OpenShift AI dashboard, click Settings > Serving runtimes . The Serving runtimes page opens and shows the model-serving runtimes that are already installed and enabled. Based on the runtime that you used to deploy your model, perform one of the following actions: If you used the pre-installed TGIS Standalone ServingRuntime for KServe runtime, duplicate the runtime to create a custom version and then follow the remainder of this procedure. For more information about duplicating the pre-installed TGIS runtime, see Adding a custom model-serving runtime for the single-model serving platform . If you were already using a custom TGIS runtime, click the action menu (...) to the runtime and select Edit . The embedded YAML editor opens and shows the contents of the custom model-serving runtime. Add or update the BATCH_SAFETY_MARGIN environment variable and set the value to 30. Similarly, add or update the ESTIMATE_MEMORY_BATCH_SIZE environment variable and set the value to 8. Note The BATCH_SAFETY_MARGIN parameter sets a percentage of free GPU memory to hold back as a safety margin to avoid OOM conditions. The default value of BATCH_SAFETY_MARGIN is 20 . The ESTIMATE_MEMORY_BATCH_SIZE parameter sets the batch size used in the memory auto-tuning algorithm. The default value of ESTIMATE_MEMORY_BATCH_SIZE is 16 . Click Update . 
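The env section that you edited resembles the following sketch; only the relevant fields are shown, and the container name follows the convention used by the preinstalled runtimes.

```yaml
# Excerpt sketch: environment variables in the custom TGIS ServingRuntime.
spec:
  containers:
    - name: kserve-container
      env:
        - name: BATCH_SAFETY_MARGIN
          value: "30"                  # percentage of free GPU memory held back as a safety margin
        - name: ESTIMATE_MEMORY_BATCH_SIZE
          value: "8"                   # batch size used by the memory auto-tuning algorithm
```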
The Serving runtimes page opens and shows the list of runtimes that are installed. Observe that the custom model-serving runtime you updated is shown. To redeploy the model for the parameter updates to take effect, perform the following actions: From the OpenShift AI dashboard, click Model Serving > Deployed Models . Find the model you want to redeploy, click the action menu (...) to the model, and select Delete . Redeploy the model as described in Deploying models on the single-model serving platform . Verification You receive successful responses from the model server and no longer see CUDA OOM errors. 2.20. About the NVIDIA NIM model serving platform You can deploy models using NVIDIA NIM inference services on the NVIDIA NIM model serving platform . NVIDIA NIM, part of NVIDIA AI Enterprise, is a set of microservices designed for secure, reliable deployment of high performance AI model inferencing across clouds, data centers and workstations. Additional resources NVIDIA NIM 2.20.1. Enabling the NVIDIA NIM model serving platform As an administrator, you can use the Red Hat OpenShift AI dashboard to enable the NVIDIA NIM model serving platform. Note If you previously enabled the NVIDIA NIM model serving platform in OpenShift AI, and then upgraded to a newer version, re-enter your NVIDIA NGC API key to re-enable the NVIDIA NIM model serving platform. Prerequisites You have logged in to Red Hat OpenShift AI as an administrator. You have enabled the single-model serving platform. You do not need to enable a preinstalled runtime. For more information about enabling the single-model serving platform, see Enabling the single-model serving platform . The disableNIMModelServing OpenShift AI dashboard configuration is set to false : For more information, see Dashboard configuration options . You have enabled GPU support in OpenShift AI. This includes installing the Node Feature Discovery operator and NVIDIA GPU Operators. For more information, see Installing the Node Feature Discovery operator and Enabling NVIDIA GPUs . You have an NVIDIA Cloud Account (NCA) and can access the NVIDIA GPU Cloud (NGC) portal. For more information, see NVIDIA GPU Cloud user guide . Your NCA account is associated with the NVIDIA AI Enterprise Viewer role. You have generated an NGC API key on the NGC portal. For more information, see NGC API keys . Procedure Log in to OpenShift AI. In the left menu of the OpenShift AI dashboard, click Applications Explore . On the Explore page, find the NVIDIA NIM tile. Click Enable on the application tile. Enter the NGC API key and then click Submit . Verification The NVIDIA NIM application that you enabled appears on the Enabled page. 2.20.2. Deploying models on the NVIDIA NIM model serving platform When you have enabled the NVIDIA NIM model serving platform , you can start to deploy NVIDIA-optimized models on the platform. Prerequisites You have logged in to Red Hat OpenShift AI. If you are using OpenShift AI groups, you are part of the user group or admin group (for example, rhoai-users or rhoai-admins ) in OpenShift. You have enabled the NVIDIA NIM model serving platform . You have created a data science project. You have enabled support for graphic processing units (GPUs) in OpenShift AI. This includes installing the Node Feature Discovery operator and NVIDIA GPU Operators. For more information, see Installing the Node Feature Discovery operator and Enabling NVIDIA GPUs . Procedure In the left menu, click Data Science Projects . The Data Science Projects page opens. 
Click the name of the project that you want to deploy a model in. A project details page opens. Click the Models tab. In the Models section, perform one of the following actions: On the NVIDIA NIM model serving platform tile, click Select NVIDIA NIM on the tile, and then click Deploy model . If you have previously selected the NVIDIA NIM model serving type, the Models page displays NVIDIA model serving enabled on the upper-right corner, along with the Deploy model button. To proceed, click Deploy model . The Deploy model dialog opens. Configure properties for deploying your model as follows: In the Model deployment name field, enter a unique name for the deployment. From the NVIDIA NIM list, select the NVIDIA NIM model that you want to deploy. For more information, see Supported Models In the NVIDIA NIM storage size field, specify the size of the cluster storage instance that will be created to store the NVIDIA NIM model. In the Number of model server replicas to deploy field, specify a value. From the Model server size list, select a value. From the Accelerator list, select an accelerator. The Number of accelerators field appears. In the Number of accelerators field, specify the number of accelerators to use. The default value is 1. Optional: In the Model route section, select the Make deployed models available through an external route checkbox to make your deployed models available to external clients. To require token authentication for inference requests to the deployed model, perform the following actions: Select Require token authentication . In the Service account name field, enter the service account name that the token will be generated for. To add an additional service account, click Add a service account and enter another service account name. Click Deploy . Verification Confirm that the deployed model is shown on the Models tab for the project, and on the Model Serving page of the dashboard with a checkmark in the Status column. Additional resources NVIDIA NIM API reference Supported Models 2.20.3. Enabling NVIDIA NIM metrics for an existing NIM deployment If you have previously deployed a NIM model in OpenShift AI, and then upgraded to 2.18, you must manually enable NIM metrics for your existing deployment by adding annotations to enable metrics collection and graph generation. Note NIM metrics and graphs are automatically enabled for new deployments in 2.17. 2.20.3.1. Enabling graph generation for an existing NIM deployment The following procedure describes how to enable graph generation for an existing NIM deployment. Prerequisites You have cluster administrator privileges for your OpenShift cluster. You have downloaded and installed the OpenShift command-line interface (CLI). For more information, see Installing the OpenShift CLI . You have an existing NIM deployment in OpenShift AI. Procedure In a terminal window, if you are not already logged in to your OpenShift cluster as a cluster administrator, log in to the OpenShift CLI. Confirm the name of the ServingRuntime associated with your NIM deployment: Replace <namespace> with the namespace of the project where your NIM model is deployed. Check for an existing metadata.annotations section in the ServingRuntime configuration: Replace <servingruntime-name> with the name of the ServingRuntime from the step. 
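The check can be performed with a command along the following lines; the exact form is an assumption, and the resource and namespace names are placeholders.

```bash
# Hedged sketch: inspect the annotations of the ServingRuntime used by the NIM deployment.
oc get servingruntime <servingruntime-name> -n <namespace> -o jsonpath='{.metadata.annotations}'
```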
Perform one of the following actions: If the metadata.annotations section is not present in the configuration, add the section with the required annotations: You see output similar to the following: If there is an existing metadata.annotations section, add the required annotations to the section: You see output similar to the following: Verification Confirm that the annotation has been added to the ServingRuntime of your existing NIM deployment. The annotation that you added appears in the output: Note For metrics to be available for graph generation, you must also enable metrics collection for your deployment. See Enabling metrics collection for an existing NIM deployment . 2.20.3.2. Enabling metrics collection for an existing NIM deployment To enable metrics collection for your existing NIM deployment, you must manually add the Prometheus endpoint and port annotations to the InferenceService of your deployment. The following procedure describes how to add the required Prometheus annotations to the InferenceService of your NIM deployment. Prerequisites You have cluster administrator privileges for your OpenShift cluster. You have downloaded and installed the OpenShift command-line interface (CLI). For more information, see Installing the OpenShift CLI . You have an existing NIM deployment in OpenShift AI. Procedure In a terminal window, if you are not already logged in to your OpenShift cluster as a cluster administrator, log in to the OpenShift CLI. Confirm the name of the InferenceService associated with your NIM deployment: Replace <namespace> with the namespace of the project where your NIM model is deployed. Check if there is an existing spec.predictor.annotations section in the InferenceService configuration: Replace <inferenceservice-name> with the name of the InferenceService from the previous step. Perform one of the following actions: If the spec.predictor.annotations section does not exist in the configuration, add the section and required annotations: The annotation that you added appears in the output: If there is an existing spec.predictor.annotations section, add the Prometheus annotations to the section: The annotations that you added appear in the output: Verification Confirm that the annotations have been added to the InferenceService . You see the annotations that you added in the output: 2.20.4. Viewing NVIDIA NIM metrics for a NIM model In OpenShift AI, you can observe the following NVIDIA NIM metrics for a NIM model deployed on the NVIDIA NIM model serving platform: GPU cache usage over time (ms) Current running, waiting, and max requests count Tokens count Time to first token Time per output token Request outcomes You can specify a time range and a refresh interval for these metrics to help you determine, for example, the peak usage hours and model performance at a specified time. Prerequisites You have enabled the NVIDIA NIM model serving platform. You have deployed a NIM model on the NVIDIA NIM model serving platform. If you are using OpenShift AI groups, you are part of the user group or admin group (for example, rhoai-users or rhoai-admins ) in OpenShift. The disableKServeMetrics OpenShift AI dashboard configuration option is set to its default value of false : For more information, see Dashboard configuration options . Procedure From the OpenShift AI dashboard navigation menu, click Data Science Projects . The Data Science Projects page opens. Click the name of the project that contains the NIM model that you want to monitor.
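If the metrics graphs later in this procedure stay empty, it can help to first confirm from a terminal that the Prometheus annotations described in the previous section are present on the deployment. A sketch using hypothetical names (nim-demo and nim-llm are placeholders, not values from this document):
oc get inferenceservice -n nim-demo nim-llm -o jsonpath='{.spec.predictor.annotations}'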
In the project details page, click the Models tab. Click the NIM model that you want to observe. On the NIM Metrics tab, set the following options: Time range - Specifies how long to track the metrics. You can select one of these values: 1 hour, 24 hours, 7 days, and 30 days. Refresh interval - Specifies how frequently the graphs on the metrics page are refreshed (to show the latest data). You can select one of these values: 15 seconds, 30 seconds, 1 minute, 5 minutes, 15 minutes, 30 minutes, 1 hour, 2 hours, and 1 day. Scroll down to view data graphs for NIM metrics. Verification The NIM Metrics tab shows graphs of NIM metrics for the deployed NIM model. Additional resources NVIDIA NIM observability 2.20.5. Viewing performance metrics for a NIM model You can observe the following performance metrics for a NIM model deployed on the NVIDIA NIM model serving platform: Number of requests - The number of requests that have failed or succeeded for a specific model. Average response time (ms) - The average time it takes a specific model to respond to requests. CPU utilization (%) - The percentage of the CPU limit per model replica that is currently utilized by a specific model. Memory utilization (%) - The percentage of the memory limit per model replica that is utilized by a specific model. You can specify a time range and a refresh interval for these metrics to help you determine, for example, the peak usage hours and model performance at a specified time. Prerequisites You have enabled the NVIDIA NIM model serving platform. You have deployed a NIM model on the NVIDIA NIM model serving platform. If you are using OpenShift AI groups, you are part of the user group or admin group (for example, rhoai-users or rhoai-admins ) in OpenShift. The disableKServeMetrics OpenShift AI dashboard configuration option is set to its default value of false : For more information, see Dashboard configuration options . Procedure From the OpenShift AI dashboard navigation menu, click Data Science Projects . The Data Science Projects page opens. Click the name of the project that contains the NIM model that you want to monitor. In the project details page, click the Models tab. Click the NIM model that you want to observe. On the Endpoint performance tab, set the following options: Time range - Specifies how long to track the metrics. You can select one of these values: 1 hour, 24 hours, 7 days, and 30 days. Refresh interval - Specifies how frequently the graphs on the metrics page are refreshed to show the latest data. You can select one of these values: 15 seconds, 30 seconds, 1 minute, 5 minutes, 15 minutes, 30 minutes, 1 hour, 2 hours, and 1 day. Scroll down to view data graphs for performance metrics. Verification The Endpoint performance tab shows graphs of performance metrics for the deployed NIM model. | [
"apiVersion: serving.kserve.io/v1alpha1 kind: ServingRuntime metadata: annotations: opendatahub.io/recommended-accelerators: '[\"nvidia.com/gpu\"]' 1 openshift.io/display-name: vLLM ServingRuntime for KServe 2 labels: opendatahub.io/dashboard: \"true\" name: vllm-runtime spec: annotations: prometheus.io/path: /metrics 3 prometheus.io/port: \"8080\" 4 containers : - args: - --port=8080 - --model=/mnt/models 5 - --served-model-name={{.Name}} 6 command: 7 - python - '-m' - vllm.entrypoints.openai.api_server env: - name: HF_HOME value: /tmp/hf_home image: 8 quay.io/modh/vllm@sha256:8a3dd8ad6e15fe7b8e5e471037519719d4d8ad3db9d69389f2beded36a6f5b21 name: kserve-container ports: - containerPort: 8080 protocol: TCP multiModel: false 9 supportedModelFormats: 10 - autoSelect: true name: vLLM",
"apiVersion: serving.kserve.io/v1beta1 kind: InferenceService metadata: annotations: openshift.io/display-name: granite serving.knative.openshift.io/enablePassthrough: 'true' sidecar.istio.io/inject: 'true' sidecar.istio.io/rewriteAppHTTPProbers: 'true' name: granite labels: opendatahub.io/dashboard: 'true' spec: predictor: maxReplicas: 1 minReplicas: 1 model: modelFormat: name: vLLM name: '' resources: limits: cpu: '6' memory: 24Gi nvidia.com/gpu: '1' requests: cpu: '1' memory: 8Gi nvidia.com/gpu: '1' runtime: vLLM ServingRuntime for KServe storage: key: aws-connection-my-storage path: models/granite-7b-instruct/ tolerations: - effect: NoSchedule key: nvidia.com/gpu operator: Exists",
"curl --json '{\"model_id\": \"<model_name__>\", \"inputs\": \"<text>\"}' https://<inference_endpoint_url>:443/api/v1/task/server-streaming-text-generation -H 'Authorization: Bearer <token>'",
"curl -H 'Content-Type: application/json' -d '{\"inputs\": \"<text>\", \"model_id\": \"<model_id>\"}' <inference_endpoint_url>/api/v1/task/embedding -H 'Authorization: Bearer <token>'",
"grpcurl -d '{\"text\": \"<text>\"}' -H \\\"mm-model-id: <model_id>\\\" <inference_endpoint_url>:443 caikit.runtime.Nlp.NlpService/EmbeddingTaskPredict -H 'Authorization: Bearer <token>'",
"grpcurl -proto text-generation-inference/proto/generation.proto -d '{\"requests\": [{\"text\":\"<text>\"}]}' -H 'Authorization: Bearer <token>' -insecure <inference_endpoint_url>:443 fmaas.GenerationService/Generate",
"curl -ks <inference_endpoint_url>/v2/models/<model_name>/infer -d '{ \"model_name\": \"<model_name>\", \"inputs\": [{ \"name\": \"<name_of_model_input>\", \"shape\": [<shape>], \"datatype\": \"<data_type>\", \"data\": [<data>] }]}' -H 'Authorization: Bearer <token>'",
"containers: - args: - --chat-template=<CHAT_TEMPLATE>",
"curl -v https://<inference_endpoint_url>:443/v1/chat/completions -H \"Content-Type: application/json\" -d '{ \"messages\": [{ \"role\": \"<role>\", \"content\": \"<content>\" }] -H 'Authorization: Bearer <token>'",
"curl -ks <inference_endpoint_url>/v2/models/<model_name>/infer -d '{ \"model_name\": \"<model_name>\", \"inputs\": [{ \"name\": \"<name_of_model_input>\", \"shape\": [<shape>], \"datatype\": \"<data_type>\", \"data\": [<data>] }]}' -H 'Authorization: Bearer <token>'",
"grpcurl -cacert ./openshift_ca_istio_knative.crt -proto ./grpc_predict_v2.proto -d @ -H \"Authorization: Bearer <token>\" <inference_endpoint_url>:443 inference.GRPCInferenceService/ModelMetadata",
"oc login <openshift_cluster_url> -u <admin_username> -p <password>",
"oc edit dsci -n redhat-ods-operator",
"spec: components: serviceMesh: managementState: Removed",
"oc new-project <project_name> --description=\"<description>\" --display-name=\"<display_name>\"",
"kserve: defaultDeploymentMode: RawDeployment managementState: Managed serving: managementState: Removed name: knative-serving",
"apiVersion: v1 kind: Secret metadata: annotations: serving.kserve.io/s3-endpoint: <AWS_ENDPOINT> serving.kserve.io/s3-usehttps: \"1\" serving.kserve.io/s3-region: <AWS_REGION> serving.kserve.io/s3-useanoncredential: \"false\" name: <Secret-name> stringData: AWS_ACCESS_KEY_ID: \"<AWS_ACCESS_KEY_ID>\" AWS_SECRET_ACCESS_KEY: \"<AWS_SECRET_ACCESS_KEY>\"",
"oc apply -f secret.yaml -n <namespace>",
"apiVersion: v1 kind: ServiceAccount metadata: name: models-bucket-sa secrets: - name: s3creds",
"oc apply -f serviceAccount.yaml -n <namespace>",
"apiVersion: serving.kserve.io/v1alpha1 kind: ServingRuntime metadata: name: ovms-runtime spec: annotations: prometheus.io/path: /metrics prometheus.io/port: \"8888\" containers: - args: - --model_name={{.Name}} - --port=8001 - --rest_port=8888 - --model_path=/mnt/models - --file_system_poll_wait_seconds=0 - --grpc_bind_address=0.0.0.0 - --rest_bind_address=0.0.0.0 - --target_device=AUTO - --metrics_enable image: quay.io/modh/openvino_model_server@sha256:6c7795279f9075bebfcd9aecbb4a4ce4177eec41fb3f3e1f1079ce6309b7ae45 name: kserve-container ports: - containerPort: 8888 protocol: TCP multiModel: false protocolVersions: - v2 - grpc-v2 supportedModelFormats: - autoSelect: true name: openvino_ir version: opset13 - name: onnx version: \"1\" - autoSelect: true name: tensorflow version: \"1\" - autoSelect: true name: tensorflow version: \"2\" - autoSelect: true name: paddle version: \"2\" - autoSelect: true name: pytorch version: \"2\"",
"oc apply -f <serving run time file name> -n <namespace>",
"apiVersion: serving.kserve.io/v1beta1 kind: InferenceService metadata: annotations: serving.knative.openshift.io/enablePassthrough: \"true\" sidecar.istio.io/inject: \"true\" sidecar.istio.io/rewriteAppHTTPProbers: \"true\" serving.kserve.io/deploymentMode: RawDeployment name: <InferenceService-Name> spec: predictor: scaleMetric: minReplicas: 1 scaleTarget: canaryTrafficPercent: serviceAccountName: <serviceAccountName> model: env: [] volumeMounts: [] modelFormat: name: onnx runtime: ovms-runtime storageUri: s3://<bucket_name>/<model_directory_path> resources: requests: memory: 5Gi volumes: []",
"oc apply -f <InferenceService CR file name> -n <namespace>",
"oc get pods -n <namespace>",
"NAME READY STATUS RESTARTS AGE <isvc_name>-predictor-xxxxx-2mr5l 1/1 Running 2 165m console-698d866b78-m87pm 1/1 Running 2 165m",
"oc -n <namespace> port-forward pod/<pod-name> <local_port>:<remote_port>",
"apiVersion: serving.kserve.io/v1alpha1 kind: ServingRuntime metadata: name: triton-kserve-rest labels: opendatahub.io/dashboard: \"true\" spec: annotations: prometheus.kserve.io/path: /metrics prometheus.kserve.io/port: \"8002\" containers: - args: - tritonserver - --model-store=/mnt/models - --grpc-port=9000 - --http-port=8080 - --allow-grpc=true - --allow-http=true image: nvcr.io/nvidia/tritonserver@sha256:xxxxx name: kserve-container resources: limits: cpu: \"1\" memory: 2Gi requests: cpu: \"1\" memory: 2Gi ports: - containerPort: 8080 protocol: TCP protocolVersions: - v2 - grpc-v2 supportedModelFormats: - autoSelect: true name: tensorrt version: \"8\" - autoSelect: true name: tensorflow version: \"1\" - autoSelect: true name: tensorflow version: \"2\" - autoSelect: true name: onnx version: \"1\" - name: pytorch version: \"1\" - autoSelect: true name: triton version: \"2\" - autoSelect: true name: xgboost version: \"1\" - autoSelect: true name: python version: \"1\"",
"apiVersion: serving.kserve.io/v1alpha1 kind: ServingRuntime metadata: name: triton-kserve-grpc labels: opendatahub.io/dashboard: \"true\" spec: annotations: prometheus.kserve.io/path: /metrics prometheus.kserve.io/port: \"8002\" containers: - args: - tritonserver - --model-store=/mnt/models - --grpc-port=9000 - --http-port=8080 - --allow-grpc=true - --allow-http=true image: nvcr.io/nvidia/tritonserver@sha256:xxxxx name: kserve-container ports: - containerPort: 9000 name: h2c protocol: TCP volumeMounts: - mountPath: /dev/shm name: shm resources: limits: cpu: \"1\" memory: 2Gi requests: cpu: \"1\" memory: 2Gi protocolVersions: - v2 - grpc-v2 supportedModelFormats: - autoSelect: true name: tensorrt version: \"8\" - autoSelect: true name: tensorflow version: \"1\" - autoSelect: true name: tensorflow version: \"2\" - autoSelect: true name: onnx version: \"1\" - name: pytorch version: \"1\" - autoSelect: true name: triton version: \"2\" - autoSelect: true name: xgboost version: \"1\" - autoSelect: true name: python version: \"1\" volumes: - emptyDir: null medium: Memory sizeLimit: 2Gi name: shm",
"apiVersion: serving.kserve.io/v1alpha1 kind: ServingRuntime metadata: name: kserve-triton annotations: openshift.io/display-name: Triton ServingRuntime",
"apiVersion: serving.kserve.io/v1alpha1 kind: InferenceService metadata: name: my-inference-service spec: predictor: annotations: serving.knative.dev/progress-deadline: 30m",
"get -o json inferenceservice <inferenceservicename/modelname> -n <projectname>",
"cd USD(mktemp -d)",
"mkdir -p models/1",
"DOWNLOAD_URL=https://github.com/onnx/models/raw/main/validated/vision/classification/mobilenet/model/mobilenetv2-7.onnx curl -L USDDOWNLOAD_URL -O --output-dir models/1/",
"tree",
". ├── Containerfile └── models └── 1 └── mobilenetv2-7.onnx",
"FROM registry.access.redhat.com/ubi9/ubi-micro:latest COPY --chown=0:0 models /models RUN chmod -R a=rX /models nobody user USER 65534",
"build --format=oci -t quay.io/<user_name>/<repository_name>:<tag_name> . push quay.io/<user_name>/<repository_name>:<tag_name>",
"new-project oci-model-example",
"process -n redhat-ods-applications -o yaml kserve-ovms | oc apply -f -",
"get servingruntimes",
"NAME DISABLED MODELTYPE CONTAINERS AGE kserve-ovms openvino_ir kserve-container 1m",
"apiVersion: serving.kserve.io/v1beta1 kind: InferenceService metadata: name: sample-isvc-using-oci spec: predictor: model: runtime: kserve-ovms # Ensure this matches the name of the ServingRuntime resource modelFormat: name: onnx storageUri: oci://quay.io/<user_name>/<repository_name>:<tag_name> resources: requests: memory: 500Mi cpu: 100m # nvidia.com/gpu: \"1\" # Only required if you have GPUs available and the model and runtime will use it limits: memory: 4Gi cpu: 500m # nvidia.com/gpu: \"1\" # Only required if you have GPUs available and the model and runtime will use it",
"apiVersion: serving.kserve.io/v1beta1 kind: InferenceService metadata: name: sample-isvc-using-private-oci spec: predictor: model: runtime: kserve-ovms # Ensure this matches the name of the ServingRuntime resource modelFormat: name: onnx storageUri: oci://quay.io/<user_name>/<repository_name>:<tag_name> resources: requests: memory: 500Mi cpu: 100m # nvidia.com/gpu: \"1\" # Only required if you have GPUs available and the model and runtime will use it limits: memory: 4Gi cpu: 500m # nvidia.com/gpu: \"1\" # Only required if you have GPUs available and the model and runtime will use it imagePullSecrets: # Specify image pull secrets to use for fetching container images, including OCI model images - name: <pull-secret-name>",
"get inferenceservice",
"`VLLM_SKIP_WARMUP=\"true\"`",
"--distributed-executor-backend=mp 1 --max-model-len=6144 2",
"--distributed-executor-backend=mp 1",
"--distributed-executor-backend=mp 1 --tensor-parallel-size=4 2 --max-model-len=6448 3",
"--distributed-executor-backend=mp 1 --max-model-len=15344 2",
"curl -q -X 'POST' \"https://<inference_endpoint_url>:443/v1/chat/completions\" -H 'accept: application/json' -H 'Content-Type: application/json' -d \"{ \\\"model\\\": \\\"<model_name>\\\", \\\"prompt\\\": \\\"<prompt>\", \\\"max_tokens\\\": <max_tokens>, \\\"temperature\\\": <temperature> }\"",
"oc login <openshift_cluster_url> -u <admin_username> -p <password>",
"new-project kserve-demo",
"apply -f - apiVersion: v1 kind: PersistentVolumeClaim metadata: name: granite-8b-code-base-pvc spec: accessModes: - ReadWriteMany volumeMode: Filesystem resources: requests: storage: <model size> storageClassName: <storage class>",
"apiVersion: v1 kind: Pod metadata: name: download-granite-8b-code labels: name: download-granite-8b-code spec: volumes: - name: model-volume persistentVolumeClaim: claimName: granite-8b-code-claim restartPolicy: Never initContainers: - name: fix-volume-permissions image: quay.io/quay/busybox@sha256:92f3298bf80a1ba949140d77987f5de081f010337880cd771f7e7fc928f8c74d command: [\"sh\"] args: [\"-c\", \"mkdir -p /mnt/models/USD(MODEL_PATH) && chmod -R 777 /mnt/models\"] 1 volumeMounts: - mountPath: \"/mnt/models/\" name: model-volume env: - name: MODEL_PATH value: <model path> 2 containers: - resources: requests: memory: 40Gi name: download-model imagePullPolicy: IfNotPresent image: quay.io/opendatahub/kserve-storage-initializer:v0.14 3 args: - 's3://USD(BUCKET_NAME)/USD(MODEL_PATH)/' - /mnt/models/USD(MODEL_PATH) env: - name: AWS_ACCESS_KEY_ID value: <id> 4 - name: AWS_SECRET_ACCESS_KEY value: <secret> 5 - name: BUCKET_NAME value: <bucket_name> 6 - name: MODEL_PATH value: <model path> 7 - name: S3_USE_HTTPS value: \"1\" - name: AWS_ENDPOINT_URL value: <AWS endpoint> 8 - name: awsAnonymousCredential value: 'false' - name: AWS_DEFAULT_REGION value: <region> 9 - name: S3_VERIFY_SSL value: 'true' 10 volumeMounts: - mountPath: \"/mnt/models/\" name: model-volume",
"process vllm-multinode-runtime-template -n redhat-ods-applications|oc apply -n kserve-demo -f -",
"apiVersion: serving.kserve.io/v1beta1 kind: InferenceService metadata: annotations: serving.kserve.io/deploymentMode: RawDeployment serving.kserve.io/autoscalerClass: external name: <inference service name> spec: predictor: model: modelFormat: name: vLLM runtime: vllm-multinode-runtime storageUri: pvc://<pvc name>/<model path> workerSpec: {}",
"Get pod name podName=USD(oc get pod -l app=isvc.granite-8b-code-base-pvc-predictor --no-headers|cut -d' ' -f1) workerPodName=USD(oc get pod -l app=isvc.granite-8b-code-base-pvc-predictor-worker --no-headers|cut -d' ' -f1) wait --for=condition=ready pod/USD{podName} --timeout=300s Check the GPU memory size for both the head and worker pods: echo \"### HEAD NODE GPU Memory Size\" exec USDpodName -- nvidia-smi echo \"### Worker NODE GPU Memory Size\" exec USDworkerPodName -- nvidia-smi",
"+-----------------------------------------------------------------------------------------+ | NVIDIA-SMI 550.90.07 Driver Version: 550.90.07 CUDA Version: 12.4 | |-----------------------------------------+------------------------+----------------------+ | GPU Name Persistence-M | Bus-Id Disp.A | Volatile Uncorr. ECC | | Fan Temp Perf Pwr:Usage/Cap | Memory-Usage | GPU-Util Compute M. | | | | MIG M. | |=========================================+========================+======================| | 0 NVIDIA A10G On | 00000000:00:1E.0 Off | 0 | | 0% 33C P0 71W / 300W |19031MiB / 23028MiB <1>| 0% Default | | | | N/A | +-----------------------------------------+------------------------+----------------------+ +-----------------------------------------------------------------------------------------+ | NVIDIA-SMI 550.90.07 Driver Version: 550.90.07 CUDA Version: 12.4 | |-----------------------------------------+------------------------+----------------------+ | GPU Name Persistence-M | Bus-Id Disp.A | Volatile Uncorr. ECC | | Fan Temp Perf Pwr:Usage/Cap | Memory-Usage | GPU-Util Compute M. | | | | MIG M. | |=========================================+========================+======================| | 0 NVIDIA A10G On | 00000000:00:1E.0 Off | 0 | | 0% 30C P0 69W / 300W |18959MiB / 23028MiB <2>| 0% Default | | | | N/A | +-----------------------------------------+------------------------+----------------------+",
"wait --for=condition=ready pod/USD{podName} -n USDDEMO_NAMESPACE --timeout=300s export MODEL_NAME=granite-8b-code-base-pvc",
"NAME URL READY PREV LATEST PREVROLLEDOUTREVISION LATESTREADYREVISION AGE granite-8b-code-base-pvc http://granite-8b-code-base-pvc.default.example.com",
"wait --for=condition=ready pod/USD{podName} -n vllm-multinode --timeout=300s port-forward USDpodName 8080:8080 & curl http://localhost:8080/v1/completions -H \"Content-Type: application/json\" -d \"{ 'model': \"USDMODEL_NAME\", 'prompt': 'At what temperature does Nitrogen boil?', 'max_tokens': 100, 'temperature': 0 }\"",
"oc login <openshift_cluster_url> -u <admin_username> -p <password>",
"apiVersion: v1 kind: ConfigMap metadata: name: user-workload-monitoring-config namespace: openshift-user-workload-monitoring data: config.yaml: | prometheus: logLevel: debug retention: 15d",
"oc apply -f uwm-cm-conf.yaml",
"apiVersion: v1 kind: ConfigMap metadata: name: cluster-monitoring-config namespace: openshift-monitoring data: config.yaml: | enableUserWorkload: true",
"oc apply -f uwm-cm-enable.yaml",
"apiVersion: monitoring.coreos.com/v1 kind: ServiceMonitor metadata: name: istiod-monitor namespace: istio-system spec: targetLabels: - app selector: matchLabels: istio: pilot endpoints: - port: http-monitoring interval: 30s",
"oc apply -f istiod-monitor.yaml",
"servicemonitor.monitoring.coreos.com/istiod-monitor created",
"apiVersion: monitoring.coreos.com/v1 kind: PodMonitor metadata: name: istio-proxies-monitor namespace: istio-system spec: selector: matchExpressions: - key: istio-prometheus-ignore operator: DoesNotExist podMetricsEndpoints: - path: /stats/prometheus interval: 30s",
"oc apply -f istio-proxies-monitor.yaml",
"podmonitor.monitoring.coreos.com/istio-proxies-monitor created",
"sum(increase(vllm:request_success_total{namespace= USD{namespace} ,model_name= USD{model_name} }[USD{rate_interval}]))",
"sum(increase(tgi_request_success{namespace=USD{namespace}, pod=~ USD{model_name}-predictor-.* }[USD{rate_interval}]))",
"sum(increase(predict_rpc_count_total{namespace= USD{namespace} ,code= OK ,model_id= USD{model_name} }[USD{rate_interval}]))",
"sum(increase(ovms_requests_success{namespace= USD{namespace} ,name= USD{model_name} }[USD{rate_interval}]))",
"disablePerformanceMetrics:false disableKServeMetrics:false",
"apply -k overlays/<overlay-name>",
"get route -n <namespace> grafana-route -o jsonpath='{.spec.host}'",
"grafana-<namespace>.apps.example-openshift.com",
"--speculative-model=[ngram] --num-speculative-tokens=<NUM_SPECULATIVE_TOKENS> --ngram-prompt-lookup-max=<NGRAM_PROMPT_LOOKUP_MAX> --use-v2-block-manager",
"--port=8080 --served-model-name={{.Name}} --distributed-executor-backend=mp --model=/mnt/models/<path_to_original_model> --speculative-model=/mnt/models/<path_to_speculative_model> --num-speculative-tokens=<NUM_SPECULATIVE_TOKENS> --use-v2-block-manager",
"--trust-remote-code",
"curl -v https://<inference_endpoint_url>:443/v1/chat/completions -H \"Content-Type: application/json\" -H \"Authorization: Bearer <token>\"",
"curl -v https://<inference_endpoint_url>:443/v1/chat/completions -H \"Content-Type: application/json\" -H \"Authorization: Bearer <token>\" -d '{\"model\":\"<model_name>\", \"messages\": [{\"role\":\"<role>\", \"content\": [{\"type\":\"text\", \"text\":\"<text>\" }, {\"type\":\"image_url\", \"image_url\":\"<image_url_link>\" } ] } ] }'",
"spec: containers: env: - name: BATCH_SAFETY_MARGIN value: 30 - name: ESTIMATE_MEMORY_BATCH value: 8",
"disableNIMModelServing: false",
"get servingruntime -n <namespace>",
"get servingruntime -n <namespace> <servingruntime-name> -o json | jq '.metadata.annotations'",
"patch servingruntime -n <namespace> <servingruntime-name> --type json --patch '[{\"op\": \"add\", \"path\": \"/metadata/annotations\", \"value\": {\"runtimes.opendatahub.io/nvidia-nim\": \"true\"}}]'",
"servingruntime.serving.kserve.io/nim-serving-runtime patched",
"patch servingruntime -n <project-namespace> <runtime-name> --type json --patch '[{\"op\": \"add\", \"path\": \"/metadata/annotations/runtimes.opendatahub.io~1nvidia-nim\", \"value\": \"true\"}]'",
"servingruntime.serving.kserve.io/nim-serving-runtime patched",
"get servingruntime -n <namespace> <servingruntime-name> -o json | jq '.metadata.annotations'",
"\"runtimes.opendatahub.io/nvidia-nim\": \"true\"",
"get inferenceservice -n <namespace>",
"get inferenceservice -n <namespace> <inferenceservice-name> -o json | jq '.spec.predictor.annotations'",
"patch inferenceservice -n <namespace> <inference-name> --type json --patch '[{\"op\": \"add\", \"path\": \"/spec/predictor/annotations\", \"value\": {\"prometheus.io/path\": \"/metrics\", \"prometheus.io/port\": \"8000\"}}]'",
"inferenceservice.serving.kserve.io/nim-serving-runtime patched",
"patch inferenceservice -n <namespace> <inference-service-name> --type json --patch '[{\"op\": \"add\", \"path\": \"/spec/predictor/annotations/prometheus.io~1path\", \"value\": \"/metrics\"}, {\"op\": \"add\", \"path\": \"/spec/predictor/annotations/prometheus.io~1port\", \"value\": \"8000\"}]'",
"inferenceservice.serving.kserve.io/nim-serving-runtime patched",
"get inferenceservice -n <namespace> <inferenceservice-name> -o json | jq '.spec.predictor.annotations'",
"{ \"prometheus.io/path\": \"/metrics\", \"prometheus.io/port\": \"8000\" }",
"disableKServeMetrics: false",
"disableKServeMetrics: false"
]
| https://docs.redhat.com/en/documentation/red_hat_openshift_ai_self-managed/2.18/html/serving_models/serving-large-models_serving-large-models |
3.3. XFS Quota Management | 3.3. XFS Quota Management The XFS quota subsystem manages limits on disk space (blocks) and file (inode) usage. XFS quotas control or report on usage of these items on a user, group, or directory or project level. Also, note that while user, group, and directory or project quotas are enabled independently, group and project quotas are mutually exclusive. When managing on a per-directory or per-project basis, XFS manages the disk usage of directory hierarchies associated with a specific project. In doing so, XFS recognizes cross-organizational "group" boundaries between projects. This provides a level of control that is broader than what is available when managing quotas for users or groups. XFS quotas are enabled at mount time, with specific mount options. Each mount option can also be specified as noenforce ; this allows usage reporting without enforcing any limits. Valid quota mount options are: uquota / uqnoenforce : User quotas gquota / gqnoenforce : Group quotas pquota / pqnoenforce : Project quota Once quotas are enabled, the xfs_quota tool can be used to set limits and report on disk usage. By default, xfs_quota is run interactively, and in basic mode . Basic mode subcommands simply report usage, and are available to all users. Basic xfs_quota subcommands include: quota username/userID Show usage and limits for the given username or numeric userID df Shows free and used counts for blocks and inodes. In contrast, xfs_quota also has an expert mode . The subcommands of this mode allow actual configuration of limits, and are available only to users with elevated privileges. To use expert mode subcommands interactively, use the following command: Expert mode subcommands include: report /path Reports quota information for a specific file system. limit Modify quota limits. For a complete list of subcommands for either basic or expert mode, use the subcommand help . All subcommands can also be run directly from a command line using the -c option, with -x for expert subcommands. Example 3.2. Display a Sample Quota Report For example, to display a sample quota report for /home (on /dev/blockdevice ), use the command xfs_quota -x -c 'report -h' /home . This displays output similar to the following: To set a soft and hard inode count limit of 500 and 700 respectively for user john , whose home directory is /home/john , use the following command: In this case, pass mount_point which is the mounted xfs file system. By default, the limit subcommand recognizes targets as users. When configuring the limits for a group, use the -g option (as in the example). Similarly, use -p for projects. Soft and hard block limits can also be configured using bsoft or bhard instead of isoft or ihard . Example 3.3. Set a Soft and Hard Block Limit For example, to set a soft and hard block limit of 1000m and 1200m, respectively, to group accounting on the /target/path file system, use the following command: Note The commands bsoft and bhard count by the byte. Important While real-time blocks ( rtbhard / rtbsoft ) are described in man xfs_quota as valid units when setting quotas, the real-time sub-volume is not enabled in this release. As such, the rtbhard and rtbsoft options are not applicable. Setting Project Limits With XFS file system, you can set quotas on individual directory hierarchies in the file system that are known as managed trees. Each managed tree is uniquely identified by a project ID and an optional project name . Add the project-controlled directories to /etc/projects . 
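Note that project limits only take effect on a file system that was mounted with project quotas enabled ( pquota or pqnoenforce ). As a minimal illustration — the device name and mount point are placeholders, not values from this guide — an /etc/fstab entry might look like:
/dev/blockdevice /var xfs defaults,pquota 0 0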
For example, the following adds the /var/log path with a unique ID of 11 to /etc/projects . Your project ID can be any numerical value mapped to your project. Add project names to /etc/projid to map project IDs to project names. For example, the following associates a project called logfiles with the project ID of 11 as defined in the step. Initialize the project directory. For example, the following initializes the project directory /var : Configure quotas for projects with initialized directories: Generic quota configuration tools ( quota , repquota , and edquota for example) may also be used to manipulate XFS quotas. However, these tools cannot be used with XFS project quotas. Important Red Hat recommends the use of xfs_quota over all other available tools. For more information about setting XFS quotas, see man xfs_quota , man projid(5) , and man projects(5) . | [
"xfs_quota -x",
"User quota on /home (/dev/blockdevice) Blocks User ID Used Soft Hard Warn/Grace ---------- --------------------------------- root 0 0 0 00 [------] testuser 103.4G 0 0 00 [------]",
"xfs_quota -x -c 'limit isoft=500 ihard=700 john' /home/",
"xfs_quota -x -c 'limit -g bsoft=1000m bhard=1200m accounting' /target/path",
"echo 11:/var/log >> /etc/projects",
"echo logfiles:11 >> /etc/projid",
"xfs_quota -x -c 'project -s logfiles' /var",
"xfs_quota -x -c 'limit -p bhard=lg logfiles' /var"
]
| https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/storage_administration_guide/xfsquota |
Chapter 4. Troubleshooting | Chapter 4. Troubleshooting This chapter contains logging and support information to assist with troubleshooting your Red Hat OpenStack Platform deployment. 4.1. Support If client commands fail or you run into other issues, contact Red Hat Technical Support with a description of what happened, the full console output, all log files referenced in the console output, and an sosreport from the node that is (or might be) in trouble. For example, if you encounter a problem on the compute level, run sosreport on the Nova node, or if it is a networking issue, run the utility on the Neutron node. For general deployment issues, it is best to run sosreport on the cloud controller. For information about the sosreport command ( sos package), refer to What is a sosreport and how to create one in Red Hat Enterprise Linux 4.6 and later . Check also the /var/log/messages file for any hints. 4.2. Troubleshoot Identity Client (keystone) Connectivity Problems When the Identity client ( keystone ) is unable to contact the Identity service it returns an error: To debug the issue check for these common causes: Identity service is down Identity Service now runs within httpd.service . On the system hosting the Identity service, check the service status: If the service is not active then log in as the root user and start it. Firewall is not configured properly The firewall might not be configured to allow TCP traffic on ports 5000 and 35357 . If so, see Managing the Overcloud Firewall in the Advanced Overcloud Customization guide for instructions on checking your firewall settings and defining custom rules. Service Endpoints not defined correctly On the system hosting the Identity service check that the endpoints are defined correctly. Obtain the administration token: Determine the correct administration endpoint for the Identity service: Replace IP with the IP address or host name of the system hosting the Identity service. Replace VERSION with the API version ( v2.0 , or v3 ) that is in use. Unset any pre-defined Identity service related environment variables: Use the administration token and endpoint to authenticate with the Identity service. Confirm that the Identity service endpoint is correct. For example: Verify that the listed publicurl , internalurl , and adminurl for the Identity service are correct. In particular ensure that the IP addresses and port numbers listed within each endpoint are correct and reachable over the network. If these values are incorrect, add the correct endpoint and remove any incorrect endpoints using the endpoint delete action of the openstack command. For example: Replace TOKEN and ENDPOINT with the values identified previously. Replace ID with the identity of the endpoint to remove as listed by the endpoint-list action. 4.3. Troubleshoot OpenStack Networking Issues This section discusses the different commands you can use and procedures you can follow to troubleshoot the OpenStack Networking service issues. Debugging Networking Device Use the ip a command to display all the physical and virtual devices. Use the ovs-vsctl show command to display the interfaces and bridges in a virtual switch. Use the ovs-dpctl show command to show datapaths on the switch. Tracking Networking Packets Use the tcpdump command to see where packets are not getting through. Replace INTERFACE with the name of the network interface to see where the packets are not getting through. The interface name can be the name of the bridge or host Ethernet device. 
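For instance, to watch DHCP traffic on the integration bridge while also writing the capture to a file — the bridge name br-int and the file name are examples, not values taken from this guide — you might run:
tcpdump -n -e -i br-int -w /tmp/dhcp.pcap port 67 or port 68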
The -e flag ensures that the link-level header is dumped (in which the vlan tag will appear). The -w flag is optional. You can use it only if you want to write the output to a file. If not, the output is written to the standard output ( stdout ). For more information about tcpdump , refer to its manual page by running man tcpdump . Debugging Network Namespaces Use the ip netns list command to list all known network namespaces. Use the ip netns exec command to show routing tables inside specific namespaces. Start the ip netns exec command in a bash shell so that subsequent commands can be invoked without the ip netns exec command. 4.4. Troubleshoot Networks and Routes Tab Display Issues in the Dashboard The Networks and Routers tabs only appear in the dashboard when the environment is configured to use OpenStack Networking. In particular note that by default the PackStack utility currently deploys Nova Networking and as such in environments deployed in this manner the tab will not be visible. If OpenStack Networking is deployed in the environment but the tabs still do not appear ensure that the service endpoints are defined correctly in the Identity service, that the firewall is allowing access to the endpoints, and that the services are running. 4.5. Troubleshoot Instance Launching Errors in the Dashboard When using the dashboard to launch instances if the operation fails, a generic ERROR message is displayed. Determining the actual cause of the failure requires the use of the command line tools. Use the nova list command to locate the unique identifier of the instance. Then use this identifier as an argument to the nova show command. One of the items returned will be the error condition. The most common value is NoValidHost . This error indicates that no valid host was found with enough available resources to host the instance. To work around this issue, consider choosing a smaller instance size or increasing the overcommit allowances for your environment. Note To host a given instance, the compute node must have not only available CPU and RAM resources but also enough disk space for the ephemeral storage associated with the instance. 4.6. Troubleshoot Keystone v3 Dashboard Authentication django_openstack_auth is a pluggable Django authentication back end, that works with Django's contrib.auth framework, to authenticate a user against the OpenStack Identity service API. Django_openstack_auth uses the token object to encapsulate user and Keystone related information. The dashboard uses the token object to rebuild the Django user object. The token object currently stores: Keystone token User information Scope Roles Service catalog The dashboard uses Django's sessions framework for handling user session data. The following is a list of numerous session back ends available, which are controlled through the SESSION_ENGINE setting in your local_settings.py file: Local Memory Cache Memcached Database Cached Database Cookies In some cases, particularly when a signed cookie session back end is used and, when having many or all services enabled all at once, the size of cookies can reach its limit and the dashboard can fail to log in. One of the reasons for the growth of cookie size is the service catalog. As more services are registered, the bigger the size of the service catalog would be. In such scenarios, to improve the session token management, include the following configuration settings for logging in to the dashboard, especially when using Keystone v3 authentication. 
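Before changing the back end, it can be useful to confirm which session engine is currently configured. A quick check — the local_settings path shown here is the usual location on a Red Hat OpenStack Platform dashboard node and may differ in your deployment:
grep -n SESSION_ENGINE /etc/openstack-dashboard/local_settings /usr/share/openstack-dashboard/openstack_dashboard/settings.py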
In /usr/share/openstack-dashboard/openstack_dashboard/settings.py add the following configuration: In the same file, change SESSION_ENGINE to: Connect to the database service using the mysql command, replacing USER with the user name by which to connect. The USER must be a root user (or at least as a user with the correct permission: create db). Create the Horizon database. Exit the mysql client. Change to the openstack_dashboard directory and sync the database using: You do not need to create a superuser, so answer n to the question. Restart Apache http server. For Red Hat Enterprise Linux: 4.6.1. OpenStack Dashboard - Red Hat Access Tab The Red Hat Access tab, which is part of the OpenStack dashboard, allows you to search for and read articles or solutions from the Red Hat Customer Portal, view logs from your instances and diagnose them, and work with your customer support cases. Figure 4.1. Red Hat Access Tab. Important You must be logged in to the Red Hat Customer Portal in the browser in order to be able to use the functions provided by the Red Hat Access tab. If you are not logged in, you can do so now: Click Log In . Enter your Red Hat login. Enter your Red Hat password. Click Sign in . This is how the form looks: Figure 4.2. Logging in to the Red Hat Customer Portal. If you do not log in now, you will be prompted for your Red Hat login and password when you use one of the functions that require authentication. 4.6.1.1. Search You can search for articles and solutions from Red Hat Customer Portal by entering one or more search keywords. The titles of the relevant articles and solutions will then be displayed. Click on a title to view the given article or solution: Figure 4.3. Example of Search Results on the Red Hat Access Tab. 4.6.1.2. Logs Here you can read logs from your OpenStack instances: Figure 4.4. Instance Logs on the Red Hat Access Tab. Find the instance of your choice in the table. If you have many instances, you can filter them by name, status, image ID, or flavor ID. Click View Log in the Actions column for the instance to check. When an instance log is displayed, you can click Red Hat Diagnose to get recommendations regarding its contents: Figure 4.5. Instance Logs on the Red Hat Access Tab. If none of the recommendations are useful or a genuine problem has been logged, click Open a New Support Case to report the problem to Red Hat Support. 4.6.1.3. Support The last option in the Red Hat Access Tab allows you to search for your support cases at the Red Hat Customer Portal: Figure 4.6. Search for Support Cases. You can also open a new support case by clicking the appropriate button and filling out the form on the following page: Figure 4.7. Open a New Support Case. | [
"Unable to communicate with identity service: [Errno 113] No route to host. (HTTP 400)",
"systemctl status httpd.service",
"systemctl start httpd.service",
"grep admin_token /etc/keystone/keystone.conf admin_token = 91f0866234a64fc299db8f26f8729488",
"http:// IP :35357/ VERSION",
"unset OS_USERNAME OS_TENANT_NAME OS_PASSWORD OS_AUTH_URL",
"openstack endpoint list --os-token=91f0556234a64fc299db8f26f8729488 --os-url=https://osp.lab.local:35357/v3/ --os-identity-api-version 3",
"openstack endpoint delete 2d32fa6feecc49aab5de538bdf7aa018 --os-token=91f0866234a64fc299db8f26f8729488 --os-url=https://osp.lab.local:35357/v3/ --os-identity-api-version 3",
"tcpdump -n -i INTERFACE -e -w FILENAME",
"ip netns exec NAMESPACE_ID bash route -n",
"DATABASES = { default : { ENGINE : django.db.backends.mysql , NAME : horizondb , USER : User Name , PASSWORD : Password , HOST : localhost , } }",
"SESSION_ENGINE = 'django.contrib.sessions.backends.cached_db'",
"mysql -u USER -p",
"mysql > create database horizondb;",
"mysql > exit",
"cd /usr/share/openstack-dashboard/openstack_dashboard ./manage.py syncdb",
"systemctl restart httpd"
]
| https://docs.redhat.com/en/documentation/red_hat_openstack_platform/16.2/html/logging_monitoring_and_troubleshooting_guide/troubleshooting |
Chapter 6. Installing JBoss EAP by using the RPM installation method | Chapter 6. Installing JBoss EAP by using the RPM installation method You can install JBoss EAP by using RPM packages on supported installations of Red Hat Enterprise Linux 8, and Red Hat Enterprise Linux 9. 6.1. Subscribing to the JBoss EAP software repository If you want to install JBoss EAP by using the RPM installation method, you must subscribe to the Red Hat Enterprise Linux Server base software repository, and a minor JBoss EAP repository. Prerequisites You have set up an account on the Red Hat Customer Portal. Your JBoss EAP host is running on a supported operating system. You have subscribed to the Red Hat Enterprise Linux Server base software repository. You have administrator privileges on the server. Procedure Enter the Red Hat Subscription Manager. Important To subscribe to the Red Hat Enterprise Linux server base repository and a minor JBoss EAP repository, enter one of the following commands: Replace EAP_MINOR_VERSION with your intended JBoss EAP minor version. For example, 8.0. Additional resources Red Hat JBoss Enterprise Application Platform JBoss EAP 8 Supported Configurations Setting up an account on the Red Hat customer portal Red Hat Subscription Management How to subscribe through the subscription manager For more information about changing the JAVA_HOME property, see the RPM Service Configuration Properties section of the Configuration Guide. 6.2. Installing JBoss EAP by using the RPM installation method Prerequisites You have set up an account on the Red Hat Customer Portal. Your JBoss EAP host is running on a supported operating system. You have administrator privileges on the server. You have subscribed to the Red Hat Enterprise Linux Server base software repository. You have subscribed to the JBoss EAP8 software repository. Procedure Install JBoss EAP and JDK 17. Install JBoss EAP and JDK 11. JDK 11 is available for Red Hat Enterprise Linux 8 and later: Red Hat Enterprise Linux 8: Note If the specified version of the JDK is not already installed on the system, the groupinstall command automatically installs this version of the JDK. If a different version of the JDK is already installed, the system contains multiple installed JDK versions after you run the preceding command. If multiple JDK versions are installed on your system after you run the groupinstall command, check which JDK version JBoss EAP is using. By default, JBoss EAP uses the system default JDK. Modify the default JDK in either of the following ways: Change the system-wide configuration by using the alternatives command: Change the JDK that JBoss EAP uses by setting the JAVA_HOME property. Your installation is complete. The default EAP_HOME path for the RPM installation is /opt/rh/eap8/root/usr/share/wildfly . Important When you install JBoss EAP by using RPM packages, you cannot configure multiple domain or host controllers on the same machine. Additional resources Setting up the EAP_HOME variable, in the JBoss EAP Installation Guide . Subscribing to a Minor JBoss EAP repository, in the JBoss EAP Installation Guide . For more information about changing the JAVA_HOME property, see the RPM Service Configuration Properties section of the Configuration Guide. 6.3. Configuring JBoss EAP RPM installation as a service on RHEL You can configure the Red Hat Package Manager (RPM) installation to run as a service in Red Hat Enterprise Linux (RHEL).
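As a quick sketch of where this procedure ends up — assuming a standalone server, with the service name explained below — enabling the service and checking it typically looks like:
systemctl enable --now eap8-standalone.service
systemctl status eap8-standalone.service
The documented command in the procedure below uses systemctl enable only; adding --now also starts the service immediately.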
An RPM installation of JBoss EAP installs everything that is required to run JBoss EAP as a service. Run the appropriate command for your RHEL, as demonstrated in this procedure. Replace EAP_SERVICE_NAME with either eap8-standalone for a standalone JBoss EAP server, or eap8-domain for a managed domain. Prerequisites You have installed JBoss EAP as an RPM installation. You have administrator privileges on the server. Procedure For Red Hat Enterprise Linux 8 and later: Additional resources To start or stop an RPM installation of JBoss EAP on demand, see the RPM instructions in the JBoss EAP Configuration Guide. See the RPM service configuration files appendix in the JBoss EAP Configuration Guide for further details and options. 6.4. Changing the software subscription from one JBoss EAP repository to another You can change the software subscription from one JBoss EAP repository to another based on the following conditions: If you are changing to the next minor JBoss EAP version, you can change from a minor repository to another minor repository. For example, you can change from JBoss EAP 8.0 to JBoss EAP 8.1, but you cannot change from JBoss EAP 8.0 to JBoss EAP 8.2. Prerequisites You have installed JBoss EAP by using the RPM installation method. You have chosen a repository. You have applied all the applicable updates to your JBoss EAP application. You have administrator privileges on the server. Procedure Unsubscribe from the existing repository and subscribe to the new repository by using the Red Hat Subscription Manager. 6.5. Uninstalling JBoss EAP using the RPM installation method Note To avoid potential issues, do not uninstall a JBoss EAP installation that you have installed from RPM packages. Because of the nature of RPM package management, uninstalling JBoss EAP might not completely remove all installed packages and dependencies. Uninstalling JBoss EAP might also leave the system in an inconsistent state because of missing package dependencies. | [
"subscription-manager repos --enable=jb-eap-EAP_MINOR_VERSION-for-rhel-RHEL_VERSION-ARCH-rpms",
"dnf groupinstall jboss-eap8",
"dnf groupinstall jboss-eap8-jdk11",
"alternatives --config java",
"systemctl enable EAP_SERVICE_NAME.service",
"subscription-manager repos --disable=EXISTING_REPOSITORY --enable=NEW_REPOSITORY"
]
| https://docs.redhat.com/en/documentation/red_hat_jboss_enterprise_application_platform/8.0/html/red_hat_jboss_enterprise_application_platform_installation_methods/assembly_installing-jboss-eap-using-the-rpm-installtion-method_default |
7.62. gcc | 7.62. gcc 7.62.1. RHBA-2013:0420 - gcc bug fix update Updated gcc packages that fix several bugs are now available for Red Hat Enterprise Linux 6. The gcc packages provide compilers for C, C++, Java, Fortran, Objective C, and Ada 95 GNU, as well as related support libraries. Bug Fixes BZ#801144 Due to the incorrect size of a pointer in GCC GNAT code, GNAT used an incorrect function of the libgcc library when compiling 32-bit Ada binaries on PowerPC architecture. Consequently, these programs could not be linked and the compilation failed. This update fixes the problem so that the sizeof operator now returns the correct size of a pointer, and the appropriate function from libgcc is called. GNAT compiles Ada binaries as expected in this scenario. BZ#808590 The Standard Template Library (STL) contained an incomplete move semantics implementation, which could cause GCC to generate incorrect code. The incorrect headers have been fixed so that GCC now produce the expected code when depending on move semantics. BZ#819100 GCC did not, under certain circumstances, handle generating a CPU instruction sequence that would be independent of indexed addressing on PowerPC architecture. As a consequence, an internal compiler error occurred if the "__builtin_bswap64" built-in function was called with the "-mcpu=power6" option. This update corrects the relevant code so that GCC now generates an alternate instruction sequence that does not depend on indexed addressing in this scenario. BZ#821901 A bug in converting the exception handling region could cause an internal compiler error to occur when compiling profile data with the "-fprofile-use" and "-freorder-basic-blocks-and-partition" options. This update fixes the erroneous code and the compilation of profile data now proceeds as expected in this scenario. BZ# 826882 Previously, GCC did not properly handle certain situations when an enumeration was type cast using the static_cast operator. Consequently, an enumeration item could have been assigned an integer value greater than the highest value of the enumeration's range. If the compiled code contained testing conditions using such enumerations, those checks were incorrectly removed from the code during code optimization. With this update, GCC was modified to handle enumeration type casting properly and C++ now no longer removes the mentioned checks. BZ#831832 Previously, when comparing the trees equality, the members of a union or structure were not handled properly in the C++ compiler. This led to an internal compiler error. This update modifies GCC so that unions and structures are now handled correctly and code that uses tree equality comparing is now compiled successfully. BZ# 867878 GCC previously processed the "srak" instructions without the z196 flag, which enables a compiler to work with these instructions. Consequently, some binaries, such as Firefox, could not be compiled on IBM System z and IBM S/390 architectures. With this update, GCC has been modified to support the z196 flag for the srak instructions, and binaries requiring these instructions can now be compiled successfully on IBM System z and IBM S/390 architectures. All users of gcc are advised to upgrade to these updated packages, which fix these bugs. | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/6.4_technical_notes/gcc |
8.242. tomcatjss | 8.242. tomcatjss 8.242.1. RHBA-2014:1550 - tomcatjss bug fix update An updated tomcatjss package that fixes one bug is now available for Red Hat Enterprise Linux 6. The tomcatjss package provides a Java Secure Socket Extension (JSSE) implementation using Java Security Services (JSS) for Tomcat 6. Bug Fix BZ# 1084224 Previously, the tomcatjss package missed the strictCiphers implementation. Consequently, the user could not disable weaker ciphers and enable the stronger ciphers. With this update, strictCiphers has been implemented to tomcatjss. Users of tomcatjss are advised to upgrade to this updated package, which fixes this bug. | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/6.6_technical_notes/tomcatjss |