title | content | commands | url
---|---|---|---|
2.7. GNOME Power Manager | 2.7. GNOME Power Manager GNOME Power Manager is a daemon that is installed as part of the GNOME desktop environment. Much of the power-management functionality that GNOME Power Manager provided in earlier versions of Red Hat Enterprise Linux has become part of the DeviceKit-power tool in Red Hat Enterprise Linux 6, renamed to UPower in Red Hat Enterprise Linux 7 (see Section 2.6, "UPower" ). However, GNOME Power Manager remains a front end for that functionality. Through an applet in the system tray, GNOME Power Manager notifies you of changes in your system's power status; for example, a change from battery to AC power. It also reports battery status, and warns you when battery power is low. | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/power_management_guide/gnome-power-manager |
Chapter 1. Release notes for the Red Hat OpenShift distributed tracing platform | Chapter 1. Release notes for the Red Hat OpenShift distributed tracing platform 1.1. Distributed tracing overview As a service owner, you can use distributed tracing to instrument your services to gather insights into your service architecture. You can use the Red Hat OpenShift distributed tracing platform for monitoring, network profiling, and troubleshooting the interaction between components in modern, cloud-native, microservices-based applications. With the distributed tracing platform, you can perform the following functions: Monitor distributed transactions Optimize performance and latency Perform root cause analysis You can use the Red Hat OpenShift distributed tracing platform (Tempo) in combination with the Red Hat build of OpenTelemetry . Note Only supported features are documented. Undocumented features are currently unsupported. If you need assistance with a feature, contact Red Hat's support. 1.2. Release notes for Red Hat OpenShift distributed tracing platform 3.5 This release of the Red Hat OpenShift distributed tracing platform includes the Red Hat OpenShift distributed tracing platform (Tempo) and the deprecated Red Hat OpenShift distributed tracing platform (Jaeger). 1.2.1. Red Hat OpenShift distributed tracing platform (Tempo) The Red Hat OpenShift distributed tracing platform (Tempo) 3.5 is provided through the Tempo Operator 0.15.3 . Note The Red Hat OpenShift distributed tracing platform (Tempo) 3.5 is based on the open source Grafana Tempo 2.7.1. 1.2.1.1. New features and enhancements This update introduces the following enhancements: With this update, you can configure the Tempo backend services to report the internal tracing data by using the OpenTelemetry Protocol (OTLP). With this update, the traces.span.metrics namespace becomes the default metrics namespace on which the Jaeger query retrieves the Prometheus metrics. The purpose of this change is to provide compatibility with the OpenTelemetry Collector version 0.109.0 and later where this namespace is the default. Customers who are still using an earlier OpenTelemetry Collector version can configure this namespace by adding the following field and value: spec.template.queryFrontend.jaegerQuery.monitorTab.redMetricsNamespace: "" . 1.2.1.2. Bug fixes This update introduces the following bug fix: Before this update, the Tempo Operator failed when the TempoStack custom resource had the spec.storage.tls.enabled field set to true and used an Amazon S3 object store with the Security Token Service (STS) authentication. With this update, such a TempoStack custom resource configuration does not cause the Tempo Operator to fail. 1.2.2. Red Hat OpenShift distributed tracing platform (Jaeger) The Red Hat OpenShift distributed tracing platform (Jaeger) 3.5 is the last release of the Red Hat OpenShift distributed tracing platform (Jaeger) that Red Hat plans to support. In the Red Hat OpenShift distributed tracing platform 3.5, Jaeger and support for Elasticsearch remain deprecated. Warning Support for the Red Hat OpenShift distributed tracing platform (Jaeger) ends on November 3, 2025. The Red Hat OpenShift distributed tracing platform Operator (Jaeger) will be removed from the redhat-operators catalog on November 3, 2025. You must migrate to the Red Hat build of OpenTelemetry Operator and the Tempo Operator for distributed tracing collection and storage. 
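As an illustration of the metrics namespace setting described in the 3.5 enhancements above, the following minimal TempoStack sketch shows where the spec.template.queryFrontend.jaegerQuery.monitorTab.redMetricsNamespace field is placed. The instance name, namespace, and enabled flags are illustrative assumptions rather than values taken from this release note:

apiVersion: tempo.grafana.com/v1alpha1
kind: TempoStack
metadata:
  name: sample            # hypothetical instance name
  namespace: tracing      # hypothetical namespace
spec:
  template:
    queryFrontend:
      jaegerQuery:
        enabled: true               # assumed: Jaeger query front end enabled
        monitorTab:
          enabled: true             # assumed: monitor tab enabled
          redMetricsNamespace: ""   # empty value keeps compatibility with OpenTelemetry Collector versions earlier than 0.109.0

Setting the field to an empty string restores the earlier behavior described above; omit it to accept the traces.span.metrics default.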
For more information, see the following resources: Migrating (Red Hat build of OpenTelemetry documentation) Installing (Red Hat build of OpenTelemetry documentation) Installing (distributed tracing platform (Tempo) documentation) Jaeger Deprecation and Removal in OpenShift (Red Hat Knowledgebase solution) Note The Red Hat OpenShift distributed tracing platform (Jaeger) 3.5 is based on the open source Jaeger release 1.65.0. The Red Hat OpenShift distributed tracing platform (Jaeger) 3.5 is provided through the Red Hat OpenShift distributed tracing platform Operator 1.65.0 . Important Jaeger does not use FIPS validated cryptographic modules. 1.2.2.1. Support for the OpenShift Elasticsearch Operator The Red Hat OpenShift distributed tracing platform (Jaeger) 3.5 is supported for use with the OpenShift Elasticsearch Operator 5.6, 5.7, and 5.8. Additional resources Jaeger Deprecation and Removal in OpenShift (Red Hat Knowledgebase) Migrating (Red Hat build of OpenTelemetry documentation) Installing (Red Hat build of OpenTelemetry documentation) Installing (Distributed tracing platform (Tempo) documentation) 1.2.2.2. Known issues There are currently known issues: Currently, Apache Spark is not supported. Currently, the streaming deployment via AMQ/Kafka is not supported on the IBM Z and IBM Power architectures. 1.3. Release notes for Red Hat OpenShift distributed tracing platform 3.4 This release of the Red Hat OpenShift distributed tracing platform includes the Red Hat OpenShift distributed tracing platform (Tempo) and the deprecated Red Hat OpenShift distributed tracing platform (Jaeger). 1.3.1. CVEs This release fixes the following CVEs: CVE-2024-21536 CVE-2024-43796 CVE-2024-43799 CVE-2024-43800 CVE-2024-45296 CVE-2024-45590 CVE-2024-45811 CVE-2024-45812 CVE-2024-47068 Cross-site Scripting (XSS) in serialize-javascript 1.3.2. Red Hat OpenShift distributed tracing platform (Tempo) The Red Hat OpenShift distributed tracing platform (Tempo) 3.4 is provided through the Tempo Operator 0.14.1 . Note The Red Hat OpenShift distributed tracing platform (Tempo) 3.4 is based on the open source Grafana Tempo 2.6.1. 1.3.2.1. New features and enhancements This update introduces the following enhancements: The monitor tab in the Jaeger UI for TempoStack instances uses a new default metrics namespace: traces.span.metrics . Before this update, the Jaeger UI used an empty namespace. The new traces.span.metrics namespace default is also used by the OpenTelemetry Collector 0.113.0. You can set the empty value for the metrics namespace by using the following field in the TempoStack custom resource: spec.template.queryFrontend.monitorTab.redMetricsNamespace: "" . Warning This is a breaking change. If you are using both the Red Hat OpenShift distributed tracing platform (Tempo) and Red Hat build of OpenTelemetry, you must upgrade to the Red Hat build of OpenTelemetry 3.4 before upgrading to the Red Hat OpenShift distributed tracing platform (Tempo) 3.4. New and optional spec.timeout field in the TempoStack and TempoMonolithic custom resource definitions for configuring one timeout value for all components. The timeout value is set to 30 seconds, 30s , by default. Warning This is a breaking change. 1.3.2.2. Bug fixes This update introduces the following bug fixes: Before this update, the distributed tracing platform (Tempo) failed on the IBM Z ( s390x ) architecture. With this update, the distributed tracing platform (Tempo) is available for the IBM Z ( s390x ) architecture. 
( TRACING-3545 ) Before this update, the distributed tracing platform (Tempo) failed on clusters with non-private networks. With this update, you can deploy the distributed tracing platform (Tempo) on clusters with non-private networks. ( TRACING-4507 ) Before this update, the Jaeger UI might fail due to reaching a trace quantity limit, resulting in the 504 Gateway Timeout error in the tempo-query logs. After this update, the issue is resolved by introducing two optional fields in the tempostack or tempomonolithic custom resource: New spec.timeout field for configuring the timeout. New spec.template.queryFrontend.jaegerQuery.findTracesConcurrentRequests field for improving the query performance of the Jaeger UI. Tip One querier can handle up to 20 concurrent queries by default. Increasing the number of concurrent queries further is achieved by scaling up the querier instances. 1.3.3. Red Hat OpenShift distributed tracing platform (Jaeger) The Red Hat OpenShift distributed tracing platform (Jaeger) 3.4 is provided through the Red Hat OpenShift distributed tracing platform Operator 1.62.0 . Note The Red Hat OpenShift distributed tracing platform (Jaeger) 3.4 is based on the open source Jaeger release 1.62.0. Important Jaeger does not use FIPS validated cryptographic modules. 1.3.3.1. Support for the OpenShift Elasticsearch Operator The Red Hat OpenShift distributed tracing platform (Jaeger) 3.4 is supported for use with the OpenShift Elasticsearch Operator 5.6, 5.7, and 5.8. 1.3.3.2. Deprecated functionality In the Red Hat OpenShift distributed tracing platform 3.4, Jaeger and support for Elasticsearch remain deprecated, and both are planned to be removed in a future release. Red Hat will provide support for these components and fixes for CVEs and bugs with critical and higher severity during the current release lifecycle, but these components will no longer receive feature enhancements. The Red Hat OpenShift distributed tracing platform Operator (Jaeger) will be removed from the redhat-operators catalog in a future release. For more information, see the Red Hat Knowledgebase solution Jaeger Deprecation and Removal in OpenShift . You must migrate to the Red Hat build of OpenTelemetry Operator and the Tempo Operator for distributed tracing collection and storage. For more information, see Migrating in the Red Hat build of OpenTelemetry documentation, Installing in the Red Hat build of OpenTelemetry documentation, and Installing in the distributed tracing platform (Tempo) documentation. Additional resources Jaeger Deprecation and Removal in OpenShift (Red Hat Knowledgebase) Migrating (Red Hat build of OpenTelemetry documentation) Installing (Red Hat build of OpenTelemetry documentation) Installing (Distributed tracing platform (Tempo) documentation) 1.3.3.3. Bug fixes This update introduces the following bug fix: Before this update, the Jaeger UI could fail with the 502 - Bad Gateway Timeout error. After this update, you can configure timeout in ingress annotations. ( TRACING-4238 ) 1.3.3.4. Known issues There are currently known issues: Currently, Apache Spark is not supported. Currently, the streaming deployment via AMQ/Kafka is not supported on the IBM Z and IBM Power architectures. 1.4. 
Release notes for Red Hat OpenShift distributed tracing platform 3.3.1 The Red Hat OpenShift distributed tracing platform 3.3.1 is a maintenance release with no changes because the Red Hat OpenShift distributed tracing platform is bundled with the Red Hat build of OpenTelemetry that is released with a bug fix. This release of the Red Hat OpenShift distributed tracing platform includes the Red Hat OpenShift distributed tracing platform (Tempo) and the deprecated Red Hat OpenShift distributed tracing platform (Jaeger). 1.4.1. Red Hat OpenShift distributed tracing platform (Tempo) The Red Hat OpenShift distributed tracing platform (Tempo) is provided through the Tempo Operator. The Red Hat OpenShift distributed tracing platform (Tempo) 3.3.1 is based on the open source Grafana Tempo 2.5.0. 1.4.1.1. Known issues There is currently a known issue: Currently, the distributed tracing platform (Tempo) fails on the IBM Z ( s390x ) architecture. ( TRACING-3545 ) 1.4.2. Red Hat OpenShift distributed tracing platform (Jaeger) The Red Hat OpenShift distributed tracing platform (Jaeger) is provided through the Red Hat OpenShift distributed tracing platform Operator. The Red Hat OpenShift distributed tracing platform (Jaeger) 3.3.1 is based on the open source Jaeger release 1.57.0. Important Jaeger does not use FIPS validated cryptographic modules. 1.4.2.1. Support for the OpenShift Elasticsearch Operator The Red Hat OpenShift distributed tracing platform (Jaeger) 3.3.1 is supported for use with the OpenShift Elasticsearch Operator 5.6, 5.7, and 5.8. 1.4.2.2. Deprecated functionality In the Red Hat OpenShift distributed tracing platform 3.3.1, Jaeger and support for Elasticsearch remain deprecated, and both are planned to be removed in a future release. Red Hat will provide support for these components and fixes for CVEs and bugs with critical and higher severity during the current release lifecycle, but these components will no longer receive feature enhancements. The Red Hat OpenShift distributed tracing platform Operator (Jaeger) will be removed from the redhat-operators catalog in a future release. Users must migrate to the Tempo Operator and the Red Hat build of OpenTelemetry for distributed tracing collection and storage. 1.4.2.3. Known issues There are currently known issues: Currently, Apache Spark is not supported. Currently, the streaming deployment via AMQ/Kafka is not supported on the IBM Z and IBM Power architectures. 1.5. Release notes for Red Hat OpenShift distributed tracing platform 3.3 This release of the Red Hat OpenShift distributed tracing platform includes the Red Hat OpenShift distributed tracing platform (Tempo) and the deprecated Red Hat OpenShift distributed tracing platform (Jaeger). 1.5.1. Red Hat OpenShift distributed tracing platform (Tempo) The Red Hat OpenShift distributed tracing platform (Tempo) is provided through the Tempo Operator. The Red Hat OpenShift distributed tracing platform (Tempo) 3.3 is based on the open source Grafana Tempo 2.5.0. 1.5.1.1. New features and enhancements This update introduces the following enhancements: Support for securing the Jaeger UI and Jaeger APIs with the OpenShift OAuth Proxy. ( TRACING-4108 ) Support for using the service serving certificates, which are generated by OpenShift Container Platform, on ingestion APIs when multitenancy is disabled. ( TRACING-3954 ) Support for ingesting by using the OTLP/HTTP protocol when multitenancy is enabled. ( TRACING-4171 ) Support for the AWS S3 Secure Token authentication. 
( TRACING-4176 ) Support for automatically reloading certificates. ( TRACING-4185 ) Support for configuring the duration for which service names are available for querying. ( TRACING-4214 ) 1.5.1.2. Bug fixes This update introduces the following bug fixes: Before this update, storage certificate names did not support dots. With this update, storage certificate name can contain dots. ( TRACING-4348 ) Before this update, some users had to select a certificate when accessing the gateway route. With this update, there is no prompt to select a certificate. ( TRACING-4431 ) Before this update, the gateway component was not scalable. With this update, the gateway component is scalable. ( TRACING-4497 ) Before this update the Jaeger UI might fail with the 504 Gateway Time-out error when accessed via a route. With this update, users can specify route annotations for increasing timeout, such as haproxy.router.openshift.io/timeout: 3m , when querying large data sets. ( TRACING-4511 ) 1.5.1.3. Known issues There is currently a known issue: Currently, the distributed tracing platform (Tempo) fails on the IBM Z ( s390x ) architecture. ( TRACING-3545 ) 1.5.2. Red Hat OpenShift distributed tracing platform (Jaeger) The Red Hat OpenShift distributed tracing platform (Jaeger) is provided through the Red Hat OpenShift distributed tracing platform Operator. The Red Hat OpenShift distributed tracing platform (Jaeger) 3.3 is based on the open source Jaeger release 1.57.0. Important Jaeger does not use FIPS validated cryptographic modules. 1.5.2.1. Support for the OpenShift Elasticsearch Operator The Red Hat OpenShift distributed tracing platform (Jaeger) 3.3 is supported for use with the OpenShift Elasticsearch Operator 5.6, 5.7, and 5.8. 1.5.2.2. Deprecated functionality In the Red Hat OpenShift distributed tracing platform 3.3, Jaeger and support for Elasticsearch remain deprecated, and both are planned to be removed in a future release. Red Hat will provide support for these components and fixes for CVEs and bugs with critical and higher severity during the current release lifecycle, but these components will no longer receive feature enhancements. The Red Hat OpenShift distributed tracing platform Operator (Jaeger) will be removed from the redhat-operators catalog in a future release. Users must migrate to the Tempo Operator and the Red Hat build of OpenTelemetry for distributed tracing collection and storage. 1.5.2.3. Known issues There are currently known issues: Currently, Apache Spark is not supported. Currently, the streaming deployment via AMQ/Kafka is not supported on the IBM Z and IBM Power architectures. 1.6. Release notes for Red Hat OpenShift distributed tracing platform 3.2.2 This release of the Red Hat OpenShift distributed tracing platform includes the Red Hat OpenShift distributed tracing platform (Tempo) and the deprecated Red Hat OpenShift distributed tracing platform (Jaeger). 1.6.1. CVEs This release fixes the following CVEs: CVE-2023-2953 CVE-2024-28182 1.6.2. Red Hat OpenShift distributed tracing platform (Tempo) The Red Hat OpenShift distributed tracing platform (Tempo) is provided through the Tempo Operator. 1.6.2.1. Bug fixes This update introduces the following bug fix: Before this update, secrets were perpetually generated on OpenShift Container Platform 4.16 because the operator tried to reconcile a new openshift.io/internal-registry-pull-secret-ref annotation for service accounts, causing a loop. With this update, the operator ignores this new annotation. 
( TRACING-4434 ) 1.6.2.2. Known issues There is currently a known issue: Currently, the distributed tracing platform (Tempo) fails on the IBM Z ( s390x ) architecture. ( TRACING-3545 ) 1.6.3. Red Hat OpenShift distributed tracing platform (Jaeger) The Red Hat OpenShift distributed tracing platform (Jaeger) is provided through the Red Hat OpenShift distributed tracing platform Operator. Important Jaeger does not use FIPS validated cryptographic modules. 1.6.3.1. Known issues There are currently known issues: Currently, Apache Spark is not supported. Currently, the streaming deployment via AMQ/Kafka is not supported on the IBM Z and IBM Power architectures. 1.7. Release notes for Red Hat OpenShift distributed tracing platform 3.2.1 This release of the Red Hat OpenShift distributed tracing platform includes the Red Hat OpenShift distributed tracing platform (Tempo) and the deprecated Red Hat OpenShift distributed tracing platform (Jaeger). 1.7.1. CVEs This release fixes CVE-2024-25062 . 1.7.2. Red Hat OpenShift distributed tracing platform (Tempo) The Red Hat OpenShift distributed tracing platform (Tempo) is provided through the Tempo Operator. 1.7.2.1. Known issues There is currently a known issue: Currently, the distributed tracing platform (Tempo) fails on the IBM Z ( s390x ) architecture. ( TRACING-3545 ) 1.7.3. Red Hat OpenShift distributed tracing platform (Jaeger) The Red Hat OpenShift distributed tracing platform (Jaeger) is provided through the Red Hat OpenShift distributed tracing platform Operator. Important Jaeger does not use FIPS validated cryptographic modules. 1.7.3.1. Known issues There are currently known issues: Currently, Apache Spark is not supported. Currently, the streaming deployment via AMQ/Kafka is not supported on the IBM Z and IBM Power architectures. 1.8. Release notes for Red Hat OpenShift distributed tracing platform 3.2 This release of the Red Hat OpenShift distributed tracing platform includes the Red Hat OpenShift distributed tracing platform (Tempo) and the deprecated Red Hat OpenShift distributed tracing platform (Jaeger). 1.8.1. Red Hat OpenShift distributed tracing platform (Tempo) The Red Hat OpenShift distributed tracing platform (Tempo) is provided through the Tempo Operator. 1.8.1.1. Technology Preview features This update introduces the following Technology Preview feature: Support for the Tempo monolithic deployment. Important The Tempo monolithic deployment is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope . 1.8.1.2. New features and enhancements This update introduces the following enhancements: Red Hat OpenShift distributed tracing platform (Tempo) 3.2 is based on the open source Grafana Tempo 2.4.1. Allowing the overriding of resources per component. 1.8.1.3. Bug fixes This update introduces the following bug fixes: Before this update, the Jaeger UI only displayed services that sent traces in the last 15 minutes. 
With this update, the availability of the service and operation names can be configured by using the following field: spec.template.queryFrontend.jaegerQuery.servicesQueryDuration . ( TRACING-3139 ) Before this update, the query-frontend pod might get stopped when out-of-memory (OOM) as a result of searching a large trace. With this update, resource limits can be set to prevent this issue. ( TRACING-4009 ) 1.8.1.4. Known issues There is currently a known issue: Currently, the distributed tracing platform (Tempo) fails on the IBM Z ( s390x ) architecture. ( TRACING-3545 ) 1.8.2. Red Hat OpenShift distributed tracing platform (Jaeger) The Red Hat OpenShift distributed tracing platform (Jaeger) is provided through the Red Hat OpenShift distributed tracing platform Operator. Important Jaeger does not use FIPS validated cryptographic modules. 1.8.2.1. Support for OpenShift Elasticsearch Operator Red Hat OpenShift distributed tracing platform (Jaeger) 3.2 is supported for use with the OpenShift Elasticsearch Operator 5.6, 5.7, and 5.8. 1.8.2.2. Deprecated functionality In the Red Hat OpenShift distributed tracing platform 3.2, Jaeger and support for Elasticsearch remain deprecated, and both are planned to be removed in a future release. Red Hat will provide support for these components and fixes for CVEs and bugs with critical and higher severity during the current release lifecycle, but these components will no longer receive feature enhancements. The Tempo Operator and the Red Hat build of OpenTelemetry are the preferred Operators for distributed tracing collection and storage. Users must adopt the OpenTelemetry and Tempo distributed tracing stack because it is the stack to be enhanced going forward. In the Red Hat OpenShift distributed tracing platform 3.2, the Jaeger agent is deprecated and planned to be removed in the following release. Red Hat will provide bug fixes and support for the Jaeger agent during the current release lifecycle, but the Jaeger agent will no longer receive enhancements and will be removed. The OpenTelemetry Collector provided by the Red Hat build of OpenTelemetry is the preferred Operator for injecting the trace collector agent. 1.8.2.3. New features and enhancements This update introduces the following enhancements for the distributed tracing platform (Jaeger): Red Hat OpenShift distributed tracing platform (Jaeger) 3.2 is based on the open source Jaeger release 1.57.0. 1.8.2.4. Known issues There is currently a known issue: Currently, Apache Spark is not supported. Currently, the streaming deployment via AMQ/Kafka is not supported on the IBM Z and IBM Power architectures. 1.9. Release notes for Red Hat OpenShift distributed tracing platform 3.1.1 This release of the Red Hat OpenShift distributed tracing platform includes the Red Hat OpenShift distributed tracing platform (Tempo) and the deprecated Red Hat OpenShift distributed tracing platform (Jaeger). 1.9.1. CVEs This release fixes CVE-2023-39326 . 1.9.2. Red Hat OpenShift distributed tracing platform (Tempo) The Red Hat OpenShift distributed tracing platform (Tempo) is provided through the Tempo Operator. 1.9.2.1. Known issues There are currently known issues: Currently, when used with the Tempo Operator, the Jaeger UI only displays services that have sent traces in the last 15 minutes. For services that did not send traces in the last 15 minutes, traces are still stored but not displayed in the Jaeger UI. 
( TRACING-3139 ) Currently, the distributed tracing platform (Tempo) fails on the IBM Z ( s390x ) architecture. ( TRACING-3545 ) 1.9.3. Red Hat OpenShift distributed tracing platform (Jaeger) The Red Hat OpenShift distributed tracing platform (Jaeger) is provided through the Red Hat OpenShift distributed tracing platform Operator. Important Jaeger does not use FIPS validated cryptographic modules. 1.9.3.1. Support for OpenShift Elasticsearch Operator Red Hat OpenShift distributed tracing platform (Jaeger) 3.1.1 is supported for use with the OpenShift Elasticsearch Operator 5.6, 5.7, and 5.8. 1.9.3.2. Deprecated functionality In the Red Hat OpenShift distributed tracing platform 3.1.1, Jaeger and support for Elasticsearch remain deprecated, and both are planned to be removed in a future release. Red Hat will provide critical and above CVE bug fixes and support for these components during the current release lifecycle, but these components will no longer receive feature enhancements. In the Red Hat OpenShift distributed tracing platform 3.1.1, Tempo provided by the Tempo Operator and the OpenTelemetry Collector provided by the Red Hat build of OpenTelemetry are the preferred Operators for distributed tracing collection and storage. The OpenTelemetry and Tempo distributed tracing stack is to be adopted by all users because this will be the stack that will be enhanced going forward. 1.9.3.3. Known issues There are currently known issues: Currently, Apache Spark is not supported. Currently, the streaming deployment via AMQ/Kafka is not supported on the IBM Z and IBM Power architectures. 1.10. Release notes for Red Hat OpenShift distributed tracing platform 3.1 This release of the Red Hat OpenShift distributed tracing platform includes the Red Hat OpenShift distributed tracing platform (Tempo) and the deprecated Red Hat OpenShift distributed tracing platform (Jaeger). 1.10.1. Red Hat OpenShift distributed tracing platform (Tempo) The Red Hat OpenShift distributed tracing platform (Tempo) is provided through the Tempo Operator. 1.10.1.1. New features and enhancements This update introduces the following enhancements for the distributed tracing platform (Tempo): Red Hat OpenShift distributed tracing platform (Tempo) 3.1 is based on the open source Grafana Tempo 2.3.1. Support for cluster-wide proxy environments. Support for TraceQL to Gateway component. 1.10.1.2. Bug fixes This update introduces the following bug fixes for the distributed tracing platform (Tempo): Before this update, when a TempoStack instance was created with the monitorTab enabled in OpenShift Container Platform 4.15, the required tempo-redmetrics-cluster-monitoring-view ClusterRoleBinding was not created. This update resolves the issue by fixing the Operator RBAC for the monitor tab when the Operator is deployed in an arbitrary namespace. ( TRACING-3786 ) Before this update, when a TempoStack instance was created on an OpenShift Container Platform cluster with only an IPv6 networking stack, the compactor and ingestor pods ran in the CrashLoopBackOff state, resulting in multiple errors. This update provides support for IPv6 clusters.( TRACING-3226 ) 1.10.1.3. Known issues There are currently known issues: Currently, when used with the Tempo Operator, the Jaeger UI only displays services that have sent traces in the last 15 minutes. For services that did not send traces in the last 15 minutes, traces are still stored but not displayed in the Jaeger UI. 
( TRACING-3139 ) Currently, the distributed tracing platform (Tempo) fails on the IBM Z ( s390x ) architecture. ( TRACING-3545 ) 1.10.2. Red Hat OpenShift distributed tracing platform (Jaeger) The Red Hat OpenShift distributed tracing platform (Jaeger) is provided through the Red Hat OpenShift distributed tracing platform Operator. Important Jaeger does not use FIPS validated cryptographic modules. 1.10.2.1. Support for OpenShift Elasticsearch Operator Red Hat OpenShift distributed tracing platform (Jaeger) 3.1 is supported for use with the OpenShift Elasticsearch Operator 5.6, 5.7, and 5.8. 1.10.2.2. Deprecated functionality In the Red Hat OpenShift distributed tracing platform 3.1, Jaeger and support for Elasticsearch remain deprecated, and both are planned to be removed in a future release. Red Hat will provide critical and above CVE bug fixes and support for these components during the current release lifecycle, but these components will no longer receive feature enhancements. In the Red Hat OpenShift distributed tracing platform 3.1, Tempo provided by the Tempo Operator and the OpenTelemetry Collector provided by the Red Hat build of OpenTelemetry are the preferred Operators for distributed tracing collection and storage. The OpenTelemetry and Tempo distributed tracing stack is to be adopted by all users because this will be the stack that will be enhanced going forward. 1.10.2.3. New features and enhancements This update introduces the following enhancements for the distributed tracing platform (Jaeger): Red Hat OpenShift distributed tracing platform (Jaeger) 3.1 is based on the open source Jaeger release 1.53.0. 1.10.2.4. Bug fixes This update introduces the following bug fix for the distributed tracing platform (Jaeger): Before this update, the connection target URL for the jaeger-agent container in the jaeger-query pod was overwritten with another namespace URL in OpenShift Container Platform 4.13. This was caused by a bug in the sidecar injection code in the jaeger-operator , causing nondeterministic jaeger-agent injection. With this update, the Operator prioritizes the Jaeger instance from the same namespace as the target deployment. ( TRACING-3722 ) 1.10.2.5. Known issues There are currently known issues: Currently, Apache Spark is not supported. Currently, the streaming deployment via AMQ/Kafka is not supported on the IBM Z and IBM Power architectures. 1.11. Release notes for Red Hat OpenShift distributed tracing platform 3.0 1.11.1. Component versions in the Red Hat OpenShift distributed tracing platform 3.0 Operator Component Version Red Hat OpenShift distributed tracing platform (Jaeger) Jaeger 1.51.0 Red Hat OpenShift distributed tracing platform (Tempo) Tempo 2.3.0 1.11.2. Red Hat OpenShift distributed tracing platform (Jaeger) 1.11.2.1. Deprecated functionality In the Red Hat OpenShift distributed tracing platform 3.0, Jaeger and support for Elasticsearch are deprecated, and both are planned to be removed in a future release. Red Hat will provide critical and above CVE bug fixes and support for these components during the current release lifecycle, but these components will no longer receive feature enhancements. In the Red Hat OpenShift distributed tracing platform 3.0, Tempo provided by the Tempo Operator and the OpenTelemetry Collector provided by the Red Hat build of OpenTelemetry are the preferred Operators for distributed tracing collection and storage. 
The OpenTelemetry and Tempo distributed tracing stack is to be adopted by all users because this will be the stack that will be enhanced going forward. 1.11.2.2. New features and enhancements This update introduces the following enhancements for the distributed tracing platform (Jaeger): Support for the ARM architecture. Support for cluster-wide proxy environments. 1.11.2.3. Bug fixes This update introduces the following bug fix for the distributed tracing platform (Jaeger): Before this update, the Red Hat OpenShift distributed tracing platform (Jaeger) Operator used other images than relatedImages . This caused the ImagePullBackOff error in disconnected network environments when launching the jaeger pod because the oc adm catalog mirror command mirrors images specified in relatedImages . This update provides support for disconnected environments when using the oc adm catalog mirror CLI command. ( TRACING-3546 ) 1.11.2.4. Known issues There is currently a known issue: Currently, Apache Spark is not supported. Currently, the streaming deployment via AMQ/Kafka is not supported on the IBM Z and IBM Power architectures. 1.11.3. Red Hat OpenShift distributed tracing platform (Tempo) 1.11.3.1. New features and enhancements This update introduces the following enhancements for the distributed tracing platform (Tempo): Support for the ARM architecture. Support for span request count, duration, and error count (RED) metrics. The metrics can be visualized in the Jaeger console deployed as part of Tempo or in the web console in the Observe menu. 1.11.3.2. Bug fixes This update introduces the following bug fixes for the distributed tracing platform (Tempo): Before this update, the TempoStack CRD was not accepting custom CA certificate despite the option to choose CA certificates. This update fixes support for the custom TLS CA option for connecting to object storage. ( TRACING-3462 ) Before this update, when mirroring the Red Hat OpenShift distributed tracing platform Operator images to a mirror registry for use in a disconnected cluster, the related Operator images for tempo , tempo-gateway , opa-openshift , and tempo-query were not mirrored. This update fixes support for disconnected environments when using the oc adm catalog mirror CLI command. ( TRACING-3523 ) Before this update, the query frontend service of the Red Hat OpenShift distributed tracing platform was using internal mTLS when gateway was not deployed. This caused endpoint failure errors. This update fixes mTLS when Gateway is not deployed. ( TRACING-3510 ) 1.11.3.3. Known issues There are currently known issues: Currently, when used with the Tempo Operator, the Jaeger UI only displays services that have sent traces in the last 15 minutes. For services that did not send traces in the last 15 minutes, traces are still stored but not displayed in the Jaeger UI. ( TRACING-3139 ) Currently, the distributed tracing platform (Tempo) fails on the IBM Z ( s390x ) architecture. ( TRACING-3545 ) 1.12. Release notes for Red Hat OpenShift distributed tracing platform 2.9.2 1.12.1. Component versions in the Red Hat OpenShift distributed tracing platform 2.9.2 Operator Component Version Red Hat OpenShift distributed tracing platform (Jaeger) Jaeger 1.47.0 Red Hat OpenShift distributed tracing platform (Tempo) Tempo 2.1.1 1.12.2. CVEs This release fixes CVE-2023-46234 . 1.12.3. Red Hat OpenShift distributed tracing platform (Jaeger) 1.12.3.1. Known issues There are currently known issues: Apache Spark is not supported. 
The streaming deployment via AMQ/Kafka is unsupported on the IBM Z and IBM Power architectures. 1.12.4. Red Hat OpenShift distributed tracing platform (Tempo) Important The Red Hat OpenShift distributed tracing platform (Tempo) is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope . 1.12.4.1. Known issues There are currently known issues: Currently, the custom TLS CA option is not implemented for connecting to object storage. ( TRACING-3462 ) Currently, when used with the Tempo Operator, the Jaeger UI only displays services that have sent traces in the last 15 minutes. For services that did not send traces in the last 15 minutes, traces are still stored but not displayed in the Jaeger UI. ( TRACING-3139 ) Currently, the distributed tracing platform (Tempo) fails on the IBM Z ( s390x ) architecture. ( TRACING-3545 ) Currently, the Tempo query frontend service must not use internal mTLS when Gateway is not deployed. This issue does not affect the Jaeger Query API. The workaround is to disable mTLS. ( TRACING-3510 ) Workaround Disable mTLS as follows: Open the Tempo Operator ConfigMap for editing by running the following command: USD oc edit configmap tempo-operator-manager-config -n openshift-tempo-operator 1 1 The project where the Tempo Operator is installed. Disable the mTLS in the Operator configuration by updating the YAML file: data: controller_manager_config.yaml: | featureGates: httpEncryption: false grpcEncryption: false builtInCertManagement: enabled: false Restart the Tempo Operator pod by running the following command: USD oc rollout restart deployment.apps/tempo-operator-controller -n openshift-tempo-operator Missing images for running the Tempo Operator in restricted environments. The Red Hat OpenShift distributed tracing platform (Tempo) CSV is missing references to the operand images. ( TRACING-3523 ) Workaround Add the Tempo Operator related images in the mirroring tool to mirror the images to the registry: kind: ImageSetConfiguration apiVersion: mirror.openshift.io/v1alpha2 archiveSize: 20 storageConfig: local: path: /home/user/images mirror: operators: - catalog: registry.redhat.io/redhat/redhat-operator-index:v4.13 packages: - name: tempo-product channels: - name: stable additionalImages: - name: registry.redhat.io/rhosdt/tempo-rhel8@sha256:e4295f837066efb05bcc5897f31eb2bdbd81684a8c59d6f9498dd3590c62c12a - name: registry.redhat.io/rhosdt/tempo-gateway-rhel8@sha256:b62f5cedfeb5907b638f14ca6aaeea50f41642980a8a6f87b7061e88d90fac23 - name: registry.redhat.io/rhosdt/tempo-gateway-opa-rhel8@sha256:8cd134deca47d6817b26566e272e6c3f75367653d589f5c90855c59b2fab01e9 - name: registry.redhat.io/rhosdt/tempo-query-rhel8@sha256:0da43034f440b8258a48a0697ba643b5643d48b615cdb882ac7f4f1f80aad08e 1.13. Release notes for Red Hat OpenShift distributed tracing platform 2.9.1 1.13.1. Component versions in the Red Hat OpenShift distributed tracing platform 2.9.1 Operator Component Version Red Hat OpenShift distributed tracing platform (Jaeger) Jaeger 1.47.0 Red Hat OpenShift distributed tracing platform (Tempo) Tempo 2.1.1 1.13.2. 
CVEs This release fixes CVE-2023-44487 . 1.13.3. Red Hat OpenShift distributed tracing platform (Jaeger) 1.13.3.1. Known issues There are currently known issues: Apache Spark is not supported. The streaming deployment via AMQ/Kafka is unsupported on the IBM Z and IBM Power architectures. 1.13.4. Red Hat OpenShift distributed tracing platform (Tempo) Important The Red Hat OpenShift distributed tracing platform (Tempo) is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope . 1.13.4.1. Known issues There are currently known issues: Currently, the custom TLS CA option is not implemented for connecting to object storage. ( TRACING-3462 ) Currently, when used with the Tempo Operator, the Jaeger UI only displays services that have sent traces in the last 15 minutes. For services that did not send traces in the last 15 minutes, traces are still stored but not displayed in the Jaeger UI. ( TRACING-3139 ) Currently, the distributed tracing platform (Tempo) fails on the IBM Z ( s390x ) architecture. ( TRACING-3545 ) Currently, the Tempo query frontend service must not use internal mTLS when Gateway is not deployed. This issue does not affect the Jaeger Query API. The workaround is to disable mTLS. ( TRACING-3510 ) Workaround Disable mTLS as follows: Open the Tempo Operator ConfigMap for editing by running the following command: USD oc edit configmap tempo-operator-manager-config -n openshift-tempo-operator 1 1 The project where the Tempo Operator is installed. Disable the mTLS in the Operator configuration by updating the YAML file: data: controller_manager_config.yaml: | featureGates: httpEncryption: false grpcEncryption: false builtInCertManagement: enabled: false Restart the Tempo Operator pod by running the following command: USD oc rollout restart deployment.apps/tempo-operator-controller -n openshift-tempo-operator Missing images for running the Tempo Operator in restricted environments. The Red Hat OpenShift distributed tracing platform (Tempo) CSV is missing references to the operand images. ( TRACING-3523 ) Workaround Add the Tempo Operator related images in the mirroring tool to mirror the images to the registry: kind: ImageSetConfiguration apiVersion: mirror.openshift.io/v1alpha2 archiveSize: 20 storageConfig: local: path: /home/user/images mirror: operators: - catalog: registry.redhat.io/redhat/redhat-operator-index:v4.13 packages: - name: tempo-product channels: - name: stable additionalImages: - name: registry.redhat.io/rhosdt/tempo-rhel8@sha256:e4295f837066efb05bcc5897f31eb2bdbd81684a8c59d6f9498dd3590c62c12a - name: registry.redhat.io/rhosdt/tempo-gateway-rhel8@sha256:b62f5cedfeb5907b638f14ca6aaeea50f41642980a8a6f87b7061e88d90fac23 - name: registry.redhat.io/rhosdt/tempo-gateway-opa-rhel8@sha256:8cd134deca47d6817b26566e272e6c3f75367653d589f5c90855c59b2fab01e9 - name: registry.redhat.io/rhosdt/tempo-query-rhel8@sha256:0da43034f440b8258a48a0697ba643b5643d48b615cdb882ac7f4f1f80aad08e 1.14. Release notes for Red Hat OpenShift distributed tracing platform 2.9 1.14.1. 
Component versions in the Red Hat OpenShift distributed tracing platform 2.9 Operator Component Version Red Hat OpenShift distributed tracing platform (Jaeger) Jaeger 1.47.0 Red Hat OpenShift distributed tracing platform (Tempo) Tempo 2.1.1 1.14.2. Red Hat OpenShift distributed tracing platform (Jaeger) 1.14.2.1. Bug fixes Before this update, connection was refused due to a missing gRPC port on the jaeger-query deployment. This issue resulted in transport: Error while dialing: dial tcp :16685: connect: connection refused error message. With this update, the Jaeger Query gRPC port (16685) is successfully exposed on the Jaeger Query service. ( TRACING-3322 ) Before this update, the wrong port was exposed for jaeger-production-query , resulting in refused connection. With this update, the issue is fixed by exposing the Jaeger Query gRPC port (16685) on the Jaeger Query deployment. ( TRACING-2968 ) Before this update, when deploying Service Mesh on single-node OpenShift clusters in disconnected environments, the Jaeger pod frequently went into the Pending state. With this update, the issue is fixed. ( TRACING-3312 ) Before this update, the Jaeger Operator pod restarted with the default memory value due to the reason: OOMKilled error message. With this update, this issue is fixed by removing the resource limits. ( TRACING-3173 ) 1.14.2.2. Known issues There are currently known issues: Apache Spark is not supported. The streaming deployment via AMQ/Kafka is unsupported on the IBM Z and IBM Power architectures. 1.14.3. Red Hat OpenShift distributed tracing platform (Tempo) Important The Red Hat OpenShift distributed tracing platform (Tempo) is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope . 1.14.3.1. New features and enhancements This release introduces the following enhancements for the distributed tracing platform (Tempo): Support the operator maturity Level IV, Deep Insights, which enables upgrading, monitoring, and alerting of the TempoStack instances and the Tempo Operator. Add Ingress and Route configuration for the Gateway. Support the managed and unmanaged states in the TempoStack custom resource. Expose the following additional ingestion protocols in the Distributor service: Jaeger Thrift binary, Jaeger Thrift compact, Jaeger gRPC, and Zipkin. When the Gateway is enabled, only the OpenTelemetry protocol (OTLP) gRPC is enabled. Expose the Jaeger Query gRPC endpoint on the Query Frontend service. Support multitenancy without Gateway authentication and authorization. 1.14.3.2. Bug fixes Before this update, the Tempo Operator was not compatible with disconnected environments. With this update, the Tempo Operator supports disconnected environments. ( TRACING-3145 ) Before this update, the Tempo Operator with TLS failed to start on OpenShift Container Platform. With this update, the mTLS communication is enabled between Tempo components, the Operand starts successfully, and the Jaeger UI is accessible. 
( TRACING-3091 ) Before this update, the resource limits from the Tempo Operator caused error messages such as reason: OOMKilled . With this update, the resource limits for the Tempo Operator are removed to avoid such errors. ( TRACING-3204 ) 1.14.3.3. Known issues There are currently known issues: Currently, the custom TLS CA option is not implemented for connecting to object storage. ( TRACING-3462 ) Currently, when used with the Tempo Operator, the Jaeger UI only displays services that have sent traces in the last 15 minutes. For services that did not send traces in the last 15 minutes, traces are still stored but not displayed in the Jaeger UI. ( TRACING-3139 ) Currently, the distributed tracing platform (Tempo) fails on the IBM Z ( s390x ) architecture. ( TRACING-3545 ) Currently, the Tempo query frontend service must not use internal mTLS when Gateway is not deployed. This issue does not affect the Jaeger Query API. The workaround is to disable mTLS. ( TRACING-3510 ) Workaround Disable mTLS as follows: Open the Tempo Operator ConfigMap for editing by running the following command: USD oc edit configmap tempo-operator-manager-config -n openshift-tempo-operator 1 1 The project where the Tempo Operator is installed. Disable the mTLS in the Operator configuration by updating the YAML file: data: controller_manager_config.yaml: | featureGates: httpEncryption: false grpcEncryption: false builtInCertManagement: enabled: false Restart the Tempo Operator pod by running the following command: USD oc rollout restart deployment.apps/tempo-operator-controller -n openshift-tempo-operator Missing images for running the Tempo Operator in restricted environments. The Red Hat OpenShift distributed tracing platform (Tempo) CSV is missing references to the operand images. ( TRACING-3523 ) Workaround Add the Tempo Operator related images in the mirroring tool to mirror the images to the registry: kind: ImageSetConfiguration apiVersion: mirror.openshift.io/v1alpha2 archiveSize: 20 storageConfig: local: path: /home/user/images mirror: operators: - catalog: registry.redhat.io/redhat/redhat-operator-index:v4.13 packages: - name: tempo-product channels: - name: stable additionalImages: - name: registry.redhat.io/rhosdt/tempo-rhel8@sha256:e4295f837066efb05bcc5897f31eb2bdbd81684a8c59d6f9498dd3590c62c12a - name: registry.redhat.io/rhosdt/tempo-gateway-rhel8@sha256:b62f5cedfeb5907b638f14ca6aaeea50f41642980a8a6f87b7061e88d90fac23 - name: registry.redhat.io/rhosdt/tempo-gateway-opa-rhel8@sha256:8cd134deca47d6817b26566e272e6c3f75367653d589f5c90855c59b2fab01e9 - name: registry.redhat.io/rhosdt/tempo-query-rhel8@sha256:0da43034f440b8258a48a0697ba643b5643d48b615cdb882ac7f4f1f80aad08e 1.15. Release notes for Red Hat OpenShift distributed tracing platform 2.8 1.15.1. Component versions in the Red Hat OpenShift distributed tracing platform 2.8 Operator Component Version Red Hat OpenShift distributed tracing platform (Jaeger) Jaeger 1.42 Red Hat OpenShift distributed tracing platform (Tempo) Tempo 0.1.0 1.15.2. Technology Preview features This release introduces support for the Red Hat OpenShift distributed tracing platform (Tempo) as a Technology Preview feature for Red Hat OpenShift distributed tracing platform. Important The Red Hat OpenShift distributed tracing platform (Tempo) is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. 
These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope . The feature uses version 0.1.0 of the Red Hat OpenShift distributed tracing platform (Tempo) and version 2.0.1 of the upstream distributed tracing platform (Tempo) components. You can use the distributed tracing platform (Tempo) to replace Jaeger so that you can use S3-compatible storage instead of ElasticSearch. Most users who use the distributed tracing platform (Tempo) instead of Jaeger will not notice any difference in functionality because the distributed tracing platform (Tempo) supports the same ingestion and query protocols as Jaeger and uses the same user interface. If you enable this Technology Preview feature, note the following limitations of the current implementation: The distributed tracing platform (Tempo) currently does not support disconnected installations. ( TRACING-3145 ) When you use the Jaeger user interface (UI) with the distributed tracing platform (Tempo), the Jaeger UI lists only services that have sent traces within the last 15 minutes. For services that have not sent traces within the last 15 minutes, those traces are still stored even though they are not visible in the Jaeger UI. ( TRACING-3139 ) Expanded support for the Tempo Operator is planned for future releases of the Red Hat OpenShift distributed tracing platform. Possible additional features might include support for TLS authentication, multitenancy, and multiple clusters. For more information about the Tempo Operator, see the Tempo community documentation . 1.15.3. Bug fixes This release addresses Common Vulnerabilities and Exposures (CVEs) and bug fixes. 1.16. Release notes for Red Hat OpenShift distributed tracing platform 2.7 1.16.1. Component versions in the Red Hat OpenShift distributed tracing platform 2.7 Operator Component Version Red Hat OpenShift distributed tracing platform (Jaeger) Jaeger 1.39 1.16.2. Bug fixes This release addresses Common Vulnerabilities and Exposures (CVEs) and bug fixes. 1.17. Release notes for Red Hat OpenShift distributed tracing platform 2.6 1.17.1. Component versions in the Red Hat OpenShift distributed tracing platform 2.6 Operator Component Version Red Hat OpenShift distributed tracing platform (Jaeger) Jaeger 1.38 1.17.2. Bug fixes This release addresses Common Vulnerabilities and Exposures (CVEs) and bug fixes. 1.18. Release notes for Red Hat OpenShift distributed tracing platform 2.5 1.18.1. Component versions in the Red Hat OpenShift distributed tracing platform 2.5 Operator Component Version Red Hat OpenShift distributed tracing platform (Jaeger) Jaeger 1.36 1.18.2. New features and enhancements This release introduces support for ingesting OpenTelemetry protocol (OTLP) to the Red Hat OpenShift distributed tracing platform (Jaeger) Operator. The Operator now automatically enables the OTLP ports: Port 4317 for the OTLP gRPC protocol. Port 4318 for the OTLP HTTP protocol. This release also adds support for collecting Kubernetes resource attributes to the Red Hat build of OpenTelemetry Operator. 1.18.3. Bug fixes This release addresses Common Vulnerabilities and Exposures (CVEs) and bug fixes. 1.19. Release notes for Red Hat OpenShift distributed tracing platform 2.4 1.19.1. 
Component versions in the Red Hat OpenShift distributed tracing platform 2.4 Operator Component Version Red Hat OpenShift distributed tracing platform (Jaeger) Jaeger 1.34.1 1.19.2. New features and enhancements This release adds support for auto-provisioning certificates using the OpenShift Elasticsearch Operator. Self-provisioning by using the Red Hat OpenShift distributed tracing platform (Jaeger) Operator to call the OpenShift Elasticsearch Operator during installation. + Important When upgrading to the Red Hat OpenShift distributed tracing platform 2.4, the Operator recreates the Elasticsearch instance, which might take five to ten minutes. Distributed tracing will be down and unavailable for that period. 1.19.3. Technology Preview features Creating the Elasticsearch instance and certificates first and then configuring the distributed tracing platform (Jaeger) to use the certificate is a Technology Preview for this release. 1.19.4. Bug fixes This release addresses Common Vulnerabilities and Exposures (CVEs) and bug fixes. 1.20. Release notes for Red Hat OpenShift distributed tracing platform 2.3 1.20.1. Component versions in the Red Hat OpenShift distributed tracing platform 2.3.1 Operator Component Version Red Hat OpenShift distributed tracing platform (Jaeger) Jaeger 1.30.2 1.20.2. Component versions in the Red Hat OpenShift distributed tracing platform 2.3.0 Operator Component Version Red Hat OpenShift distributed tracing platform (Jaeger) Jaeger 1.30.1 1.20.3. New features and enhancements With this release, the Red Hat OpenShift distributed tracing platform (Jaeger) Operator is now installed to the openshift-distributed-tracing namespace by default. Before this update, the default installation had been in the openshift-operators namespace. 1.20.4. Bug fixes This release addresses Common Vulnerabilities and Exposures (CVEs) and bug fixes. 1.21. Release notes for Red Hat OpenShift distributed tracing platform 2.2 1.21.1. Technology Preview features The unsupported OpenTelemetry Collector components included in the 2.1 release are removed. 1.21.2. Bug fixes This release of the Red Hat OpenShift distributed tracing platform addresses Common Vulnerabilities and Exposures (CVEs) and bug fixes. 1.22. Release notes for Red Hat OpenShift distributed tracing platform 2.1 1.22.1. Component versions in the Red Hat OpenShift distributed tracing platform 2.1 Operator Component Version Red Hat OpenShift distributed tracing platform (Jaeger) Jaeger 1.29.1 1.22.2. Technology Preview features This release introduces a breaking change to how to configure certificates in the OpenTelemetry custom resource file. With this update, the ca_file moves under tls in the custom resource, as shown in the following examples. CA file configuration for OpenTelemetry version 0.33 spec: mode: deployment config: | exporters: jaeger: endpoint: jaeger-production-collector-headless.tracing-system.svc:14250 ca_file: "/var/run/secrets/kubernetes.io/serviceaccount/service-ca.crt" CA file configuration for OpenTelemetry version 0.41.1 spec: mode: deployment config: | exporters: jaeger: endpoint: jaeger-production-collector-headless.tracing-system.svc:14250 tls: ca_file: "/var/run/secrets/kubernetes.io/serviceaccount/service-ca.crt" 1.22.3. Bug fixes This release addresses Common Vulnerabilities and Exposures (CVEs) and bug fixes. 1.23. Release notes for Red Hat OpenShift distributed tracing platform 2.0 1.23.1. 
Component versions in the Red Hat OpenShift distributed tracing platform 2.0 Operator Component Version Red Hat OpenShift distributed tracing platform (Jaeger) Jaeger 1.28.0 1.23.2. New features and enhancements This release introduces the following new features and enhancements: Rebrands Red Hat OpenShift Jaeger as the Red Hat OpenShift distributed tracing platform. Updates Red Hat OpenShift distributed tracing platform (Jaeger) Operator to Jaeger 1.28. Going forward, the Red Hat OpenShift distributed tracing platform will only support the stable Operator channel. Channels for individual releases are no longer supported. Adds support for OpenTelemetry protocol (OTLP) to the Query service. Introduces a new distributed tracing icon that appears in the OperatorHub. Includes rolling updates to the documentation to support the name change and new features. 1.23.3. Technology Preview features This release adds the Red Hat build of OpenTelemetry as a Technology Preview , which you install using the Red Hat build of OpenTelemetry Operator. Red Hat build of OpenTelemetry is based on the OpenTelemetry APIs and instrumentation. The Red Hat build of OpenTelemetry includes the OpenTelemetry Operator and Collector. You can use the Collector to receive traces in the OpenTelemetry or Jaeger protocol and send the trace data to the Red Hat OpenShift distributed tracing platform. Other capabilities of the Collector are not supported at this time. The OpenTelemetry Collector allows developers to instrument their code with vendor agnostic APIs, avoiding vendor lock-in and enabling a growing ecosystem of observability tooling. 1.23.4. Bug fixes This release addresses Common Vulnerabilities and Exposures (CVEs) and bug fixes. 1.24. Getting support If you experience difficulty with a procedure described in this documentation, or with OpenShift Container Platform in general, visit the Red Hat Customer Portal . From the Customer Portal, you can: Search or browse through the Red Hat Knowledgebase of articles and solutions relating to Red Hat products. Submit a support case to Red Hat Support. Access other product documentation. To identify issues with your cluster, you can use Insights in OpenShift Cluster Manager Hybrid Cloud Console . Insights provides details about issues and, if available, information on how to solve a problem. If you have a suggestion for improving this documentation or have found an error, submit a Jira issue for the most relevant documentation component. Please provide specific details, such as the section name and OpenShift Container Platform version. 1.25. Making open source more inclusive Red Hat is committed to replacing problematic language in our code, documentation, and web properties. We are beginning with these four terms: master, slave, blacklist, and whitelist. Because of the enormity of this endeavor, these changes will be implemented gradually over several upcoming releases. For more details, see our CTO Chris Wright's message . | [
"oc edit configmap tempo-operator-manager-config -n openshift-tempo-operator 1",
"data: controller_manager_config.yaml: | featureGates: httpEncryption: false grpcEncryption: false builtInCertManagement: enabled: false",
"oc rollout restart deployment.apps/tempo-operator-controller -n openshift-tempo-operator",
"kind: ImageSetConfiguration apiVersion: mirror.openshift.io/v1alpha2 archiveSize: 20 storageConfig: local: path: /home/user/images mirror: operators: - catalog: registry.redhat.io/redhat/redhat-operator-index:v4.13 packages: - name: tempo-product channels: - name: stable additionalImages: - name: registry.redhat.io/rhosdt/tempo-rhel8@sha256:e4295f837066efb05bcc5897f31eb2bdbd81684a8c59d6f9498dd3590c62c12a - name: registry.redhat.io/rhosdt/tempo-gateway-rhel8@sha256:b62f5cedfeb5907b638f14ca6aaeea50f41642980a8a6f87b7061e88d90fac23 - name: registry.redhat.io/rhosdt/tempo-gateway-opa-rhel8@sha256:8cd134deca47d6817b26566e272e6c3f75367653d589f5c90855c59b2fab01e9 - name: registry.redhat.io/rhosdt/tempo-query-rhel8@sha256:0da43034f440b8258a48a0697ba643b5643d48b615cdb882ac7f4f1f80aad08e",
"oc edit configmap tempo-operator-manager-config -n openshift-tempo-operator 1",
"data: controller_manager_config.yaml: | featureGates: httpEncryption: false grpcEncryption: false builtInCertManagement: enabled: false",
"oc rollout restart deployment.apps/tempo-operator-controller -n openshift-tempo-operator",
"kind: ImageSetConfiguration apiVersion: mirror.openshift.io/v1alpha2 archiveSize: 20 storageConfig: local: path: /home/user/images mirror: operators: - catalog: registry.redhat.io/redhat/redhat-operator-index:v4.13 packages: - name: tempo-product channels: - name: stable additionalImages: - name: registry.redhat.io/rhosdt/tempo-rhel8@sha256:e4295f837066efb05bcc5897f31eb2bdbd81684a8c59d6f9498dd3590c62c12a - name: registry.redhat.io/rhosdt/tempo-gateway-rhel8@sha256:b62f5cedfeb5907b638f14ca6aaeea50f41642980a8a6f87b7061e88d90fac23 - name: registry.redhat.io/rhosdt/tempo-gateway-opa-rhel8@sha256:8cd134deca47d6817b26566e272e6c3f75367653d589f5c90855c59b2fab01e9 - name: registry.redhat.io/rhosdt/tempo-query-rhel8@sha256:0da43034f440b8258a48a0697ba643b5643d48b615cdb882ac7f4f1f80aad08e",
"oc edit configmap tempo-operator-manager-config -n openshift-tempo-operator 1",
"data: controller_manager_config.yaml: | featureGates: httpEncryption: false grpcEncryption: false builtInCertManagement: enabled: false",
"oc rollout restart deployment.apps/tempo-operator-controller -n openshift-tempo-operator",
"kind: ImageSetConfiguration apiVersion: mirror.openshift.io/v1alpha2 archiveSize: 20 storageConfig: local: path: /home/user/images mirror: operators: - catalog: registry.redhat.io/redhat/redhat-operator-index:v4.13 packages: - name: tempo-product channels: - name: stable additionalImages: - name: registry.redhat.io/rhosdt/tempo-rhel8@sha256:e4295f837066efb05bcc5897f31eb2bdbd81684a8c59d6f9498dd3590c62c12a - name: registry.redhat.io/rhosdt/tempo-gateway-rhel8@sha256:b62f5cedfeb5907b638f14ca6aaeea50f41642980a8a6f87b7061e88d90fac23 - name: registry.redhat.io/rhosdt/tempo-gateway-opa-rhel8@sha256:8cd134deca47d6817b26566e272e6c3f75367653d589f5c90855c59b2fab01e9 - name: registry.redhat.io/rhosdt/tempo-query-rhel8@sha256:0da43034f440b8258a48a0697ba643b5643d48b615cdb882ac7f4f1f80aad08e",
"spec: mode: deployment config: | exporters: jaeger: endpoint: jaeger-production-collector-headless.tracing-system.svc:14250 ca_file: \"/var/run/secrets/kubernetes.io/serviceaccount/service-ca.crt\"",
"spec: mode: deployment config: | exporters: jaeger: endpoint: jaeger-production-collector-headless.tracing-system.svc:14250 tls: ca_file: \"/var/run/secrets/kubernetes.io/serviceaccount/service-ca.crt\""
]
| https://docs.redhat.com/en/documentation/openshift_container_platform/4.12/html/distributed_tracing/distr-tracing-rn |
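For the certificate-path change described in the 2.1 release notes above, the following is a rough way to spot OpenTelemetry Collector instances that still carry a top-level ca_file and therefore need it moved under tls. This is only a sketch, not part of the product documentation; it assumes the OpenTelemetryCollector custom resource definition is installed and that you are logged in with the oc CLI.

# Show every ca_file entry in deployed collector configs, with two lines of
# context so you can see whether it already sits under a tls: block.
oc get opentelemetrycollectors --all-namespaces -o yaml | grep -n -B 2 "ca_file:"

Any match that is not preceded by a tls: line still uses the pre-0.41.1 layout.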
11.5. Column Options | 11.5. Column Options You can use the following options when specifying columns in the creation of a table or view. Any other properties defined are treated as extension metadata. Property Data Type or Allowed Values Description UUID string A unique identifier for the column NAMEINSOURCE string If this is a column on a FOREIGN table, this value is the name of the column in the source database; if omitted, the column name is used when querying the source CASE_SENSITIVE 'TRUE'|'FALSE' SELECTABLE 'TRUE'|'FALSE' TRUE when this column is available for selection in user queries UPDATABLE 'TRUE'|'FALSE' Defines whether the column is updatable. Defaults to true if the view/table is updatable. SIGNED 'TRUE'|'FALSE' CURRENCY 'TRUE'|'FALSE' FIXED_LENGTH 'TRUE'|'FALSE' SEARCHABLE 'SEARCHABLE'|'UNSEARCHABLE'|'LIKE_ONLY'|'ALL_EXCEPT_LIKE' Column searchability, usually dictated by the data type MIN_VALUE MAX_VALUE CHAR_OCTET_LENGTH integer ANNOTATION string NATIVE_TYPE string RADIX integer NULL_VALUE_COUNT long Costing information: the number of NULL values in this column DISTINCT_VALUES long Costing information: the number of distinct values in this column | null | https://docs.redhat.com/en/documentation/red_hat_jboss_data_virtualization/6.4/html/development_guide_volume_3_reference_material/column_options
19.5. Clustering and High Availability | 19.5. Clustering and High Availability High Availability Add-On Administration The High Availability Add-On Administration guide provides information on how to configure and administer the High Availability Add-On in Red Hat Enterprise Linux 7. High Availability Add-On Overview The High Availability Add-On Overview document provides an overview of the High Availability Add-On for Red Hat Enterprise Linux 7. High Availability Add-On Reference High Availability Add-On Reference is a reference guide to the High Availability Add-On for Red Hat Enterprise Linux 7. Load Balancer Administration Load Balancer Administration is a guide to configuring and administering high-performance load balancing in Red Hat Enterprise Linux 7. DM Multipath The DM Multipath book guides users through configuring and administering the Device-Mapper Multipath feature for Red Hat Enterprise Linux 7. | null | https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/7.0_release_notes/sect-Red_Hat_Enterprise_Linux-7.0_Release_Notes-Documentation-Clustering_and_High_Availability |
Chapter 2. Architectures | Chapter 2. Architectures Red Hat Enterprise Linux 9.1 is distributed with the kernel version 5.14.0-162, which provides support for the following architectures at the minimum required version: AMD and Intel 64-bit architectures (x86-64-v2) The 64-bit ARM architecture (ARMv8.0-A) IBM Power Systems, Little Endian (POWER9) 64-bit IBM Z (z14) Make sure you purchase the appropriate subscription for each architecture. For more information, see Get Started with Red Hat Enterprise Linux - additional architectures . | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/9/html/9.1_release_notes/architectures |
Chapter 10. Detecting duplicate messages | Chapter 10. Detecting duplicate messages You can configure the broker to automatically detect and filter duplicate messages. This means that you do not have to implement your own duplicate detection logic. Without duplicate detection, in the event of an unexpected connection failure, a client cannot determine whether a message it sent to the broker was received. In this situation, the client might assume that the broker did not receive the message, and resend it. This results in a duplicate message. For example, suppose that a client sends a message to the broker. If the broker or connection fails before the message is received and processed by the broker, the message never arrives at its address. The client does not receive a response from the broker due to the failure. If the broker or connection fails after the message is received and processed by the broker, the message is routed correctly, but the client still does not receive a response. In addition, using a transaction to determine success does not necessarily help in these cases. If the broker or connection fails while the transaction commit is being processed, the client is still unable to determine whether it successfully sent the message. In these situations, to correct the assumed failure, the client resends the most recent message. The result might be a duplicate message that negatively impacts your system. For example, if you are using the broker in an order-fulfilment system, a duplicate message might mean that a purchase order is processed twice. The following procedures show how to configure duplicate message detection to protect against these types of situations. 10.1. Configuring the duplicate ID cache To enable the broker to detect duplicate messages, producers must provide unique values for the message property _AMQ_DUPL_ID when sending each message. The broker maintains caches of received values of the _AMQ_DUPL_ID property. When a broker receives a new message on an address, it checks the cache for that address to ensure that it has not previously processed a message with the same value for this property. Each address has its own cache. Each cache is circular and fixed in size. This means that new entries replace the oldest ones as cache space demands. The following procedure shows how to globally configure the ID cache used by each address on the broker. Procedure Open the <broker_instance_dir> /etc/broker.xml configuration file. Within the core element, add the id-cache-size and persist-id-cache properties and specify values. For example: <configuration> <core> ... <id-cache-size>5000</id-cache-size> <persist-id-cache>false</persist-id-cache> </core> </configuration> id-cache-size Maximum size of the ID cache, specified as the number of individual entries in the cache. The default value is 20,000 entries. In this example, the cache size is set to 5,000 entries. Note When the maximum size of the cache is reached, it is possible for the broker to start processing duplicate messages. For example, suppose that you set the size of the cache to 3000 . If a message arrived more than 3,000 messages before the arrival of a new message with the same value of _AMQ_DUPL_ID , the broker cannot detect the duplicate. This results in both messages being processed by the broker. persist-id-cache When the value of this property is set to true , the broker persists IDs to disk as they are received. The default value is true . 
In the example above, you disable persistence by setting the value to false . Additional resources To learn how to set the duplicate ID message property using the AMQ Core Protocol JMS client, see Using duplicate message detection in the AMQ Core Protocol JMS client documentation. 10.2. Configuring duplicate detection for cluster connections You can configure cluster connections to insert a duplicate ID header for each message that moves across the cluster. Prerequisites You should have already configured a broker cluster. For more information, see Section 14.2, "Creating a broker cluster" Procedure Open the <broker_instance_dir> /etc/broker.xml configuration file. Within the core element, for a given cluster connection, add the use-duplicate-detection property and specify a value. For example: <configuration> <core> ... <cluster-connections> <cluster-connection name="my-cluster"> <use-duplicate-detection>true</use-duplicate-detection> ... </cluster-connection> ... </cluster-connections> </core> </configuration> use-duplicate-detection When the value of this property is set to true , the cluster connection inserts a duplicate ID header for each message that it handles. | [
"<configuration> <core> <id-cache-size>5000</id-cache-size> <persist-id-cache>false</persist-id-cache> </core> </configuration>",
"<configuration> <core> <cluster-connections> <cluster-connection name=\"my-cluster\"> <use-duplicate-detection>true</use-duplicate-detection> </cluster-connection> </cluster-connections> </core> </configuration>"
]
| https://docs.redhat.com/en/documentation/red_hat_amq/2021.q3/html/configuring_amq_broker/assembly-br-detecting-duplicate-messages_configuring |
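As a small addition to the duplicate-detection section above, the following sketch checks what is actually configured in broker.xml. The broker instance path is an assumption (substitute your own <broker_instance_dir>), and xmllint comes from libxml2 rather than from AMQ Broker itself.

BROKER_XML=/var/opt/amq-broker/mybroker/etc/broker.xml   # assumed instance path
# Print the configured values; empty output means the element is absent and the
# broker falls back to its defaults (id-cache-size 20,000, persist-id-cache true).
xmllint --xpath "string(//*[local-name()='id-cache-size'])" "$BROKER_XML"; echo
xmllint --xpath "string(//*[local-name()='persist-id-cache'])" "$BROKER_XML"; echo
xmllint --xpath "string(//*[local-name()='use-duplicate-detection'])" "$BROKER_XML"; echo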
A.6. Logging Performance Data (using pmlogger) | A.6. Logging Performance Data (using pmlogger) PCP allows you to log performance metric values, which can be replayed at a later date, by creating archived logs of selected metrics on the system through the pmlogger tool. These metric archives can then be played back to give a retrospective performance analysis. The pmlogger tool provides flexibility and control over the logged metrics by allowing you to specify which metrics are recorded on the system and at what frequency. By default, the configuration file for pmlogger is stored at /var/lib/pcp/config/pmlogger/config.default ; the configuration file outlines which metrics are logged by the primary logging instance. In order for pmlogger to log metric values on the local machine, a primary logging instance must be started. You can use systemctl to ensure that pmlogger is started as a service when the machine starts. The following example shows an extract of a pmlogger configuration file which enables the recording of GFS2 performance metrics. This extract shows that pmlogger will log the performance metric values for the PCP GFS2 latency and glock metrics every 5 seconds, the top 10 worst glock metric every 10 seconds, the tracepoint data every 30 seconds, and the data from the glstats and sbstats metrics every 5 minutes. Note PCP comes with a default set of metrics which it will log on the host when pmlogger is enabled. However, no logging of GFS2 metrics occurs with this default configuration. After recording metric data, you have multiple options when it comes to replaying PCP log archives on the system. You can export the logs to text files and import them into spreadsheets, or you can replay them in the PCP-GUI application using graphs to visualize the retrospective data alongside live data of the system. One of the tools available in PCP for viewing the log files is pmdumptext . This tool allows the user to parse the selected PCP log archive and export the values into an ASCII table. pmdumptext can be used to dump the entire archive log or only selected metric values from the log by specifying individual metrics through the command line. For more information on using pmdumptext , see the pmdumptext (1) man page. | [
"It is safe to make additions from here on # log mandatory on every 5 seconds { gfs2.latency.grant gfs2.latency.queue gfs2.latency.demote gfs2.glocks } log mandatory on every 10 seconds { gfs2.worst_glock } log mandatory on every 30 seconds { gfs2.tracepoints } log mandatory on every 5 minutes { gfs2.glstats gfs2.sbstats } [access] disallow * : all; allow localhost : enquire;"
]
| https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/global_file_system_2/s1-loggingperformance |
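To complement the pmlogger section above, here is a short, hedged sketch of enabling the primary logger and replaying an archive with pmdumptext. The archive name is a placeholder, the default archive directory /var/log/pcp/pmlogger/<hostname> is an assumption about your setup, and the GFS2 metrics require the GFS2 PMDA to be installed.

# Start the primary logging instance now and at every boot.
sudo systemctl enable --now pmlogger

# Replay a recorded archive as an ASCII table, sampling every 30 seconds.
HOST=$(hostname)
pmdumptext -t 30sec -a /var/log/pcp/pmlogger/$HOST/20240101.00.10 \
    gfs2.worst_glock gfs2.glocks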
Part I. Basic System Configuration | Part I. Basic System Configuration This part covers basic system administration tasks such as keyboard configuration, date and time configuration, managing users and groups, and gaining privileges. | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/deployment_guide/part-basic_system_configuration |
Chapter 6. Managing unused rendered machine configs | Chapter 6. Managing unused rendered machine configs The Machine Config Operator (MCO) does not perform any garbage collection activities. This means that all rendered machine configs remain in the cluster. Each time a user or controller applies a new machine config, the MCO creates new rendered configs for each affected machine config pool. Over time, this can lead to a large number of rendered machine configs, which can make working with machine configs confusing. Having too many rendered machine configs can also contribute to disk space issues and performance issues with etcd. You can remove old, unused rendered machine configs by using the oc adm prune renderedmachineconfigs command with the --confirm flag. With this command, you can remove all unused rendered machine configs or only those in a specific machine config pool. You can also remove a specified number of unused rendered machine configs in order to keep some older machine configs, in case you want to check older configurations. You can use the oc adm prune renderedmachineconfigs command without the --confirm flag to see which rendered machine configs would be removed. Use the list subcommand to display all the rendered machine configs in the cluster or a specific machine config pool. Note The oc adm prune renderedmachineconfigs command deletes only rendered machine configs that are not in use. If a rendered machine config is in use by a machine config pool, it is not deleted. In this case, the command output specifies the reason that the rendered machine config was not deleted. 6.1. Viewing rendered machine configs You can view a list of rendered machine configs by using the oc adm prune renderedmachineconfigs command with the list subcommand. For example, the command in the following procedure would list all rendered machine configs for the worker machine config pool. Procedure Optional: List the rendered machine configs by using the following command: USD oc adm prune renderedmachineconfigs list --in-use=false --pool-name=worker where: list Displays a list of rendered machine configs in your cluster. --in-use Optional: Specifies whether to display only the used machine configs or all machine configs from the specified pool. If true , the output lists the rendered machine configs that are being used by a machine config pool. If false , the output lists all rendered machine configs in the cluster. The default value is false . --pool-name Optional: Specifies the machine config pool from which to display the machine configs. Example output
dry-run deleting rendered MachineConfig rendered-worker-f38bf61ced3c920cf5a29a200ed43243 dry-run deleting MachineConfig rendered-worker-fc94397dc7c43808c7014683c208956e Skip dry-run deleting rendered MachineConfig rendered-worker-708c652868f7597eaa1e2622edc366ef as it's currently in use 6.2. Removing unused rendered machine configs You can remove unused rendered machine configs by using the oc adm prune renderedmachineconfigs command with the --confirm command. If any rendered machine config is not deleted, the command output indicates which was not deleted and lists the reason for skipping the deletion. Procedure Optional: List the rendered machine configs that you can remove automatically by running the following command. Any rendered machine config marked with the as it's currently in use message in the command output cannot be removed. USD oc adm prune renderedmachineconfigs --pool-name=worker Example output Dry run enabled - no modifications will be made. Add --confirm to remove rendered machine configs. dry-run deleting rendered MachineConfig rendered-worker-f38bf61ced3c920cf5a29a200ed43243 dry-run deleting MachineConfig rendered-worker-fc94397dc7c43808c7014683c208956e Skip dry-run deleting rendered MachineConfig rendered-worker-708c652868f7597eaa1e2622edc366ef as it's currently in use where: pool-name Optional: Specifies the machine config pool where you want to delete the machine configs from. Remove the unused rendered machine configs by running the following command. The command in the following procedure would delete the two oldest unused rendered machine configs in the worker machine config pool. USD oc adm prune renderedmachineconfigs --pool-name=worker --count=2 --confirm where: --count Optional: Specifies the maximum number of unused rendered machine configs you want to delete, starting with the oldest. --confirm Indicates that pruning should occur, instead of performing a dry-run. --pool-name Optional: Specifies the machine config pool from which you want to delete the machine. If not specified, all the pools are evaluated. Example output deleting rendered MachineConfig rendered-worker-f38bf61ced3c920cf5a29a200ed43243 deleting rendered MachineConfig rendered-worker-fc94397dc7c43808c7014683c208956e Skip deleting rendered MachineConfig rendered-worker-708c652868f7597eaa1e2622edc366ef as it's currently in use | [
"oc adm prune renderedmachineconfigs list --in-use=false --pool-name=worker",
"worker rendered-worker-f38bf61ced3c920cf5a29a200ed43243 -- 2025-01-21 13:45:01 +0000 UTC (Currently in use: false) rendered-worker-fc94397dc7c43808c7014683c208956e-- 2025-01-30 17:20:53 +0000 UTC (Currently in use: false) rendered-worker-708c652868f7597eaa1e2622edc366ef -- 2025-01-31 18:01:16 +0000 UTC (Currently in use: true)",
"oc adm prune renderedmachineconfigs --pool-name=worker",
"Dry run enabled - no modifications will be made. Add --confirm to remove rendered machine configs. dry-run deleting rendered MachineConfig rendered-worker-f38bf61ced3c920cf5a29a200ed43243 dry-run deleting MachineConfig rendered-worker-fc94397dc7c43808c7014683c208956e Skip dry-run deleting rendered MachineConfig rendered-worker-708c652868f7597eaa1e2622edc366ef as it's currently in use",
"oc adm prune renderedmachineconfigs --pool-name=worker",
"Dry run enabled - no modifications will be made. Add --confirm to remove rendered machine configs. dry-run deleting rendered MachineConfig rendered-worker-f38bf61ced3c920cf5a29a200ed43243 dry-run deleting MachineConfig rendered-worker-fc94397dc7c43808c7014683c208956e Skip dry-run deleting rendered MachineConfig rendered-worker-708c652868f7597eaa1e2622edc366ef as it's currently in use",
"oc adm prune renderedmachineconfigs --pool-name=worker --count=2 --confirm",
"deleting rendered MachineConfig rendered-worker-f38bf61ced3c920cf5a29a200ed43243 deleting rendered MachineConfig rendered-worker-fc94397dc7c43808c7014683c208956e Skip deleting rendered MachineConfig rendered-worker-708c652868f7597eaa1e2622edc366ef as it's currently in use"
]
| https://docs.redhat.com/en/documentation/openshift_container_platform/4.16/html/machine_configuration/machine-configs-garbage-collection |
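The following short sketch strings together the commands documented above: it counts the rendered machine configs in the cluster, previews the prune, and then removes up to five of the oldest unused ones across all pools. The count of five is only an example value, not a recommendation.

# How many rendered machine configs exist right now?
oc get machineconfig -o name | grep -c '/rendered-'

# Preview first (dry-run is the default without --confirm), then prune for real.
oc adm prune renderedmachineconfigs
oc adm prune renderedmachineconfigs --count=5 --confirm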
Backup and restore | Backup and restore Red Hat build of MicroShift 4.18 Backup and restore the Red Hat build of MicroShift database Red Hat OpenShift Documentation Team | [
"sudo systemctl stop microshift",
"sudo crictl ps -a",
"sudo systemctl stop kubepods.slice",
"sudo microshift backup /var/lib/microshift-backups/<my_manual_backup>",
"??? I1017 07:38:16.770506 5900 data_manager.go:92] \"Copying data to backup directory\" storage=\"/var/lib/microshift-backups\" name=\"test\" data=\"/var/lib/microshift\" ??? I1017 07:38:16.770713 5900 data_manager.go:227] \"Starting copy\" cmd=\"/bin/cp --verbose --recursive --preserve --reflink=auto /var/lib/microshift /var/lib/microshift-backups/test\" ??? I1017 07:38:16.776162 5900 data_manager.go:241] \"Finished copy\" cmd=\"/bin/cp --verbose --recursive --preserve --reflink=auto /var/lib/microshift /var/lib/microshift-backups/test\" ??? I1017 07:38:16.776256 5900 data_manager.go:125] \"Copied data to backup directory\" backup=\"/var/lib/microshift-backups/test\" data=\"/var/lib/microshift\"",
"sudo microshift backup /mnt/<other_backups_location>/<another_manual_backup>",
"sudo microshift restore /var/lib/microshift-backups/<my_manual_backup>",
"??? I1017 07:39:52.055165 6007 data_manager.go:131] \"Copying backup to data directory\" storage=\"/var/lib/microshift-backups\" name=\"test\" data=\"/var/lib/microshift\" ??? I1017 07:39:52.055243 6007 data_manager.go:154] \"Renaming existing data dir\" data=\"/var/lib/microshift\" renamedTo=\"/var/lib/microshift.saved\" ??? I1017 07:39:52.055326 6007 data_manager.go:227] \"Starting copy\" cmd=\"/bin/cp --verbose --recursive --preserve --reflink=auto /var/lib/microshift-backups/test /var/lib/microshift\" ??? I1017 07:39:52.061363 6007 data_manager.go:241] \"Finished copy\" cmd=\"/bin/cp --verbose --recursive --preserve --reflink=auto /var/lib/microshift-backups/test /var/lib/microshift\" ??? I1017 07:39:52.061404 6007 data_manager.go:175] \"Removing temporary data directory\" path=\"/var/lib/microshift.saved\" ??? I1017 07:39:52.063745 6007 data_manager.go:180] \"Copied backup to data directory\" name=\"test\" data=\"/var/lib/microshift\"",
"sudo microshift restore /<mnt>/<other_backups_location>/<another_manual_backup>",
"sudo microshift backup --auto-recovery <path_of_directory> 1",
"??? I1104 09:18:52.100725 8906 system.go:58] \"OSTree deployments\" deployments=[{\"id\":\"default-b3442053c9ce69310cd54140d8d592234c5306e4c5132de6efe615f79c84300a.1\",\"booted\":true,\"staged\":false,\"pinned\":false},{\"id\":\"default-a129624b9233fa54fe3574f1aa211bc2d85e1052b52245fe7d83f10c2f6d28e3.0\",\"booted\":false,\"staged\":false,\"pinned\":false}] ??? I1104 09:18:52.100895 8906 data_manager.go:83] \"Copying data to backup directory\" storage=\"/var/lib/microshift-auto-recovery\" name=\"20241104091852_default-b3442053c9ce69310cd54140d8d592234c5306e4c5132de6efe615f79c84300a.1\" data=\"/var/lib/microshift\" ??? I1104 09:18:52.102296 8906 disk_space.go:33] Calculated size of \"/var/lib/microshift\": 261M - increasing by 10% for safety: 287M ??? I1104 09:18:52.102321 8906 disk_space.go:44] Calculated available disk space for \"/var/lib/microshift-auto-recovery\": 1658M ??? I1104 09:18:52.105700 8906 atomic_dir_copy.go:66] \"Made an intermediate copy\" cmd=\"/bin/cp --verbose --recursive --preserve --reflink=auto /var/lib/microshift /var/lib/microshift-auto-recovery/20241104091852_default-b3442053c9ce69310cd54140d8d592234c5306e4c5132de6efe615f79c84300a.1.tmp.99142\" ??? I1104 09:18:52.105732 8906 atomic_dir_copy.go:115] \"Renamed to final destination\" src=\"/var/lib/microshift-auto-recovery/20241104091852_default-b3442053c9ce69310cd54140d8d592234c5306e4c5132de6efe615f79c84300a.1.tmp.99142\" dest=\"/var/lib/microshift-auto-recovery/20241104091852_default-b3442053c9ce69310cd54140d8d592234c5306e4c5132de6efe615f79c84300a.1\" ??? I1104 09:18:52.105749 8906 data_manager.go:120] \"Copied data to backup directory\" backup=\"/var/lib/microshift-auto-recovery/20241104091852_default-b3442053c9ce69310cd54140d8d592234c5306e4c5132de6efe615f79c84300a.1\" data=\"/var/lib/microshift\" /var/lib/microshift-auto-recovery/20241104091852_default-b3442053c9ce69310cd54140d8d592234c5306e4c5132de6efe615f79c84300a.1",
"sudo ls -la <path_of_directory> 1",
"sudo microshift restore --auto-recovery <path_of_directory> 1",
"??? I1104 09:19:28.617225 8950 state.go:80] \"Read state from the disk\" state={\"LastBackup\":\"20241022101528_default-a129624b9233fa54fe3574f1aa211bc2d85e1052b52245fe7d83f10c2f6d28e3.0\"} ??? I1104 09:19:28.617323 8950 storage.go:78] \"Auto-recovery backup storage read and parsed\" dirs=[\"20241022101255_default-a129624b9233fa54fe3574f1aa211bc2d85e1052b52245fe7d83f10c2f6d28e3.0\",\"20241022101520_default-a129624b9233fa54fe3574f1aa211bc2d85e1052b52245fe7d83f10c2f6d28e3.0\",\"20241022101528_default-a129624b9233fa54fe3574f1aa211bc2d85e1052b52245fe7d83f10c2f6d28e3.0\",\"20241104091852_default-b3442053c9ce69310cd54140d8d592234c5306e4c5132de6efe615f79c84300a.1\",\"restored\"] backups=[{\"CreationTime\":\"2024-10-22T10:12:55Z\",\"Version\":\"default-a129624b9233fa54fe3574f1aa211bc2d85e1052b52245fe7d83f10c2f6d28e3.0\"},{\"CreationTime\":\"2024-10-22T10:15:20Z\",\"Version\":\"default-a129624b9233fa54fe3574f1aa211bc2d85e1052b52245fe7d83f10c2f6d28e3.0\"},{\"CreationTime\":\"2024-10-22T10:15:28Z\",\"Version\":\"default-a129624b9233fa54fe3574f1aa211bc2d85e1052b52245fe7d83f10c2f6d28e3.0\"},{\"CreationTime\":\"2024-11-04T09:18:52Z\",\"Version\":\"default-b3442053c9ce69310cd54140d8d592234c5306e4c5132de6efe615f79c84300a.1\"}] ??? I1104 09:19:28.617350 8950 storage.go:40] \"Filtered list of backups - removed previously restored backup\" removed=\"20241022101528_default-a129624b9233fa54fe3574f1aa211bc2d85e1052b52245fe7d83f10c2f6d28e3.0\" newList=[{\"CreationTime\":\"2024-10-22T10:12:55Z\",\"Version\":\"default-a129624b9233fa54fe3574f1aa211bc2d85e1052b52245fe7d83f10c2f6d28e3.0\"},{\"CreationTime\":\"2024-10-22T10:15:20Z\",\"Version\":\"default-a129624b9233fa54fe3574f1aa211bc2d85e1052b52245fe7d83f10c2f6d28e3.0\"},{\"CreationTime\":\"2024-11-04T09:18:52Z\",\"Version\":\"default-b3442053c9ce69310cd54140d8d592234c5306e4c5132de6efe615f79c84300a.1\"}] ??? I1104 09:19:28.633237 8950 system.go:58] \"OSTree deployments\" deployments=[{\"id\":\"default-b3442053c9ce69310cd54140d8d592234c5306e4c5132de6efe615f79c84300a.1\",\"booted\":true,\"staged\":false,\"pinned\":false},{\"id\":\"default-a129624b9233fa54fe3574f1aa211bc2d85e1052b52245fe7d83f10c2f6d28e3.0\",\"booted\":false,\"staged\":false,\"pinned\":false}] ??? I1104 09:19:28.633258 8950 storage.go:49] \"Filtered list of backups by version\" version=\"default-b3442053c9ce69310cd54140d8d592234c5306e4c5132de6efe615f79c84300a.1\" newList=[{\"CreationTime\":\"2024-11-04T09:18:52Z\",\"Version\":\"default-b3442053c9ce69310cd54140d8d592234c5306e4c5132de6efe615f79c84300a.1\"}] ??? I1104 09:19:28.633268 8950 restore.go:170] \"Potential backups\" bz=[{\"CreationTime\":\"2024-11-04T09:18:52Z\",\"Version\":\"default-b3442053c9ce69310cd54140d8d592234c5306e4c5132de6efe615f79c84300a.1\"}] ??? I1104 09:19:28.633277 8950 restore.go:173] \"Candidate backup for restore\" b={\"CreationTime\":\"2024-11-04T09:18:52Z\",\"Version\":\"default-b3442053c9ce69310cd54140d8d592234c5306e4c5132de6efe615f79c84300a.1\"} ??? I1104 09:19:28.634007 8950 disk_space.go:33] Calculated size of \"/var/lib/microshift-auto-recovery/20241104091852_default-b3442053c9ce69310cd54140d8d592234c5306e4c5132de6efe615f79c84300a.1\": 261M - increasing by 10% for safety: 287M ??? I1104 09:19:28.634096 8950 disk_space.go:44] Calculated available disk space for \"/var/lib\": 1658M ??? I1104 09:19:28.634507 8950 disk_space.go:33] Calculated size of \"/var/lib/microshift\": 261M - increasing by 10% for safety: 287M ??? 
I1104 09:19:28.634522 8950 disk_space.go:44] Calculated available disk space for \"/var/lib/microshift-auto-recovery\": 1658M ??? I1104 09:19:28.649719 8950 system.go:58] \"OSTree deployments\" deployments=[{\"id\":\"default-b3442053c9ce69310cd54140d8d592234c5306e4c5132de6efe615f79c84300a.1\",\"booted\":true,\"staged\":false,\"pinned\":false},{\"id\":\"default-a129624b9233fa54fe3574f1aa211bc2d85e1052b52245fe7d83f10c2f6d28e3.0\",\"booted\":false,\"staged\":false,\"pinned\":false}] ??? I1104 09:19:28.653880 8950 atomic_dir_copy.go:66] \"Made an intermediate copy\" cmd=\"/bin/cp --verbose --recursive --preserve --reflink=auto /var/lib/microshift /var/lib/microshift-auto-recovery/failed/20241104091928_default-b3442053c9ce69310cd54140d8d592234c5306e4c5132de6efe615f79c84300a.1.tmp.22742\" ??? I1104 09:19:28.657362 8950 atomic_dir_copy.go:66] \"Made an intermediate copy\" cmd=\"/bin/cp --verbose --recursive --preserve --reflink=auto /var/lib/microshift-auto-recovery/20241104091852_default-b3442053c9ce69310cd54140d8d592234c5306e4c5132de6efe615f79c84300a.1 /var/lib/microshift.tmp.482\" ??? I1104 09:19:28.657385 8950 state.go:40] \"Saving intermediate state\" state=\"{\\\"LastBackup\\\":\\\"20241104091852_default-b3442053c9ce69310cd54140d8d592234c5306e4c5132de6efe615f79c84300a.1\\\"}\" path=\"/var/lib/microshift-auto-recovery/state.json.tmp.41544\" ??? I1104 09:19:28.662438 8950 atomic_dir_copy.go:115] \"Renamed to final destination\" src=\"/var/lib/microshift.tmp.482\" dest=\"/var/lib/microshift\" ??? I1104 09:19:28.662451 8950 state.go:46] \"Moving state file to final path\" intermediatePath=\"/var/lib/microshift-auto-recovery/state.json.tmp.41544\" finalPath=\"/var/lib/microshift-auto-recovery/state.json\" ??? I1104 09:19:28.662521 8950 atomic_dir_copy.go:115] \"Renamed to final destination\" src=\"/var/lib/microshift-auto-recovery/failed/20241104091928_default-b3442053c9ce69310cd54140d8d592234c5306e4c5132de6efe615f79c84300a.1.tmp.22742\" dest=\"/var/lib/microshift-auto-recovery/failed/20241104091928_default-b3442053c9ce69310cd54140d8d592234c5306e4c5132de6efe615f79c84300a.1\" ??? I1104 09:19:28.662969 8950 atomic_dir_copy.go:115] \"Renamed to final destination\" src=\"/var/lib/microshift-auto-recovery/20241022101528_default-a129624b9233fa54fe3574f1aa211bc2d85e1052b52245fe7d83f10c2f6d28e3.0\" dest=\"/var/lib/microshift-auto-recovery/restored/20241022101528_default-a129624b9233fa54fe3574f1aa211bc2d85e1052b52245fe7d83f10c2f6d28e3.0\" ??? I1104 09:19:28.662983 8950 restore.go:141] \"Auto-recovery restore completed\".",
"sudo systemctl restart microshift",
"oc get pods -A",
"NAMESPACE NAME READY STATUS RESTARTS AGE default i-06166fbb376f14a8bus-west-2computeinternal-debug-qtwcr 1/1 Running 0 46m kube-system csi-snapshot-controller-5c6586d546-lprv4 1/1 Running 0 51m openshift-dns dns-default-45jl7 2/2 Running 0 50m openshift-dns node-resolver-7wmzf 1/1 Running 0 51m openshift-ingress router-default-78b86fbf9d-qvj9s 1/1 Running 0 51m openshift-ovn-kubernetes ovnkube-master-5rfhh 4/4 Running 0 51m openshift-ovn-kubernetes ovnkube-node-gcnt6 1/1 Running 0 51m openshift-service-ca service-ca-bf5b7c9f8-pn6rk 1/1 Running 0 51m openshift-storage topolvm-controller-549f7fbdd5-7vrmv 5/5 Running 0 51m openshift-storage topolvm-node-rht2m 3/3 Running 0 50m",
"sudo mkdir -p /usr/lib/systemd/system/microshift.service.d",
"sudo tee /usr/lib/systemd/system/microshift.service.d/10-auto-recovery.conf > /dev/null <<'EOF' [Unit] OnFailure=microshift-auto-recovery.service StartLimitIntervalSec=25s 1 EOF",
"sudo tee /usr/lib/systemd/system/microshift-auto-recovery.service > /dev/null <<'EOF' [Unit] Description=MicroShift auto-recovery [Service] Type=oneshot ExecStart=/usr/bin/microshift-auto-recovery [Install] WantedBy=multi-user.target EOF",
"sudo tee /usr/bin/microshift-auto-recovery > /dev/null <<'EOF' #!/usr/bin/env bash set -xeuo pipefail If greenboot uses a non-default file for clearing boot_counter, use boot_success instead. if grep -q \"/boot/grubenv\" /usr/libexec/greenboot/greenboot-grub2-set-success; then if grub2-editenv - list | grep -q ^boot_success=0; then echo \"Greenboot didn't decide the system is healthy after staging new deployment.\" echo \"Quitting to not interfere with the process\" exit 0 fi else if grub2-editenv - list | grep -q ^boot_counter=; then echo \"Greenboot didn't decide the system is healthy after staging a new deployment.\" echo \"Quitting to not interfere with the process\" exit 0 fi fi /usr/bin/microshift restore --auto-recovery /var/lib/microshift-auto-recovery /usr/bin/systemctl reset-failed microshift /usr/bin/systemctl start microshift echo \"DONE\" EOF",
"sudo chmod +x /usr/bin/microshift-auto-recovery",
"sudo systemctl daemon-reload",
"[[customizations.directories]] path = \"/etc/systemd/system/microshift.service.d\" [[customizations.directories]] path = \"/etc/bin\" [[customizations.files]] path = \"/etc/systemd/system/microshift.service.d/10-auto-recovery.conf\" data = \"\"\" [Unit] OnFailure=microshift-auto-recovery.service \"\"\" [[customizations.files]] path = \"/etc/systemd/system/microshift-auto-recovery.service\" data = \"\"\" [Unit] Description=MicroShift auto-recovery [Service] Type=oneshot ExecStart=/etc/bin/microshift-auto-recovery [Install] WantedBy=multi-user.target \"\"\" [[customizations.files]] path = \"/etc/bin/microshift-auto-recovery\" mode = \"0755\" data = \"\"\" #!/usr/bin/env bash set -xeuo pipefail If greenboot uses a non-default file for clearing boot_counter, use boot_success instead. if grep -q \"/boot/grubenv\" /usr/libexec/greenboot/greenboot-grub2-set-success; then if grub2-editenv - list | grep -q ^boot_success=0; then echo \"Greenboot didn't decide the system is healthy after staging a new deployment.\" echo \"Quitting to not interfere with the process\" exit 0 fi else if grub2-editenv - list | grep -q ^boot_counter=; then echo \"Greenboot didn't decide the system is healthy after staging a new deployment.\" echo \"Quitting to not interfere with the process\" exit 0 fi fi /usr/bin/microshift restore --auto-recovery /var/lib/microshift-auto-recovery /usr/bin/systemctl reset-failed microshift /usr/bin/systemctl start microshift echo \"DONE\" \"\"\"",
"RUN mkdir -p /usr/lib/systemd/system/microshift.service.d COPY ./auto-rec/10-auto-recovery.conf /usr/lib/systemd/system/microshift.service.d/10-auto-recovery.conf COPY ./auto-rec/microshift-auto-recovery.service /usr/lib/systemd/system/ COPY ./auto-rec/microshift-auto-recovery /usr/bin/ RUN chmod +x /usr/bin/microshift-auto-recovery",
"PULL_SECRET=~/.pull-secret.json USER_PASSWD=<your_redhat_user_password> IMAGE_NAME=microshift-4.18-bootc sudo podman build --authfile \"USD{PULL_SECRET}\" -t \"USD{IMAGE_NAME}\" --build-arg USER_PASSWD=\"USD{USER_PASSWD}\" -f Containerfile",
"sudo podman images \"USD{IMAGE_NAME}\"",
"REPOSITORY TAG IMAGE ID CREATED SIZE localhost/microshift-4.18-bootc latest 193425283c00 2 minutes ago 2.31 GB"
]
| https://docs.redhat.com/en/documentation/red_hat_build_of_microshift/4.18/html-single/backup_and_restore/index |
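As a compact recap of the manual backup flow shown in the commands above: stop MicroShift, optionally confirm that workloads are down, take the backup, and restart. The backup name is only an example; any location with enough free space works.

sudo systemctl stop microshift
sudo crictl ps -a        # if workloads are still running, also stop kubepods.slice
sudo microshift backup /var/lib/microshift-backups/manual-$(date +%Y%m%d)
sudo systemctl start microshift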
Chapter 19. PersistentClaimStorageOverride schema reference | Chapter 19. PersistentClaimStorageOverride schema reference Used in: PersistentClaimStorage Property Description class The storage class to use for dynamic volume allocation for this broker. string broker Id of the kafka broker (broker identifier). integer | null | https://docs.redhat.com/en/documentation/red_hat_streams_for_apache_kafka/2.5/html/amq_streams_api_reference/type-PersistentClaimStorageOverride-reference |
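Because the schema reference above does not show the two properties in context, the following is a hypothetical fragment (not taken from the source) of where they usually appear in a Kafka custom resource: as entries of the overrides list under a persistent-claim storage definition. The storage class names are placeholders.

cat <<'EOF' > kafka-storage-overrides-snippet.yaml
# assumption: spec.kafka.storage of a Kafka resource using persistent-claim storage
storage:
  type: persistent-claim
  size: 100Gi
  overrides:
    - broker: 0
      class: fast-ssd      # placeholder storage class
    - broker: 1
      class: standard      # placeholder storage class
EOF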
Chapter 5. Uninstalling OpenShift Data Foundation | Chapter 5. Uninstalling OpenShift Data Foundation 5.1. Uninstalling OpenShift Data Foundation in Internal mode To uninstall OpenShift Data Foundation in Internal mode, refer to the knowledgebase article on Uninstalling OpenShift Data Foundation . | null | https://docs.redhat.com/en/documentation/red_hat_openshift_data_foundation/4.17/html/deploying_openshift_data_foundation_using_google_cloud/uninstalling_openshift_data_foundation |
Chapter 14. Pruning objects to reclaim resources | Chapter 14. Pruning objects to reclaim resources Over time, API objects created in OpenShift Container Platform can accumulate in the cluster's etcd data store through normal user operations, such as when building and deploying applications. Cluster administrators can periodically prune older versions of objects from the cluster that are no longer required. For example, by pruning images you can delete older images and layers that are no longer in use, but are still taking up disk space. 14.1. Basic pruning operations The CLI groups prune operations under a common parent command: USD oc adm prune <object_type> <options> This specifies: The <object_type> to perform the action on, such as groups , builds , deployments , or images . The <options> supported to prune that object type. 14.2. Pruning groups To prune groups records from an external provider, administrators can run the following command: USD oc adm prune groups \ --sync-config=path/to/sync/config [<options>] Table 14.1. oc adm prune groups flags Options Description --confirm Indicate that pruning should occur, instead of performing a dry-run. --blacklist Path to the group blacklist file. --whitelist Path to the group whitelist file. --sync-config Path to the synchronization configuration file. Procedure To see the groups that the prune command deletes, run the following command: USD oc adm prune groups --sync-config=ldap-sync-config.yaml To perform the prune operation, add the --confirm flag: USD oc adm prune groups --sync-config=ldap-sync-config.yaml --confirm 14.3. Pruning deployment resources You can prune resources associated with deployments that are no longer required by the system, due to age and status. The following command prunes replication controllers associated with DeploymentConfig objects: USD oc adm prune deployments [<options>] Note To also prune replica sets associated with Deployment objects, use the --replica-sets flag. This flag is currently a Technology Preview feature. Table 14.2. oc adm prune deployments flags Option Description --confirm Indicate that pruning should occur, instead of performing a dry-run. --keep-complete=<N> Per the DeploymentConfig object, keep the last N replication controllers that have a status of Complete and replica count of zero. The default is 5 . --keep-failed=<N> Per the DeploymentConfig object, keep the last N replication controllers that have a status of Failed and replica count of zero. The default is 1 . --keep-younger-than=<duration> Do not prune any replication controller that is younger than <duration> relative to the current time. Valid units of measurement include nanoseconds ( ns ), microseconds ( us ), milliseconds ( ms ), seconds ( s ), minutes ( m ), and hours ( h ). The default is 60m . --orphans Prune all replication controllers that no longer have a DeploymentConfig object, has status of Complete or Failed , and has a replica count of zero. --replica-sets=true|false If true , replica sets are included in the pruning process. The default is false . Important This flag is a Technology Preview feature. Procedure To see what a pruning operation would delete, run the following command: USD oc adm prune deployments --orphans --keep-complete=5 --keep-failed=1 \ --keep-younger-than=60m To actually perform the prune operation, add the --confirm flag: USD oc adm prune deployments --orphans --keep-complete=5 --keep-failed=1 \ --keep-younger-than=60m --confirm 14.4. 
Pruning builds To prune builds that are no longer required by the system due to age and status, administrators can run the following command: USD oc adm prune builds [<options>] Table 14.3. oc adm prune builds flags Option Description --confirm Indicate that pruning should occur, instead of performing a dry-run. --orphans Prune all builds whose build configuration no longer exists, status is complete, failed, error, or canceled. --keep-complete=<N> Per build configuration, keep the last N builds whose status is complete. The default is 5 . --keep-failed=<N> Per build configuration, keep the last N builds whose status is failed, error, or canceled. The default is 1 . --keep-younger-than=<duration> Do not prune any object that is younger than <duration> relative to the current time. The default is 60m . Procedure To see what a pruning operation would delete, run the following command: USD oc adm prune builds --orphans --keep-complete=5 --keep-failed=1 \ --keep-younger-than=60m To actually perform the prune operation, add the --confirm flag: USD oc adm prune builds --orphans --keep-complete=5 --keep-failed=1 \ --keep-younger-than=60m --confirm Note Developers can enable automatic build pruning by modifying their build configuration. Additional resources Performing advanced builds Pruning builds 14.5. Automatically pruning images Images from the OpenShift image registry that are no longer required by the system due to age, status, or exceed limits are automatically pruned. Cluster administrators can configure the Pruning Custom Resource, or suspend it. Prerequisites You have access to an OpenShift Container Platform cluster using an account with cluster administrator permissions. Install the oc CLI. Procedure Verify that the object named imagepruners.imageregistry.operator.openshift.io/cluster contains the following spec and status fields: spec: schedule: 0 0 * * * 1 suspend: false 2 keepTagRevisions: 3 3 keepYoungerThanDuration: 60m 4 keepYoungerThan: 3600000000000 5 resources: {} 6 affinity: {} 7 nodeSelector: {} 8 tolerations: [] 9 successfulJobsHistoryLimit: 3 10 failedJobsHistoryLimit: 3 11 status: observedGeneration: 2 12 conditions: 13 - type: Available status: "True" lastTransitionTime: 2019-10-09T03:13:45 reason: Ready message: "Periodic image pruner has been created." - type: Scheduled status: "True" lastTransitionTime: 2019-10-09T03:13:45 reason: Scheduled message: "Image pruner job has been scheduled." - type: Failed staus: "False" lastTransitionTime: 2019-10-09T03:13:45 reason: Succeeded message: "Most recent image pruning job succeeded." 1 schedule : CronJob formatted schedule. This is an optional field, default is daily at midnight. 2 suspend : If set to true , the CronJob running pruning is suspended. This is an optional field, default is false . The initial value on new clusters is false . 3 keepTagRevisions : The number of revisions per tag to keep. This is an optional field, default is 3 . The initial value is 3 . 4 keepYoungerThanDuration : Retain images younger than this duration. This is an optional field. If a value is not specified, either keepYoungerThan or the default value 60m (60 minutes) is used. 5 keepYoungerThan : Deprecated. The same as keepYoungerThanDuration , but the duration is specified as an integer in nanoseconds. This is an optional field. When keepYoungerThanDuration is set, this field is ignored. 6 resources : Standard pod resource requests and limits. This is an optional field. 7 affinity : Standard pod affinity. This is an optional field. 
8 nodeSelector : Standard pod node selector. This is an optional field. 9 tolerations : Standard pod tolerations. This is an optional field. 10 successfulJobsHistoryLimit : The maximum number of successful jobs to retain. Must be >= 1 to ensure metrics are reported. This is an optional field, default is 3 . The initial value is 3 . 11 failedJobsHistoryLimit : The maximum number of failed jobs to retain. Must be >= 1 to ensure metrics are reported. This is an optional field, default is 3 . The initial value is 3 . 12 observedGeneration : The generation observed by the Operator. 13 conditions : The standard condition objects with the following types: Available : Indicates if the pruning job has been created. Reasons can be Ready or Error. Scheduled : Indicates if the pruning job has been scheduled. Reasons can be Scheduled, Suspended, or Error. Failed : Indicates if the most recent pruning job failed. Important The Image Registry Operator's behavior for managing the pruner is orthogonal to the managementState specified on the Image Registry Operator's ClusterOperator object. If the Image Registry Operator is not in the Managed state, the image pruner can still be configured and managed by the Pruning Custom Resource. However, the managementState of the Image Registry Operator alters the behavior of the deployed image pruner job: Managed : the --prune-registry flag for the image pruner is set to true . Removed : the --prune-registry flag for the image pruner is set to false , meaning it only prunes image metadata in etcd. 14.6. Manually pruning images The pruning custom resource enables automatic image pruning for the images from the OpenShift image registry. However, administrators can manually prune images that are no longer required by the system due to age, status, or exceed limits. There are two methods to manually prune images: Running image pruning as a Job or CronJob on the cluster. Running the oc adm prune images command. Prerequisites To prune images, you must first log in to the CLI as a user with an access token. The user must also have the system:image-pruner cluster role or greater (for example, cluster-admin ). Expose the image registry. 
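A brief sketch of the prerequisites listed above for manual pruning. The user name pruner-admin is a placeholder for whichever account will run the prune.

# Grant the required role, then confirm you are logged in with an access token.
oc adm policy add-cluster-role-to-user system:image-pruner pruner-admin
oc whoami
oc whoami -t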
Procedure To manually prune images that are no longer required by the system due to age, status, or exceed limits, use one of the following methods: Run image pruning as a Job or CronJob on the cluster by creating a YAML file for the pruner service account, for example: USD oc create -f <filename>.yaml Example output kind: List apiVersion: v1 items: - apiVersion: v1 kind: ServiceAccount metadata: name: pruner namespace: openshift-image-registry - apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRoleBinding metadata: name: openshift-image-registry-pruner roleRef: apiGroup: rbac.authorization.k8s.io kind: ClusterRole name: system:image-pruner subjects: - kind: ServiceAccount name: pruner namespace: openshift-image-registry - apiVersion: batch/v1 kind: CronJob metadata: name: image-pruner namespace: openshift-image-registry spec: schedule: "0 0 * * *" concurrencyPolicy: Forbid successfulJobsHistoryLimit: 1 failedJobsHistoryLimit: 3 jobTemplate: spec: template: spec: restartPolicy: OnFailure containers: - image: "quay.io/openshift/origin-cli:4.1" resources: requests: cpu: 1 memory: 1Gi terminationMessagePolicy: FallbackToLogsOnError command: - oc args: - adm - prune - images - --certificate-authority=/var/run/secrets/kubernetes.io/serviceaccount/service-ca.crt - --keep-tag-revisions=5 - --keep-younger-than=96h - --confirm=true name: image-pruner serviceAccountName: pruner Run the oc adm prune images [<options>] command: USD oc adm prune images [<options>] Pruning images removes data from the integrated registry unless --prune-registry=false is used. Pruning images with the --namespace flag does not remove images, only image streams. Images are non-namespaced resources. Therefore, limiting pruning to a particular namespace makes it impossible to calculate its current usage. By default, the integrated registry caches metadata of blobs to reduce the number of requests to storage, and to increase the request-processing speed. Pruning does not update the integrated registry cache. Images that still contain pruned layers after pruning will be broken because the pruned layers that have metadata in the cache will not be pushed. Therefore, you must redeploy the registry to clear the cache after pruning: USD oc rollout restart deployment/image-registry -n openshift-image-registry If the integrated registry uses a Redis cache, you must clean the database manually. If redeploying the registry after pruning is not an option, then you must permanently disable the cache. oc adm prune images operations require a route for your registry. Registry routes are not created by default. The Prune images CLI configuration options table describes the options you can use with the oc adm prune images <options> command. Table 14.4. Prune images CLI configuration options Option Description --all Include images that were not pushed to the registry, but have been mirrored by pullthrough. This is on by default. To limit the pruning to images that were pushed to the integrated registry, pass --all=false . --certificate-authority The path to a certificate authority file to use when communicating with the OpenShift Container Platform-managed registries. Defaults to the certificate authority data from the current user's configuration file. If provided, a secure connection is initiated. --confirm Indicate that pruning should occur, instead of performing a test-run. This requires a valid route to the integrated container image registry. 
If this command is run outside of the cluster network, the route must be provided using --registry-url . --force-insecure Use caution with this option. Allow an insecure connection to the container registry that is hosted via HTTP or has an invalid HTTPS certificate. --keep-tag-revisions=<N> For each imagestream, keep up to at most N image revisions per tag (default 3 ). --keep-younger-than=<duration> Do not prune any image that is younger than <duration> relative to the current time. Alternately, do not prune any image that is referenced by any other object that is younger than <duration> relative to the current time (default 60m ). --prune-over-size-limit Prune each image that exceeds the smallest limit defined in the same project. This flag cannot be combined with --keep-tag-revisions nor --keep-younger-than . --registry-url The address to use when contacting the registry. The command attempts to use a cluster-internal URL determined from managed images and image streams. In case it fails (the registry cannot be resolved or reached), an alternative route that works needs to be provided using this flag. The registry hostname can be prefixed by https:// or http:// , which enforces particular connection protocol. --prune-registry In conjunction with the conditions stipulated by the other options, this option controls whether the data in the registry corresponding to the OpenShift Container Platform image API object is pruned. By default, image pruning processes both the image API objects and corresponding data in the registry. This option is useful when you are only concerned with removing etcd content, to reduce the number of image objects but are not concerned with cleaning up registry storage, or if you intend to do that separately by hard pruning the registry during an appropriate maintenance window for the registry. 14.6.1. Image prune conditions You can apply conditions to your manually pruned images. To remove any image managed by OpenShift Container Platform, or images with the annotation openshift.io/image.managed : Created at least --keep-younger-than minutes ago and are not currently referenced by any: Pods created less than --keep-younger-than minutes ago Image streams created less than --keep-younger-than minutes ago Running pods Pending pods Replication controllers Deployments Deployment configs Replica sets Build configurations Builds Jobs Cronjobs Stateful sets --keep-tag-revisions most recent items in stream.status.tags[].items That are exceeding the smallest limit defined in the same project and are not currently referenced by any: Running pods Pending pods Replication controllers Deployments Deployment configs Replica sets Build configurations Builds Jobs Cronjobs Stateful sets There is no support for pruning from external registries. When an image is pruned, all references to the image are removed from all image streams that have a reference to the image in status.tags . Image layers that are no longer referenced by any images are removed. Note The --prune-over-size-limit flag cannot be combined with the --keep-tag-revisions flag nor the --keep-younger-than flags. Doing so returns information that this operation is not allowed. Separating the removal of OpenShift Container Platform image API objects and image data from the registry by using --prune-registry=false , followed by hard pruning the registry, can narrow timing windows and is safer when compared to trying to prune both through one command. However, timing windows are not completely removed. 
For example, you can still create a pod referencing an image as pruning identifies that image for pruning. You should still keep track of an API object created during the pruning operations that might reference images so that you can mitigate any references to deleted content. Re-doing the pruning without the --prune-registry option or with --prune-registry=true does not lead to pruning the associated storage in the image registry for images previously pruned by --prune-registry=false . Any images that were pruned with --prune-registry=false can only be deleted from registry storage by hard pruning the registry. 14.6.2. Running the image prune operation Procedure To see what a pruning operation would delete: Keeping up to three tag revisions, and keeping resources (images, image streams, and pods) younger than 60 minutes: USD oc adm prune images --keep-tag-revisions=3 --keep-younger-than=60m Pruning every image that exceeds defined limits: USD oc adm prune images --prune-over-size-limit To perform the prune operation with the options from the step: USD oc adm prune images --keep-tag-revisions=3 --keep-younger-than=60m --confirm USD oc adm prune images --prune-over-size-limit --confirm 14.6.3. Using secure or insecure connections The secure connection is the preferred and recommended approach. It is done over HTTPS protocol with a mandatory certificate verification. The prune command always attempts to use it if possible. If it is not possible, in some cases it can fall-back to insecure connection, which is dangerous. In this case, either certificate verification is skipped or plain HTTP protocol is used. The fall-back to insecure connection is allowed in the following cases unless --certificate-authority is specified: The prune command is run with the --force-insecure option. The provided registry-url is prefixed with the http:// scheme. The provided registry-url is a local-link address or localhost . The configuration of the current user allows for an insecure connection. This can be caused by the user either logging in using --insecure-skip-tls-verify or choosing the insecure connection when prompted. Important If the registry is secured by a certificate authority different from the one used by OpenShift Container Platform, it must be specified using the --certificate-authority flag. Otherwise, the prune command fails with an error. 14.6.4. Image pruning problems Images not being pruned If your images keep accumulating and the prune command removes just a small portion of what you expect, ensure that you understand the image prune conditions that must apply for an image to be considered a candidate for pruning. Ensure that images you want removed occur at higher positions in each tag history than your chosen tag revisions threshold. For example, consider an old and obsolete image named sha256:abz . 
By running the following command in your namespace, where the image is tagged, the image is tagged three times in a single image stream named myapp : USD oc get is -n <namespace> -o go-template='{{range USDisi, USDis := .items}}{{range USDti, USDtag := USDis.status.tags}}'\ '{{range USDii, USDitem := USDtag.items}}{{if eq USDitem.image "sha256:<hash>"}}{{USDis.metadata.name}}:{{USDtag.tag}} at position {{USDii}} out of {{len USDtag.items}}\n'\ '{{end}}{{end}}{{end}}{{end}}' Example output myapp:v2 at position 4 out of 5 myapp:v2.1 at position 2 out of 2 myapp:v2.1-may-2016 at position 0 out of 1 When default options are used, the image is never pruned because it occurs at position 0 in a history of myapp:v2.1-may-2016 tag. For an image to be considered for pruning, the administrator must either: Specify --keep-tag-revisions=0 with the oc adm prune images command. Warning This action removes all the tags from all the namespaces with underlying images, unless they are younger or they are referenced by objects younger than the specified threshold. Delete all the istags where the position is below the revision threshold, which means myapp:v2.1 and myapp:v2.1-may-2016 . Move the image further in the history, either by running new builds pushing to the same istag , or by tagging other image. This is not always desirable for old release tags. Tags having a date or time of a particular image's build in their names should be avoided, unless the image must be preserved for an undefined amount of time. Such tags tend to have just one image in their history, which prevents them from ever being pruned. Using a secure connection against insecure registry If you see a message similar to the following in the output of the oc adm prune images command, then your registry is not secured and the oc adm prune images client attempts to use a secure connection: error: error communicating with registry: Get https://172.30.30.30:5000/healthz: http: server gave HTTP response to HTTPS client The recommended solution is to secure the registry. Otherwise, you can force the client to use an insecure connection by appending --force-insecure to the command; however, this is not recommended. Using an insecure connection against a secured registry If you see one of the following errors in the output of the oc adm prune images command, it means that your registry is secured using a certificate signed by a certificate authority other than the one used by oc adm prune images client for connection verification: error: error communicating with registry: Get http://172.30.30.30:5000/healthz: malformed HTTP response "\x15\x03\x01\x00\x02\x02" error: error communicating with registry: [Get https://172.30.30.30:5000/healthz: x509: certificate signed by unknown authority, Get http://172.30.30.30:5000/healthz: malformed HTTP response "\x15\x03\x01\x00\x02\x02"] By default, the certificate authority data stored in the user's configuration files is used; the same is true for communication with the master API. Use the --certificate-authority option to provide the right certificate authority for the container image registry server. Using the wrong certificate authority The following error means that the certificate authority used to sign the certificate of the secured container image registry is different from the authority used by the client: error: error communicating with registry: Get https://172.30.30.30:5000/: x509: certificate signed by unknown authority Make sure to provide the right one with the flag --certificate-authority . 
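For instance, a hedged example of supplying the correct CA bundle to the pruner, where the file path is a placeholder:

oc adm prune images --keep-tag-revisions=3 --keep-younger-than=60m --certificate-authority=/path/to/registry-ca.crt --confirm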
As a workaround, the --force-insecure flag can be added instead. However, this is not recommended. Additional resources Accessing the registry Exposing the registry See Image Registry Operator in OpenShift Container Platform for information on how to create a registry route. 14.7. Hard pruning the registry The OpenShift Container Registry can accumulate blobs that are not referenced by the OpenShift Container Platform cluster's etcd. The basic pruning images procedure, therefore, is unable to operate on them. These are called orphaned blobs . Orphaned blobs can occur from the following scenarios: Manually deleting an image with oc delete image <sha256:image-id> command, which only removes the image from etcd, but not from the registry's storage. Pushing to the registry initiated by daemon failures, which causes some blobs to get uploaded, but the image manifest (which is uploaded as the very last component) does not. All unique image blobs become orphans. OpenShift Container Platform refusing an image because of quota restrictions. The standard image pruner deleting an image manifest, but is interrupted before it deletes the related blobs. A bug in the registry pruner, which fails to remove the intended blobs, causing the image objects referencing them to be removed and the blobs becoming orphans. Hard pruning the registry, a separate procedure from basic image pruning, allows cluster administrators to remove orphaned blobs. You should hard prune if you are running out of storage space in your OpenShift Container Registry and believe you have orphaned blobs. This should be an infrequent operation and is necessary only when you have evidence that significant numbers of new orphans have been created. Otherwise, you can perform standard image pruning at regular intervals, for example, once a day (depending on the number of images being created). Procedure To hard prune orphaned blobs from the registry: Log in. Log in to the cluster with the CLI as kubeadmin or another privileged user that has access to the openshift-image-registry namespace. Run a basic image prune . Basic image pruning removes additional images that are no longer needed. The hard prune does not remove images on its own. It only removes blobs stored in the registry storage. Therefore, you should run this just before the hard prune. Switch the registry to read-only mode. If the registry is not running in read-only mode, any pushes happening at the same time as the prune will either: fail and cause new orphans, or succeed although the images cannot be pulled (because some of the referenced blobs were deleted). Pushes will not succeed until the registry is switched back to read-write mode. Therefore, the hard prune must be carefully scheduled. To switch the registry to read-only mode: In configs.imageregistry.operator.openshift.io/cluster , set spec.readOnly to true : USD oc patch configs.imageregistry.operator.openshift.io/cluster -p '{"spec":{"readOnly":true}}' --type=merge Add the system:image-pruner role. The service account used to run the registry instances requires additional permissions to list some resources. Get the service account name: USD service_account=USD(oc get -n openshift-image-registry \ -o jsonpath='{.spec.template.spec.serviceAccountName}' deploy/image-registry) Add the system:image-pruner cluster role to the service account: USD oc adm policy add-cluster-role-to-user \ system:image-pruner -z \ USD{service_account} -n openshift-image-registry Optional: Run the pruner in dry-run mode. 
To see how many blobs would be removed, run the hard pruner in dry-run mode. No changes are actually made. The following example references an image registry pod called image-registry-3-vhndw : USD oc -n openshift-image-registry exec pod/image-registry-3-vhndw -- /bin/sh -c '/usr/bin/dockerregistry -prune=check' Alternatively, to get the exact paths for the prune candidates, increase the logging level: USD oc -n openshift-image-registry exec pod/image-registry-3-vhndw -- /bin/sh -c 'REGISTRY_LOG_LEVEL=info /usr/bin/dockerregistry -prune=check' Example output time="2017-06-22T11:50:25.066156047Z" level=info msg="start prune (dry-run mode)" distribution_version="v2.4.1+unknown" kubernetes_version=v1.6.1+USDFormat:%hUSD openshift_version=unknown time="2017-06-22T11:50:25.092257421Z" level=info msg="Would delete blob: sha256:00043a2a5e384f6b59ab17e2c3d3a3d0a7de01b2cabeb606243e468acc663fa5" go.version=go1.7.5 instance.id=b097121c-a864-4e0c-ad6c-cc25f8fdf5a6 time="2017-06-22T11:50:25.092395621Z" level=info msg="Would delete blob: sha256:0022d49612807cb348cabc562c072ef34d756adfe0100a61952cbcb87ee6578a" go.version=go1.7.5 instance.id=b097121c-a864-4e0c-ad6c-cc25f8fdf5a6 time="2017-06-22T11:50:25.092492183Z" level=info msg="Would delete blob: sha256:0029dd4228961086707e53b881e25eba0564fa80033fbbb2e27847a28d16a37c" go.version=go1.7.5 instance.id=b097121c-a864-4e0c-ad6c-cc25f8fdf5a6 time="2017-06-22T11:50:26.673946639Z" level=info msg="Would delete blob: sha256:ff7664dfc213d6cc60fd5c5f5bb00a7bf4a687e18e1df12d349a1d07b2cf7663" go.version=go1.7.5 instance.id=b097121c-a864-4e0c-ad6c-cc25f8fdf5a6 time="2017-06-22T11:50:26.674024531Z" level=info msg="Would delete blob: sha256:ff7a933178ccd931f4b5f40f9f19a65be5eeeec207e4fad2a5bafd28afbef57e" go.version=go1.7.5 instance.id=b097121c-a864-4e0c-ad6c-cc25f8fdf5a6 time="2017-06-22T11:50:26.674675469Z" level=info msg="Would delete blob: sha256:ff9b8956794b426cc80bb49a604a0b24a1553aae96b930c6919a6675db3d5e06" go.version=go1.7.5 instance.id=b097121c-a864-4e0c-ad6c-cc25f8fdf5a6 ... Would delete 13374 blobs Would free up 2.835 GiB of disk space Use -prune=delete to actually delete the data Run the hard prune. Execute the following command inside one running instance of a image-registry pod to run the hard prune. The following example references an image registry pod called image-registry-3-vhndw : USD oc -n openshift-image-registry exec pod/image-registry-3-vhndw -- /bin/sh -c '/usr/bin/dockerregistry -prune=delete' Example output Deleted 13374 blobs Freed up 2.835 GiB of disk space Switch the registry back to read-write mode. After the prune is finished, the registry can be switched back to read-write mode. In configs.imageregistry.operator.openshift.io/cluster , set spec.readOnly to false : USD oc patch configs.imageregistry.operator.openshift.io/cluster -p '{"spec":{"readOnly":false}}' --type=merge 14.8. Pruning cron jobs Cron jobs can perform pruning of successful jobs, but might not properly handle failed jobs. Therefore, the cluster administrator should perform regular cleanup of jobs manually. They should also restrict the access to cron jobs to a small group of trusted users and set appropriate quota to prevent the cron job from creating too many jobs and pods. Additional resources Running tasks in pods using jobs Resource quotas across multiple projects Using RBAC to define and apply permissions | [
"oc adm prune <object_type> <options>",
"oc adm prune groups --sync-config=path/to/sync/config [<options>]",
"oc adm prune groups --sync-config=ldap-sync-config.yaml",
"oc adm prune groups --sync-config=ldap-sync-config.yaml --confirm",
"oc adm prune deployments [<options>]",
"oc adm prune deployments --orphans --keep-complete=5 --keep-failed=1 --keep-younger-than=60m",
"oc adm prune deployments --orphans --keep-complete=5 --keep-failed=1 --keep-younger-than=60m --confirm",
"oc adm prune builds [<options>]",
"oc adm prune builds --orphans --keep-complete=5 --keep-failed=1 --keep-younger-than=60m",
"oc adm prune builds --orphans --keep-complete=5 --keep-failed=1 --keep-younger-than=60m --confirm",
"spec: schedule: 0 0 * * * 1 suspend: false 2 keepTagRevisions: 3 3 keepYoungerThanDuration: 60m 4 keepYoungerThan: 3600000000000 5 resources: {} 6 affinity: {} 7 nodeSelector: {} 8 tolerations: [] 9 successfulJobsHistoryLimit: 3 10 failedJobsHistoryLimit: 3 11 status: observedGeneration: 2 12 conditions: 13 - type: Available status: \"True\" lastTransitionTime: 2019-10-09T03:13:45 reason: Ready message: \"Periodic image pruner has been created.\" - type: Scheduled status: \"True\" lastTransitionTime: 2019-10-09T03:13:45 reason: Scheduled message: \"Image pruner job has been scheduled.\" - type: Failed staus: \"False\" lastTransitionTime: 2019-10-09T03:13:45 reason: Succeeded message: \"Most recent image pruning job succeeded.\"",
"oc create -f <filename>.yaml",
"kind: List apiVersion: v1 items: - apiVersion: v1 kind: ServiceAccount metadata: name: pruner namespace: openshift-image-registry - apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRoleBinding metadata: name: openshift-image-registry-pruner roleRef: apiGroup: rbac.authorization.k8s.io kind: ClusterRole name: system:image-pruner subjects: - kind: ServiceAccount name: pruner namespace: openshift-image-registry - apiVersion: batch/v1 kind: CronJob metadata: name: image-pruner namespace: openshift-image-registry spec: schedule: \"0 0 * * *\" concurrencyPolicy: Forbid successfulJobsHistoryLimit: 1 failedJobsHistoryLimit: 3 jobTemplate: spec: template: spec: restartPolicy: OnFailure containers: - image: \"quay.io/openshift/origin-cli:4.1\" resources: requests: cpu: 1 memory: 1Gi terminationMessagePolicy: FallbackToLogsOnError command: - oc args: - adm - prune - images - --certificate-authority=/var/run/secrets/kubernetes.io/serviceaccount/service-ca.crt - --keep-tag-revisions=5 - --keep-younger-than=96h - --confirm=true name: image-pruner serviceAccountName: pruner",
"oc adm prune images [<options>]",
"oc rollout restart deployment/image-registry -n openshift-image-registry",
"oc adm prune images --keep-tag-revisions=3 --keep-younger-than=60m",
"oc adm prune images --prune-over-size-limit",
"oc adm prune images --keep-tag-revisions=3 --keep-younger-than=60m --confirm",
"oc adm prune images --prune-over-size-limit --confirm",
"oc get is -n <namespace> -o go-template='{{range USDisi, USDis := .items}}{{range USDti, USDtag := USDis.status.tags}}' '{{range USDii, USDitem := USDtag.items}}{{if eq USDitem.image \"sha256:<hash>\"}}{{USDis.metadata.name}}:{{USDtag.tag}} at position {{USDii}} out of {{len USDtag.items}}\\n' '{{end}}{{end}}{{end}}{{end}}'",
"myapp:v2 at position 4 out of 5 myapp:v2.1 at position 2 out of 2 myapp:v2.1-may-2016 at position 0 out of 1",
"error: error communicating with registry: Get https://172.30.30.30:5000/healthz: http: server gave HTTP response to HTTPS client",
"error: error communicating with registry: Get http://172.30.30.30:5000/healthz: malformed HTTP response \"\\x15\\x03\\x01\\x00\\x02\\x02\" error: error communicating with registry: [Get https://172.30.30.30:5000/healthz: x509: certificate signed by unknown authority, Get http://172.30.30.30:5000/healthz: malformed HTTP response \"\\x15\\x03\\x01\\x00\\x02\\x02\"]",
"error: error communicating with registry: Get https://172.30.30.30:5000/: x509: certificate signed by unknown authority",
"oc patch configs.imageregistry.operator.openshift.io/cluster -p '{\"spec\":{\"readOnly\":true}}' --type=merge",
"service_account=USD(oc get -n openshift-image-registry -o jsonpath='{.spec.template.spec.serviceAccountName}' deploy/image-registry)",
"oc adm policy add-cluster-role-to-user system:image-pruner -z USD{service_account} -n openshift-image-registry",
"oc -n openshift-image-registry exec pod/image-registry-3-vhndw -- /bin/sh -c '/usr/bin/dockerregistry -prune=check'",
"oc -n openshift-image-registry exec pod/image-registry-3-vhndw -- /bin/sh -c 'REGISTRY_LOG_LEVEL=info /usr/bin/dockerregistry -prune=check'",
"time=\"2017-06-22T11:50:25.066156047Z\" level=info msg=\"start prune (dry-run mode)\" distribution_version=\"v2.4.1+unknown\" kubernetes_version=v1.6.1+USDFormat:%hUSD openshift_version=unknown time=\"2017-06-22T11:50:25.092257421Z\" level=info msg=\"Would delete blob: sha256:00043a2a5e384f6b59ab17e2c3d3a3d0a7de01b2cabeb606243e468acc663fa5\" go.version=go1.7.5 instance.id=b097121c-a864-4e0c-ad6c-cc25f8fdf5a6 time=\"2017-06-22T11:50:25.092395621Z\" level=info msg=\"Would delete blob: sha256:0022d49612807cb348cabc562c072ef34d756adfe0100a61952cbcb87ee6578a\" go.version=go1.7.5 instance.id=b097121c-a864-4e0c-ad6c-cc25f8fdf5a6 time=\"2017-06-22T11:50:25.092492183Z\" level=info msg=\"Would delete blob: sha256:0029dd4228961086707e53b881e25eba0564fa80033fbbb2e27847a28d16a37c\" go.version=go1.7.5 instance.id=b097121c-a864-4e0c-ad6c-cc25f8fdf5a6 time=\"2017-06-22T11:50:26.673946639Z\" level=info msg=\"Would delete blob: sha256:ff7664dfc213d6cc60fd5c5f5bb00a7bf4a687e18e1df12d349a1d07b2cf7663\" go.version=go1.7.5 instance.id=b097121c-a864-4e0c-ad6c-cc25f8fdf5a6 time=\"2017-06-22T11:50:26.674024531Z\" level=info msg=\"Would delete blob: sha256:ff7a933178ccd931f4b5f40f9f19a65be5eeeec207e4fad2a5bafd28afbef57e\" go.version=go1.7.5 instance.id=b097121c-a864-4e0c-ad6c-cc25f8fdf5a6 time=\"2017-06-22T11:50:26.674675469Z\" level=info msg=\"Would delete blob: sha256:ff9b8956794b426cc80bb49a604a0b24a1553aae96b930c6919a6675db3d5e06\" go.version=go1.7.5 instance.id=b097121c-a864-4e0c-ad6c-cc25f8fdf5a6 Would delete 13374 blobs Would free up 2.835 GiB of disk space Use -prune=delete to actually delete the data",
"oc -n openshift-image-registry exec pod/image-registry-3-vhndw -- /bin/sh -c '/usr/bin/dockerregistry -prune=delete'",
"Deleted 13374 blobs Freed up 2.835 GiB of disk space",
"oc patch configs.imageregistry.operator.openshift.io/cluster -p '{\"spec\":{\"readOnly\":false}}' --type=merge"
]
| https://docs.redhat.com/en/documentation/openshift_container_platform/4.14/html/building_applications/pruning-objects |
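Returning to the note above on pruning cron jobs: a minimal sketch of manually cleaning up completed jobs, with an illustrative field selector and namespace, is:

oc delete jobs --field-selector status.successful=1 -n <namespace>

Adjust the selector and scope to match your own retention policy before running this with a privileged account.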
Chapter 2. Recommended specifications for your large Red Hat OpenStack deployment | Chapter 2. Recommended specifications for your large Red Hat OpenStack deployment You can use the provided recommendations to scale your large cluster deployment. The values in the following procedures are based on testing that the Red Hat OpenStack Platform Performance & Scale Team performed and can vary according to individual environments. For more information, see Scaling Red Hat OpenStack Platform 16.1 to more than 700 nodes . 2.1. Undercloud system requirements For best performance, install the undercloud node on a physical server. However, if you use a virtualized undercloud node, ensure that the virtual machine has enough resources similar to a physical machine described in the following table. Table 2.1. Recommended specifications for the undercloud node System requirement Description Counts 1 CPUs 32 cores, 64 threads Disk 500 GB root disk (1x SSD or 2x hard drives with 7200RPM; RAID 1) 500 GB disk for Object Storage (swift) (1x SSD or 2x hard drives with 7200RPM; RAID 1) Memory 256 GB Network 25 Gbps network interfaces or 10 Gbps network interfaces 2.2. Overcloud Controller nodes system requirements All control plane services must run on exactly 3 nodes. Typically, all control plane services are deployed across 3 Controller nodes. Scaling controller services To increase the resources available for controller services, you can scale these services to additional nodes. For example, you can deploy the db or messaging controller services on dedicated nodes to reduce the load on the Controller nodes. To scale controller services, use composable roles to define the set of services that you want to scale. When you use composable roles, each service must run on exactly 3 additional dedicated nodes and the total number of nodes in the control plane must be odd to maintain Pacemaker quorum. The control plane in this example consists of the following 9 nodes: 3 Controller nodes 3 Database nodes 3 Messaging nodes For more information, see Composable services and custom roles in Director Installation and Usage . For questions about scaling controller services with composable roles, contact Red Hat Global Professional Services. Storage considerations Include sufficient storage when you plan Controller nodes in your overcloud deployment. OpenStack Telemetry Metrics (gnocchi) and OpenStack Image service (glance) services are I/O intensive. Use Ceph Storage and the Image service for telemetry because the overcloud moves the I/O load to the Ceph OSD servers. If your deployment does not include Ceph storage, use a dedicated disk or node for Object Storage (swift) that Telemetry Metrics (gnocchi) and Image (glance) services can use. If you use Object Storage on Controller nodes, use an NVMe device separate from the root disk to reduce disk utilization during object data storage. Extensive concurrent operations to the Block Storage service (cinder) that upload volumes to the Image Storage service (glance) as images puts considerable IO load on the controller disk. You can use SSD disks to provide a higher throughput. CPU considerations The number of API calls, AMQP messages, and database queries that the Controller nodes receive influences the CPU memory consumption on the Controller nodes. The ability of each Red Hat OpenStack Platform (RHOSP) component to concurrently process and perform tasks is also limited by the number of worker threads that are configured for each of the individual RHOSP components. 
The number of worker threads for components that RHOSP director configures on a Controller is limited by the CPU count. The following specifications are recommended for large scale environments with more than 700 nodes when you use Ceph Storage nodes in your deployment: Table 2.2. Recommended specifications for Controller nodes when you use Ceph Storage nodes System requirement Description Counts 3 Controller nodes with controller services contained within the Controller role. Optionally, to scale controller services on dedicated nodes, use composable services. For more information, see Composable services and customer roles in the Director installation and usage guide. CPUs 2 sockets each with 32 cores, 64 threads Disk 500 GB root disk (1x SSD or 2x hard drives with 7200RPM; RAID 1) 500GB dedicated disk for Swift (1x SSD or 1x NVMe) Memory 384 GB Network 25 Gbps network interfaces or 10 Gbps network interfaces. If you use 10 Gbps network interfaces, use network bonding to create two bonds: Provisioning (bond0 - mode4); Internal API (bond0 - mode4); Project (bond0 - mode4) Storage (bond1 - mode4); Storage management (bond1 - mode4) The following specifications are recommended for large scale environments with more than 700 nodes when you do not use Ceph Storage nodes in your deployment: Table 2.3. Recommended specifications for Controller nodes when you do not use Ceph Storage nodes System requirement Description Counts 3 Controller nodes with controller services contained within the Controller role. Optionally, to scale controller services on dedicated nodes, use composable services. For more information, see Composable services and customer roles in the Director installation and usage guide. CPUs 2 sockets each with 32 cores, 64 threads Disk 500GB root disk (1x SSD ) 500GB dedicated disk for Swift (1x SSD or 1x NVMe) Memory 384 GB Network 25 Gbps network interfaces or 10 Gbps network interfaces. If you use 10 Gbps network interfaces, use network bonding to create two bonds: Provisioning (bond0 - mode4); Internal API (bond0 - mode4); Project (bond0 - mode4) Storage (bond1 - mode4); Storage management (bond1 - mode4) 2.3. Overcloud Compute nodes system requirements When you plan your overcloud deployment, review the recommended system requirements for Compute nodes. Table 2.4. Recommended specifications for Compute nodes System requirement Description Counts Red Hat has tested a scale of 700 nodes with various composable compute roles. CPUs 2 sockets each with 12 cores, 24 threads Disk 500 GB root disk (1x SSD or 2x hard drives with 7200RPM; RAID 1) 500 GB disk for Image service (glance) image cache (1x SSD or 2x hard drives with 7200RPM; RAID 1) Memory 128 GB (64 GB per NUMA node); 2 GB is reserved for the host out by default. With Distributed Virtual Routing, increase the reserved RAM to 5 GB. Network 25 Gbps network interfaces or 10 Gbps network interfaces. If you use 10 Gbps network interfaces, use network bonding to create two bonds: Provisioning (bond0 - mode4); Internal API (bond0 - mode4); Project (bond0 - mode4) Storage (bond1 - mode4) 2.4. Red Hat Ceph Storage nodes system requirements When you plan your overcloud deployment, review the following recommended system requirements for Ceph storage nodes. Table 2.5. Recommended specifications for Ceph Storage nodes System requirement Description Counts You must have a minimum of 5 nodes with three-way replication. With all-flash configuration, you must have a minimum of 3 nodes with two-way replication. 
CPUs 1 Intel Broadwell CPU core per OSD to support storage I/O requirements. If you are using a light I/O workload, you might not need Ceph to run at the speed of your block devices. For example, for some NFV applications, Ceph supplies data durability, high availability, and low latency but throughput is not a target, so it is acceptable to supply less CPU power. Memory Ensure that you have 5 GB RAM per OSD. This is required for caching OSD data and metadata to optimize performance, not just for the OSD process memory. For hyper-converged infrastructure (HCI) environments, calculate the required memory in conjunction with the Compute node specifications. Network Ensure that the network capacity in MB/s is higher than the total MB/s capacity of the Ceph devices to support workloads that use a large I/O transfer size. Use a cluster network to lower write latency by shifting inter-OSD traffic onto a separate set of physical network ports. To do this in Red Hat OpenStack Platform, configure separate VLANs for networks and assign the VLANs to separate physical network interfaces. Disk Use Solid-State Drive (SSD) disks for the bluestore block.db partition to reduce I/O contention on hard disk drives (HDD), which increases the speed of write IOPS, but SSDs have zero effect on read input/output operations per second. If you use SATA/SAS SSD journals, you typically need a ratio of SSD:HDD of 1:5. If you use NVM SSD journals, you can typically use a SSD:HDD ratio of 1:10 or 1:15 if the workload is read-mostly. However, if this ratio is too high, the SSD journal device failure can affect the OSDs. For more information about hardware prerequisites for Ceph nodes, see General principles for selecting hardware in the Red Hat Storage 4 Hardware Guide . For more information about deployment configuration for Ceph nodes, see Deploying Red Hat Ceph Storage and Red Hat OpenStack Platform together with director . For more information about changing the storage replication number, see Pools, placement groups, and CRUSH Configuration Reference in the Red Hat Ceph Storage Configuration Guide . | null | https://docs.redhat.com/en/documentation/red_hat_openstack_platform/17.0/html/recommendations_for_large_deployments/assembly-recommended-specifications-for-your-large-openstack-deployment_recommendations-large-deployments |
Chapter 8. Networking | Chapter 8. Networking Over time, Red Hat Enterprise Linux's network stack has been upgraded with numerous automated optimization features. For most workloads, the auto-configured network settings provide optimized performance. In most cases, networking performance problems are actually caused by a malfunction in hardware or faulty infrastructure. Such causes are beyond the scope of this document; the performance issues and solutions discussed in this chapter are useful in optimizing perfectly functional systems. Networking is a delicate subsystem, containing different parts with sensitive connections. This is why the open source community and Red Hat invest much work in implementing ways to automatically optimize network performance. As such, given most workloads, you may never even need to reconfigure networking for performance. 8.1. Network Performance Enhancements Red Hat Enterprise Linux 6.1 provided the following network performance enhancements: Receive Packet Steering (RPS) RPS enables a single NIC rx queue to have its receive softirq workload distributed among several CPUs. This helps prevent network traffic from being bottlenecked on a single NIC hardware queue. To enable RPS, specify the target CPU names in /sys/class/net/ ethX /queues/ rx-N /rps_cpus , replacing ethX with the NIC's corresponding device name (for example, eth1 , eth2 ) and rx-N with the specified NIC receive queue. This will allow the specified CPUs in the file to process data from queue rx-N on ethX . When specifying CPUs, consider the queue's cache affinity [4] . Receive Flow Steering RFS is an extension of RPS, allowing the administrator to configure a hash table that is populated automatically when applications receive data and are interrogated by the network stack. This determines which applications are receiving each piece of network data (based on source:destination network information). Using this information, the network stack can schedule the most optimal CPU to receive each packet. To configure RFS, use the following tunables: /proc/sys/net/core/rps_sock_flow_entries This controls the maximum number of sockets/flows that the kernel can steer towards any specified CPU. This is a system-wide, shared limit. /sys/class/net/ ethX /queues/ rx-N /rps_flow_cnt This controls the maximum number of sockets/flows that the kernel can steer for a specified receive queue ( rx-N ) on a NIC ( ethX ). Note that sum of all per-queue values for this tunable on all NICs should be equal or less than that of /proc/sys/net/core/rps_sock_flow_entries . Unlike RPS, RFS allows both the receive queue and the application to share the same CPU when processing packet flows. This can result in improved performance in some cases. However, such improvements are dependent on factors such as cache hierarchy, application load, and the like. getsockopt support for TCP thin-streams Thin-stream is a term used to characterize transport protocols wherein applications send data at such a low rate that the protocol's retransmission mechanisms are not fully saturated. Applications that use thin-stream protocols typically transport via reliable protocols like TCP; in most cases, such applications provide very time-sensitive services (for example, stock trading, online gaming, control systems). For time-sensitive services, packet loss can be devastating to service quality. 
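As a rough illustration of the RPS and RFS settings described above, the following writes enable them for a hypothetical NIC; the interface name, receive queue, CPU mask (here, a hexadecimal bitmask selecting CPUs 0-3), and table sizes are placeholders that must be adapted to your hardware and workload:

echo f > /sys/class/net/eth1/queues/rx-0/rps_cpus
echo 32768 > /proc/sys/net/core/rps_sock_flow_entries
echo 2048 > /sys/class/net/eth1/queues/rx-0/rps_flow_cnt

Steering settings such as these do not change how TCP itself reacts when a thin stream loses a packet and has to wait for a retransmission.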
To help prevent this, the getsockopt call has been enhanced to support two extra options: TCP_THIN_DUPACK This Boolean enables dynamic triggering of retransmissions after one dupACK for thin streams. TCP_THIN_LINEAR_TIMEOUTS This Boolean enables dynamic triggering of linear timeouts for thin streams. Both options are specifically activated by the application. For more information about these options, refer to file:///usr/share/doc/kernel-doc- version /Documentation/networking/ip-sysctl.txt . For more information about thin-streams, refer to file:///usr/share/doc/kernel-doc- version /Documentation/networking/tcp-thin.txt . Transparent Proxy (TProxy) support The kernel can now handle non-locally bound IPv4 TCP and UDP sockets to support transparent proxies. To enable this, you will need to configure iptables accordingly. You will also need to enable and configure policy routing properly. For more information about transparent proxies, refer to file:///usr/share/doc/kernel-doc- version /Documentation/networking/tproxy.txt . [4] Ensuring cache affinity between a CPU and a NIC means configuring them to share the same L2 cache. For more information, refer to Section 8.3, "Overview of Packet Reception" . | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/performance_tuning_guide/main-network |
Chapter 6. Fixed issues | Chapter 6. Fixed issues For a complete list of issues that have been fixed in this release, see AMQ Broker 7.10.0 Fixed Issues. For a list of issues that have been fixed in patch releases, see AMQ Broker - 7.10.x Resolved Issues. | null | https://docs.redhat.com/en/documentation/red_hat_amq_broker/7.10/html/release_notes_for_red_hat_amq_broker_7.10/resolved |
Chapter 149. KafkaNodePoolTemplate schema reference | Chapter 149. KafkaNodePoolTemplate schema reference Used in: KafkaNodePoolSpec Property Property type Description podSet ResourceTemplate Template for Kafka StrimziPodSet resource. pod PodTemplate Template for Kafka Pods . perPodService ResourceTemplate Template for Kafka per-pod Services used for access from outside of OpenShift. perPodRoute ResourceTemplate Template for Kafka per-pod Routes used for access from outside of OpenShift. perPodIngress ResourceTemplate Template for Kafka per-pod Ingress used for access from outside of OpenShift. persistentVolumeClaim ResourceTemplate Template for all Kafka PersistentVolumeClaims . kafkaContainer ContainerTemplate Template for the Kafka broker container. initContainer ContainerTemplate Template for the Kafka init container. | null | https://docs.redhat.com/en/documentation/red_hat_streams_for_apache_kafka/2.7/html/streams_for_apache_kafka_api_reference/type-KafkaNodePoolTemplate-reference |
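A minimal, hypothetical sketch of how the pod and kafkaContainer templates from the schema above might be set on a KafkaNodePool resource; all names, labels, and values are illustrative only:

cat <<EOF | oc apply -f -
apiVersion: kafka.strimzi.io/v1beta2
kind: KafkaNodePool
metadata:
  name: pool-a
  labels:
    strimzi.io/cluster: my-cluster
spec:
  replicas: 3
  roles:
    - broker
  storage:
    type: jbod
    volumes:
      - id: 0
        type: persistent-claim
        size: 100Gi
        deleteClaim: false
  template:
    pod:
      metadata:
        labels:
          app.kubernetes.io/part-of: my-kafka   # PodTemplate metadata, as described above
    kafkaContainer:
      env:
        - name: EXAMPLE_ENV                     # ContainerTemplate environment variable (illustrative)
          value: "example"
EOF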
Chapter 31. General Updates | Chapter 31. General Updates The TAB key does not expand $PWD by default When working in the CLI in Red Hat Enterprise Linux 6, pressing the TAB key expanded $PWD/ to the current directory. In Red Hat Enterprise Linux 7, the CLI does not behave the same way. Users can achieve this behavior by putting the following lines into the $HOME/.bash_profile file: Upgrading from Red Hat Enterprise Linux 6 may fail on IBM Power Systems Because of a bug in the yaboot boot loader, upgrading from Red Hat Enterprise Linux 6 to Red Hat Enterprise Linux 7 can fail on IBM Power Systems servers with an Unknown or corrupt filesystem error. This problem is typically caused by a misplaced yaboot.conf configuration file. Make sure that this file exists, that it is valid, and that it is placed on a standard (non-LVM) /boot partition. The /etc/os-release file contains outdated information after system upgrade Upgrading to the next minor release (for example, from Red Hat Enterprise Linux 7.1 to 7.2) does not update the /etc/os-release file with the new product number. Instead, this file continues to list the previous release number, and a new file named os-release.rpmnew is placed in the /etc directory. If you require the /etc/os-release file to be up-to-date, replace it with /etc/os-release.rpmnew . | [
"if ((BASH_VERSINFO[0] >= 4)) && ((BASH_VERSINFO[1] >= 2)); then shopt -s direxpand fi"
]
| https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/7.2_release_notes/known-issues-general_updates |
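As a one-line sketch of the replacement mentioned above, run as root after confirming that the .rpmnew file contains the expected release information:

mv /etc/os-release.rpmnew /etc/os-release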
Validation and troubleshooting | Validation and troubleshooting OpenShift Container Platform 4.12 Validating and troubleshooting an OpenShift Container Platform installation Red Hat OpenShift Documentation Team | null | https://docs.redhat.com/en/documentation/openshift_container_platform/4.12/html/validation_and_troubleshooting/index |
20.2. Using volume_key as an Individual User | 20.2. Using volume_key as an Individual User As an individual user, volume_key can be used to save encryption keys by using the following procedure. Note For all examples in this file, /path/to/volume is a LUKS device, not the plaintext device contained within. blkid -s type /path/to/volume should report type ="crypto_LUKS" . Procedure 20.1. Using volume_key Stand-alone Run: A prompt will then appear requiring an escrow packet passphrase to protect the key. Save the generated escrow-packet file, ensuring that the passphrase is not forgotten. If the volume passphrase is forgotten, use the saved escrow packet to restore access to the data. Procedure 20.2. Restore Access to Data with Escrow Packet Boot the system in an environment where volume_key can be run and the escrow packet is available (a rescue mode, for example). Run: A prompt will appear for the escrow packet passphrase that was used when creating the escrow packet, and for the new passphrase for the volume. Mount the volume using the chosen passphrase. To free up the passphrase slot in the LUKS header of the encrypted volume, remove the old, forgotten passphrase by using the command cryptsetup luksKillSlot . | [
"volume_key --save /path/to/volume -o escrow-packet",
"volume_key --restore /path/to/volume escrow-packet"
]
| https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/storage_administration_guide/volume_key-individual-user |
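For the final step above, a hedged example of freeing a passphrase slot follows; the slot number is illustrative, so confirm which slots are in use with cryptsetup luksDump first:

cryptsetup luksDump /path/to/volume
cryptsetup luksKillSlot /path/to/volume 0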
Chapter 11. PreprovisioningImage [metal3.io/v1alpha1] | Chapter 11. PreprovisioningImage [metal3.io/v1alpha1] Description PreprovisioningImage is the Schema for the preprovisioningimages API. Type object 11.1. Specification Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ObjectMeta Standard object's metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata spec object PreprovisioningImageSpec defines the desired state of PreprovisioningImage. status object PreprovisioningImageStatus defines the observed state of PreprovisioningImage. 11.1.1. .spec Description PreprovisioningImageSpec defines the desired state of PreprovisioningImage. Type object Property Type Description acceptFormats array (string) acceptFormats is a list of acceptable image formats. architecture string architecture is the processor architecture for which to build the image. networkDataName string networkDataName is the name of a Secret in the local namespace that contains network data to build in to the image. 11.1.2. .status Description PreprovisioningImageStatus defines the observed state of PreprovisioningImage. Type object Property Type Description architecture string architecture is the processor architecture for which the image is built conditions array conditions describe the state of the built image conditions[] object Condition contains details for one aspect of the current state of this API Resource. extraKernelParams string extraKernelParams is a string with extra parameters to pass to the kernel when booting the image over network. Only makes sense for initrd images. format string format is the type of image that is available at the download url: either iso or initrd. imageUrl string imageUrl is the URL from which the built image can be downloaded. kernelUrl string kernelUrl is the URL from which the kernel of the image can be downloaded. Only makes sense for initrd images. networkData object networkData is a reference to the version of the Secret containing the network data used to build the image. 11.1.3. .status.conditions Description conditions describe the state of the built image Type array 11.1.4. .status.conditions[] Description Condition contains details for one aspect of the current state of this API Resource. Type object Required lastTransitionTime message reason status type Property Type Description lastTransitionTime string lastTransitionTime is the last time the condition transitioned from one status to another. This should be when the underlying condition changed. If that is not known, then using the time when the API field changed is acceptable. message string message is a human readable message indicating details about the transition. This may be an empty string. observedGeneration integer observedGeneration represents the .metadata.generation that the condition was set based upon. 
For instance, if .metadata.generation is currently 12, but the .status.conditions[x].observedGeneration is 9, the condition is out of date with respect to the current state of the instance. reason string reason contains a programmatic identifier indicating the reason for the condition's last transition. Producers of specific condition types may define expected values and meanings for this field, and whether the values are considered a guaranteed API. The value should be a CamelCase string. This field may not be empty. status string status of the condition, one of True, False, Unknown. type string type of condition in CamelCase or in foo.example.com/CamelCase. 11.1.5. .status.networkData Description networkData is a reference to the version of the Secret containing the network data used to build the image. Type object Property Type Description name string version string 11.2. API endpoints The following API endpoints are available: /apis/metal3.io/v1alpha1/preprovisioningimages GET : list objects of kind PreprovisioningImage /apis/metal3.io/v1alpha1/namespaces/{namespace}/preprovisioningimages DELETE : delete collection of PreprovisioningImage GET : list objects of kind PreprovisioningImage POST : create a PreprovisioningImage /apis/metal3.io/v1alpha1/namespaces/{namespace}/preprovisioningimages/{name} DELETE : delete a PreprovisioningImage GET : read the specified PreprovisioningImage PATCH : partially update the specified PreprovisioningImage PUT : replace the specified PreprovisioningImage /apis/metal3.io/v1alpha1/namespaces/{namespace}/preprovisioningimages/{name}/status GET : read status of the specified PreprovisioningImage PATCH : partially update status of the specified PreprovisioningImage PUT : replace status of the specified PreprovisioningImage 11.2.1. /apis/metal3.io/v1alpha1/preprovisioningimages HTTP method GET Description list objects of kind PreprovisioningImage Table 11.1. HTTP responses HTTP code Reponse body 200 - OK PreprovisioningImageList schema 401 - Unauthorized Empty 11.2.2. /apis/metal3.io/v1alpha1/namespaces/{namespace}/preprovisioningimages HTTP method DELETE Description delete collection of PreprovisioningImage Table 11.2. HTTP responses HTTP code Reponse body 200 - OK Status schema 401 - Unauthorized Empty HTTP method GET Description list objects of kind PreprovisioningImage Table 11.3. HTTP responses HTTP code Reponse body 200 - OK PreprovisioningImageList schema 401 - Unauthorized Empty HTTP method POST Description create a PreprovisioningImage Table 11.4. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. 
This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 11.5. Body parameters Parameter Type Description body PreprovisioningImage schema Table 11.6. HTTP responses HTTP code Reponse body 200 - OK PreprovisioningImage schema 201 - Created PreprovisioningImage schema 202 - Accepted PreprovisioningImage schema 401 - Unauthorized Empty 11.2.3. /apis/metal3.io/v1alpha1/namespaces/{namespace}/preprovisioningimages/{name} Table 11.7. Global path parameters Parameter Type Description name string name of the PreprovisioningImage HTTP method DELETE Description delete a PreprovisioningImage Table 11.8. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed Table 11.9. HTTP responses HTTP code Reponse body 200 - OK Status schema 202 - Accepted Status schema 401 - Unauthorized Empty HTTP method GET Description read the specified PreprovisioningImage Table 11.10. HTTP responses HTTP code Reponse body 200 - OK PreprovisioningImage schema 401 - Unauthorized Empty HTTP method PATCH Description partially update the specified PreprovisioningImage Table 11.11. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 11.12. HTTP responses HTTP code Reponse body 200 - OK PreprovisioningImage schema 401 - Unauthorized Empty HTTP method PUT Description replace the specified PreprovisioningImage Table 11.13. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. 
Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 11.14. Body parameters Parameter Type Description body PreprovisioningImage schema Table 11.15. HTTP responses HTTP code Reponse body 200 - OK PreprovisioningImage schema 201 - Created PreprovisioningImage schema 401 - Unauthorized Empty 11.2.4. /apis/metal3.io/v1alpha1/namespaces/{namespace}/preprovisioningimages/{name}/status Table 11.16. Global path parameters Parameter Type Description name string name of the PreprovisioningImage HTTP method GET Description read status of the specified PreprovisioningImage Table 11.17. HTTP responses HTTP code Reponse body 200 - OK PreprovisioningImage schema 401 - Unauthorized Empty HTTP method PATCH Description partially update status of the specified PreprovisioningImage Table 11.18. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 11.19. HTTP responses HTTP code Reponse body 200 - OK PreprovisioningImage schema 401 - Unauthorized Empty HTTP method PUT Description replace status of the specified PreprovisioningImage Table 11.20. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. 
Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 11.21. Body parameters Parameter Type Description body PreprovisioningImage schema Table 11.22. HTTP responses HTTP code Reponse body 200 - OK PreprovisioningImage schema 201 - Created PreprovisioningImage schema 401 - Unauthorized Empty | null | https://docs.redhat.com/en/documentation/openshift_container_platform/4.18/html/provisioning_apis/preprovisioningimage-metal3-io-v1alpha1 |
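As a hedged sketch, the list and read endpoints above map onto oc commands such as the following; the namespace is only an example of where these resources commonly reside:

oc get preprovisioningimages -n openshift-machine-api
oc get preprovisioningimage <name> -n openshift-machine-api -o yaml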
24.4. Parent Tags | 24.4. Parent Tags 24.4.1. Parent Tags An API user assigns a parent element to a tag to create a hierarchical link to a parent tag. The tags are presented as a flat collection, which descends from the root tag, with tag representations containing a link element to a parent tag. Note The root tag is a special pseudo-tag assumed as the default parent tag if no parent tag is specified. The root tag cannot be deleted nor assigned a parent tag. This tag hierarchy is expressed in the following way: Example 24.5. Tag Hierarchy In this XML representation, the tags follow this hierarchy: 24.4.2. Setting a Parent Tag POSTing a new tag with a parent element creates an association with a parent tag, using either the id attribute or the name element to reference the parent tag. Example 24.6. Setting an association with a parent tag with the id attribute Example 24.7. Setting an association with a parent tag with the name element 24.4.3. Changing a Parent Tag To change a tag's parent, send a PUT request: Example 24.8. Changing the parent tag | [
"<tags> <tag id=\"-1\" href=\"/ovirt-engine/api/tags/-1\"> <name>root</name> <description>root</description> <parent> <tag id=\"-1\" href=\"/ovirt-engine/api/tags/-1\"/> </parent> </tag> <tag id=\"f436ebfc-67f2-41bd-8ec6-902b6f7dcb5e\" href=\"/ovirt-engine/api/tags/f436ebfc-67f2-41bd-8ec6-902b6f7dcb5e\"> <name>Finance</name> <description>Resources for the Finance department</description> <parent> <tag id=\"-1\" href=\"/ovirt-engine/api/tags/-1\"/> </parent> </tag> <tag id=\"ac18dabf-23e5-12be-a383-a38b165ca7bd\" href=\"/ovirt-engine/api/tags/ac18dabf-23e5-12be-a383-a38b165ca7bd\"> <name>Billing</name> <description>Billing Resources</description> <parent> <tag id=\"f436ebfc-67f2-41bd-8ec6-902b6f7dcb5e\" href=\"/ovirt-engine/api/tags/f436ebfc-67f2-41bd-8ec6-902b6f7dcb5e\"/> </parent> </tag> </tags>",
"root (id: -1) - Finance (id: f436ebfc-67f2-41bd-8ec6-902b6f7dcb5e) - Billing (id: ac18dabf-23e5-12be-a383-a38b165ca7bd)",
"POST /ovirt-engine/api/vms/5114bb3e-a4e6-44b2-b783-b3eea7d84720/tags HTTP/1.1 Accept: application/xml Content-Type: application/xml HTTP/1.1 200 OK Content-Type: application/xml <tag> <name>Billing</name> <description>Billing Resources</description> <parent> <tag id=\"f436ebfc-67f2-41bd-8ec6-902b6f7dcb5e\"/> </parent> </tag>",
"POST /ovirt-engine/api/vms/5114bb3e-a4e6-44b2-b783-b3eea7d84720/tags HTTP/1.1 Accept: application/xml Content-Type: application/xml HTTP/1.1 200 OK Content-Type: application/xml <tag> <name>Billing</name> <description>Billing Resources</description> <parent> <tag> <name>Finance</name> </tag> </parent> </tag>",
"PUT /ovirt-engine/api/tags/ac18dabf-23e5-12be-a383-a38b165ca7bd HTTP/1.1 Accept: application/xml Content-Type: application/xml <tag> <parent> <tag id=\"f436ebfc-67f2-41bd-8ec6-902b6f7dcb5e\"/> </parent> </tag>"
]
| https://docs.redhat.com/en/documentation/red_hat_virtualization/4.3/html/version_3_rest_api_guide/sect-parent_tags |
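The raw HTTP examples above can also be issued with curl; the credentials, hostname, CA path, and tag IDs below are placeholders taken from or modeled on the examples in this section:

curl -X PUT \
  -H "Content-Type: application/xml" \
  --cacert /path/to/ca.pem \
  -u admin@internal:password \
  -d '<tag><parent><tag id="f436ebfc-67f2-41bd-8ec6-902b6f7dcb5e"/></parent></tag>' \
  https://rhvm.example.com/ovirt-engine/api/tags/ac18dabf-23e5-12be-a383-a38b165ca7bd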
Chapter 9. AWS SQS Sink | Chapter 9. AWS SQS Sink Send message to an AWS SQS Queue 9.1. Configuration Options The following table summarizes the configuration options available for the aws-sqs-sink Kamelet: Property Name Description Type Default Example accessKey * Access Key The access key obtained from AWS string queueNameOrArn * Queue Name The SQS Queue name or ARN string region * AWS Region The AWS region to connect to string "eu-west-1" secretKey * Secret Key The secret key obtained from AWS string autoCreateQueue Autocreate Queue Setting the autocreation of the SQS queue. boolean false Note Fields marked with an asterisk (*) are mandatory. 9.2. Dependencies At runtime, the aws-sqs-sink Kamelet relies upon the presence of the following dependencies: camel:aws2-sqs camel:core camel:kamelet 9.3. Usage This section describes how you can use the aws-sqs-sink . 9.3.1. Knative Sink You can use the aws-sqs-sink Kamelet as a Knative sink by binding it to a Knative object. aws-sqs-sink-binding.yaml apiVersion: camel.apache.org/v1alpha1 kind: KameletBinding metadata: name: aws-sqs-sink-binding spec: source: ref: kind: Channel apiVersion: messaging.knative.dev/v1 name: mychannel sink: ref: kind: Kamelet apiVersion: camel.apache.org/v1alpha1 name: aws-sqs-sink properties: accessKey: "The Access Key" queueNameOrArn: "The Queue Name" region: "eu-west-1" secretKey: "The Secret Key" 9.3.1.1. Prerequisite Make sure you have "Red Hat Integration - Camel K" installed into the OpenShift cluster you're connected to. 9.3.1.2. Procedure for using the cluster CLI Save the aws-sqs-sink-binding.yaml file to your local drive, and then edit it as needed for your configuration. Run the sink by using the following command: oc apply -f aws-sqs-sink-binding.yaml 9.3.1.3. Procedure for using the Kamel CLI Configure and run the sink by using the following command: kamel bind channel:mychannel aws-sqs-sink -p "sink.accessKey=The Access Key" -p "sink.queueNameOrArn=The Queue Name" -p "sink.region=eu-west-1" -p "sink.secretKey=The Secret Key" This command creates the KameletBinding in the current namespace on the cluster. 9.3.2. Kafka Sink You can use the aws-sqs-sink Kamelet as a Kafka sink by binding it to a Kafka topic. aws-sqs-sink-binding.yaml apiVersion: camel.apache.org/v1alpha1 kind: KameletBinding metadata: name: aws-sqs-sink-binding spec: source: ref: kind: KafkaTopic apiVersion: kafka.strimzi.io/v1beta1 name: my-topic sink: ref: kind: Kamelet apiVersion: camel.apache.org/v1alpha1 name: aws-sqs-sink properties: accessKey: "The Access Key" queueNameOrArn: "The Queue Name" region: "eu-west-1" secretKey: "The Secret Key" 9.3.2.1. Prerequisites Ensure that you've installed the AMQ Streams operator in your OpenShift cluster and created a topic named my-topic in the current namespace. Make also sure you have "Red Hat Integration - Camel K" installed into the OpenShift cluster you're connected to. 9.3.2.2. Procedure for using the cluster CLI Save the aws-sqs-sink-binding.yaml file to your local drive, and then edit it as needed for your configuration. Run the sink by using the following command: oc apply -f aws-sqs-sink-binding.yaml 9.3.2.3. Procedure for using the Kamel CLI Configure and run the sink by using the following command: kamel bind kafka.strimzi.io/v1beta1:KafkaTopic:my-topic aws-sqs-sink -p "sink.accessKey=The Access Key" -p "sink.queueNameOrArn=The Queue Name" -p "sink.region=eu-west-1" -p "sink.secretKey=The Secret Key" This command creates the KameletBinding in the current namespace on the cluster. 
9.4. Kamelet source file https://github.com/openshift-integration/kamelet-catalog/aws-sqs-sink.kamelet.yaml | [
"apiVersion: camel.apache.org/v1alpha1 kind: KameletBinding metadata: name: aws-sqs-sink-binding spec: source: ref: kind: Channel apiVersion: messaging.knative.dev/v1 name: mychannel sink: ref: kind: Kamelet apiVersion: camel.apache.org/v1alpha1 name: aws-sqs-sink properties: accessKey: \"The Access Key\" queueNameOrArn: \"The Queue Name\" region: \"eu-west-1\" secretKey: \"The Secret Key\"",
"apply -f aws-sqs-sink-binding.yaml",
"kamel bind channel:mychannel aws-sqs-sink -p \"sink.accessKey=The Access Key\" -p \"sink.queueNameOrArn=The Queue Name\" -p \"sink.region=eu-west-1\" -p \"sink.secretKey=The Secret Key\"",
"apiVersion: camel.apache.org/v1alpha1 kind: KameletBinding metadata: name: aws-sqs-sink-binding spec: source: ref: kind: KafkaTopic apiVersion: kafka.strimzi.io/v1beta1 name: my-topic sink: ref: kind: Kamelet apiVersion: camel.apache.org/v1alpha1 name: aws-sqs-sink properties: accessKey: \"The Access Key\" queueNameOrArn: \"The Queue Name\" region: \"eu-west-1\" secretKey: \"The Secret Key\"",
"apply -f aws-sqs-sink-binding.yaml",
"kamel bind kafka.strimzi.io/v1beta1:KafkaTopic:my-topic aws-sqs-sink -p \"sink.accessKey=The Access Key\" -p \"sink.queueNameOrArn=The Queue Name\" -p \"sink.region=eu-west-1\" -p \"sink.secretKey=The Secret Key\""
]
| https://docs.redhat.com/en/documentation/red_hat_build_of_apache_camel_k/1.10.7/html/kamelets_reference/aws-sqs-sink |
Chapter 17. Network [config.openshift.io/v1] | Chapter 17. Network [config.openshift.io/v1] Description Network holds cluster-wide information about Network. The canonical name is cluster . It is used to configure the desired network configuration, such as: IP address pools for services/pod IPs, network plugin, etc. Please view network.spec for an explanation on what applies when configuring this resource. Compatibility level 1: Stable within a major release for a minimum of 12 months or 3 minor releases (whichever is longer). Type object Required spec 17.1. Specification Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ObjectMeta Standard object's metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata spec object spec holds user settable values for configuration. As a general rule, this SHOULD NOT be read directly. Instead, you should consume the NetworkStatus, as it indicates the currently deployed configuration. Currently, most spec fields are immutable after installation. Please view the individual ones for further details on each. status object status holds observed values from the cluster. They may not be overridden. 17.1.1. .spec Description spec holds user settable values for configuration. As a general rule, this SHOULD NOT be read directly. Instead, you should consume the NetworkStatus, as it indicates the currently deployed configuration. Currently, most spec fields are immutable after installation. Please view the individual ones for further details on each. Type object Property Type Description clusterNetwork array IP address pool to use for pod IPs. This field is immutable after installation. clusterNetwork[] object ClusterNetworkEntry is a contiguous block of IP addresses from which pod IPs are allocated. externalIP object externalIP defines configuration for controllers that affect Service.ExternalIP. If nil, then ExternalIP is not allowed to be set. networkDiagnostics object networkDiagnostics defines network diagnostics configuration. Takes precedence over spec.disableNetworkDiagnostics in network.operator.openshift.io. If networkDiagnostics is not specified or is empty, and the spec.disableNetworkDiagnostics flag in network.operator.openshift.io is set to true, the network diagnostics feature will be disabled. networkType string NetworkType is the plugin that is to be deployed (e.g. OVNKubernetes). This should match a value that the cluster-network-operator understands, or else no networking will be installed. Currently supported values are: - OVNKubernetes This field is immutable after installation. serviceNetwork array (string) IP address pool for services. Currently, we only support a single entry here. This field is immutable after installation. serviceNodePortRange string The port range allowed for Services of type NodePort. If not specified, the default of 30000-32767 will be used. 
Such Services without a NodePort specified will have one automatically allocated from this range. This parameter can be updated after the cluster is installed. 17.1.2. .spec.clusterNetwork Description IP address pool to use for pod IPs. This field is immutable after installation. Type array 17.1.3. .spec.clusterNetwork[] Description ClusterNetworkEntry is a contiguous block of IP addresses from which pod IPs are allocated. Type object Property Type Description cidr string The complete block for pod IPs. hostPrefix integer The size (prefix) of block to allocate to each node. If this field is not used by the plugin, it can be left unset. 17.1.4. .spec.externalIP Description externalIP defines configuration for controllers that affect Service.ExternalIP. If nil, then ExternalIP is not allowed to be set. Type object Property Type Description autoAssignCIDRs array (string) autoAssignCIDRs is a list of CIDRs from which to automatically assign Service.ExternalIP. These are assigned when the service is of type LoadBalancer. In general, this is only useful for bare-metal clusters. In Openshift 3.x, this was misleadingly called "IngressIPs". Automatically assigned External IPs are not affected by any ExternalIPPolicy rules. Currently, only one entry may be provided. policy object policy is a set of restrictions applied to the ExternalIP field. If nil or empty, then ExternalIP is not allowed to be set. 17.1.5. .spec.externalIP.policy Description policy is a set of restrictions applied to the ExternalIP field. If nil or empty, then ExternalIP is not allowed to be set. Type object Property Type Description allowedCIDRs array (string) allowedCIDRs is the list of allowed CIDRs. rejectedCIDRs array (string) rejectedCIDRs is the list of disallowed CIDRs. These take precedence over allowedCIDRs. 17.1.6. .spec.networkDiagnostics Description networkDiagnostics defines network diagnostics configuration. Takes precedence over spec.disableNetworkDiagnostics in network.operator.openshift.io. If networkDiagnostics is not specified or is empty, and the spec.disableNetworkDiagnostics flag in network.operator.openshift.io is set to true, the network diagnostics feature will be disabled. Type object Property Type Description mode string mode controls the network diagnostics mode When omitted, this means the user has no opinion and the platform is left to choose reasonable defaults. These defaults are subject to change over time. The current default is All. sourcePlacement object sourcePlacement controls the scheduling of network diagnostics source deployment See NetworkDiagnosticsSourcePlacement for more details about default values. targetPlacement object targetPlacement controls the scheduling of network diagnostics target daemonset See NetworkDiagnosticsTargetPlacement for more details about default values. 17.1.7. .spec.networkDiagnostics.sourcePlacement Description sourcePlacement controls the scheduling of network diagnostics source deployment See NetworkDiagnosticsSourcePlacement for more details about default values. Type object Property Type Description nodeSelector object (string) nodeSelector is the node selector applied to network diagnostics components When omitted, this means the user has no opinion and the platform is left to choose reasonable defaults. These defaults are subject to change over time. The current default is kubernetes.io/os: linux . 
tolerations array tolerations is a list of tolerations applied to network diagnostics components When omitted, this means the user has no opinion and the platform is left to choose reasonable defaults. These defaults are subject to change over time. The current default is an empty list. tolerations[] object The pod this Toleration is attached to tolerates any taint that matches the triple <key,value,effect> using the matching operator <operator>. 17.1.8. .spec.networkDiagnostics.sourcePlacement.tolerations Description tolerations is a list of tolerations applied to network diagnostics components When omitted, this means the user has no opinion and the platform is left to choose reasonable defaults. These defaults are subject to change over time. The current default is an empty list. Type array 17.1.9. .spec.networkDiagnostics.sourcePlacement.tolerations[] Description The pod this Toleration is attached to tolerates any taint that matches the triple <key,value,effect> using the matching operator <operator>. Type object Property Type Description effect string Effect indicates the taint effect to match. Empty means match all taint effects. When specified, allowed values are NoSchedule, PreferNoSchedule and NoExecute. key string Key is the taint key that the toleration applies to. Empty means match all taint keys. If the key is empty, operator must be Exists; this combination means to match all values and all keys. operator string Operator represents a key's relationship to the value. Valid operators are Exists and Equal. Defaults to Equal. Exists is equivalent to wildcard for value, so that a pod can tolerate all taints of a particular category. tolerationSeconds integer TolerationSeconds represents the period of time the toleration (which must be of effect NoExecute, otherwise this field is ignored) tolerates the taint. By default, it is not set, which means tolerate the taint forever (do not evict). Zero and negative values will be treated as 0 (evict immediately) by the system. value string Value is the taint value the toleration matches to. If the operator is Exists, the value should be empty, otherwise just a regular string. 17.1.10. .spec.networkDiagnostics.targetPlacement Description targetPlacement controls the scheduling of network diagnostics target daemonset See NetworkDiagnosticsTargetPlacement for more details about default values. Type object Property Type Description nodeSelector object (string) nodeSelector is the node selector applied to network diagnostics components When omitted, this means the user has no opinion and the platform is left to choose reasonable defaults. These defaults are subject to change over time. The current default is kubernetes.io/os: linux . tolerations array tolerations is a list of tolerations applied to network diagnostics components When omitted, this means the user has no opinion and the platform is left to choose reasonable defaults. These defaults are subject to change over time. The current default is - operator: "Exists" which means that all taints are tolerated. tolerations[] object The pod this Toleration is attached to tolerates any taint that matches the triple <key,value,effect> using the matching operator <operator>. 17.1.11. .spec.networkDiagnostics.targetPlacement.tolerations Description tolerations is a list of tolerations applied to network diagnostics components When omitted, this means the user has no opinion and the platform is left to choose reasonable defaults. These defaults are subject to change over time. 
The current default is - operator: "Exists" which means that all taints are tolerated. Type array 17.1.12. .spec.networkDiagnostics.targetPlacement.tolerations[] Description The pod this Toleration is attached to tolerates any taint that matches the triple <key,value,effect> using the matching operator <operator>. Type object Property Type Description effect string Effect indicates the taint effect to match. Empty means match all taint effects. When specified, allowed values are NoSchedule, PreferNoSchedule and NoExecute. key string Key is the taint key that the toleration applies to. Empty means match all taint keys. If the key is empty, operator must be Exists; this combination means to match all values and all keys. operator string Operator represents a key's relationship to the value. Valid operators are Exists and Equal. Defaults to Equal. Exists is equivalent to wildcard for value, so that a pod can tolerate all taints of a particular category. tolerationSeconds integer TolerationSeconds represents the period of time the toleration (which must be of effect NoExecute, otherwise this field is ignored) tolerates the taint. By default, it is not set, which means tolerate the taint forever (do not evict). Zero and negative values will be treated as 0 (evict immediately) by the system. value string Value is the taint value the toleration matches to. If the operator is Exists, the value should be empty, otherwise just a regular string. 17.1.13. .status Description status holds observed values from the cluster. They may not be overridden. Type object Property Type Description clusterNetwork array IP address pool to use for pod IPs. clusterNetwork[] object ClusterNetworkEntry is a contiguous block of IP addresses from which pod IPs are allocated. clusterNetworkMTU integer ClusterNetworkMTU is the MTU for inter-pod networking. conditions array conditions represents the observations of a network.config current state. Known .status.conditions.type are: "NetworkDiagnosticsAvailable" conditions[] object Condition contains details for one aspect of the current state of this API Resource. --- This struct is intended for direct use as an array at the field path .status.conditions. For example, type FooStatus struct{ // Represents the observations of a foo's current state. // Known .status.conditions.type are: "Available", "Progressing", and "Degraded" // +patchMergeKey=type // +patchStrategy=merge // +listType=map // +listMapKey=type Conditions []metav1.Condition json:"conditions,omitempty" patchStrategy:"merge" patchMergeKey:"type" protobuf:"bytes,1,rep,name=conditions" // other fields } migration object Migration contains the cluster network migration configuration. networkType string NetworkType is the plugin that is deployed (e.g. OVNKubernetes). serviceNetwork array (string) IP address pool for services. Currently, we only support a single entry here. 17.1.14. .status.clusterNetwork Description IP address pool to use for pod IPs. Type array 17.1.15. .status.clusterNetwork[] Description ClusterNetworkEntry is a contiguous block of IP addresses from which pod IPs are allocated. Type object Property Type Description cidr string The complete block for pod IPs. hostPrefix integer The size (prefix) of block to allocate to each node. If this field is not used by the plugin, it can be left unset. 17.1.16. .status.conditions Description conditions represents the observations of a network.config current state. Known .status.conditions.type are: "NetworkDiagnosticsAvailable" Type array 17.1.17. 
.status.conditions[] Description Condition contains details for one aspect of the current state of this API Resource. --- This struct is intended for direct use as an array at the field path .status.conditions. For example, type FooStatus struct{ // Represents the observations of a foo's current state. // Known .status.conditions.type are: "Available", "Progressing", and "Degraded" // +patchMergeKey=type // +patchStrategy=merge // +listType=map // +listMapKey=type Conditions []metav1.Condition json:"conditions,omitempty" patchStrategy:"merge" patchMergeKey:"type" protobuf:"bytes,1,rep,name=conditions" // other fields } Type object Required lastTransitionTime message reason status type Property Type Description lastTransitionTime string lastTransitionTime is the last time the condition transitioned from one status to another. This should be when the underlying condition changed. If that is not known, then using the time when the API field changed is acceptable. message string message is a human readable message indicating details about the transition. This may be an empty string. observedGeneration integer observedGeneration represents the .metadata.generation that the condition was set based upon. For instance, if .metadata.generation is currently 12, but the .status.conditions[x].observedGeneration is 9, the condition is out of date with respect to the current state of the instance. reason string reason contains a programmatic identifier indicating the reason for the condition's last transition. Producers of specific condition types may define expected values and meanings for this field, and whether the values are considered a guaranteed API. The value should be a CamelCase string. This field may not be empty. status string status of the condition, one of True, False, Unknown. type string type of condition in CamelCase or in foo.example.com/CamelCase. --- Many .condition.type values are consistent across resources like Available, but because arbitrary conditions can be useful (see .node.status.conditions), the ability to deconflict is important. The regex it matches is (dns1123SubdomainFmt/)?(qualifiedNameFmt) 17.1.18. .status.migration Description Migration contains the cluster network migration configuration. Type object Property Type Description mtu object MTU is the MTU configuration that is being deployed. networkType string NetworkType is the target plugin that is being deployed. DEPRECATED: network type migration is no longer supported, so this should always be unset. 17.1.19. .status.migration.mtu Description MTU is the MTU configuration that is being deployed. Type object Property Type Description machine object Machine contains MTU migration configuration for the machine's uplink. network object Network contains MTU migration configuration for the default network. 17.1.20. .status.migration.mtu.machine Description Machine contains MTU migration configuration for the machine's uplink. Type object Property Type Description from integer From is the MTU to migrate from. to integer To is the MTU to migrate to. 17.1.21. .status.migration.mtu.network Description Network contains MTU migration configuration for the default network. Type object Property Type Description from integer From is the MTU to migrate from. to integer To is the MTU to migrate to. 17.2. 
API endpoints The following API endpoints are available: /apis/config.openshift.io/v1/networks DELETE : delete collection of Network GET : list objects of kind Network POST : create a Network /apis/config.openshift.io/v1/networks/{name} DELETE : delete a Network GET : read the specified Network PATCH : partially update the specified Network PUT : replace the specified Network 17.2.1. /apis/config.openshift.io/v1/networks HTTP method DELETE Description delete collection of Network Table 17.1. HTTP responses HTTP code Reponse body 200 - OK Status schema 401 - Unauthorized Empty HTTP method GET Description list objects of kind Network Table 17.2. HTTP responses HTTP code Reponse body 200 - OK NetworkList schema 401 - Unauthorized Empty HTTP method POST Description create a Network Table 17.3. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 17.4. Body parameters Parameter Type Description body Network schema Table 17.5. HTTP responses HTTP code Reponse body 200 - OK Network schema 201 - Created Network schema 202 - Accepted Network schema 401 - Unauthorized Empty 17.2.2. /apis/config.openshift.io/v1/networks/{name} Table 17.6. Global path parameters Parameter Type Description name string name of the Network HTTP method DELETE Description delete a Network Table 17.7. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed Table 17.8. HTTP responses HTTP code Reponse body 200 - OK Status schema 202 - Accepted Status schema 401 - Unauthorized Empty HTTP method GET Description read the specified Network Table 17.9. HTTP responses HTTP code Reponse body 200 - OK Network schema 401 - Unauthorized Empty HTTP method PATCH Description partially update the specified Network Table 17.10. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. 
Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 17.11. HTTP responses HTTP code Reponse body 200 - OK Network schema 401 - Unauthorized Empty HTTP method PUT Description replace the specified Network Table 17.12. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 17.13. Body parameters Parameter Type Description body Network schema Table 17.14. HTTP responses HTTP code Reponse body 200 - OK Network schema 201 - Created Network schema 401 - Unauthorized Empty | null | https://docs.redhat.com/en/documentation/openshift_container_platform/4.17/html/config_apis/network-config-openshift-io-v1 |
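As a hedged illustration of how the spec fields documented above are modified in practice, the following sketch patches the cluster-scoped Network resource named cluster to set an ExternalIP policy; the CIDR value is an arbitrary example, not a recommendation.
# Illustrative sketch: allow Service.ExternalIP values only from an example CIDR.
# The CIDR below is a placeholder; spec.externalIP.policy.allowedCIDRs is described above.
oc patch network.config.openshift.io cluster --type=merge \
  -p '{"spec":{"externalIP":{"policy":{"allowedCIDRs":["192.168.100.0/24"]}}}}'

# Inspect the resource afterwards, including the observed status fields.
oc get network.config.openshift.io cluster -o yaml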
9.4. What Are dconf Profiles? | 9.4. What Are dconf Profiles? A profile is a list of the system's hardware and software configuration databases that the dconf system collects. dconf profiles allow you to compare identical systems to troubleshoot hardware or software problems. The dconf system stores its profiles in text files. The USDDCONF_PROFILE environment variable can specify a relative path to the file from the /etc/dconf/profile/ directory, or an absolute path, such as in a user's home directory. Key pairs that are set in a dconf profile override the default settings unless there is a problem with the value that you have set. 9.4.1. Selecting a dconf Profile On startup, dconf checks whether the USDDCONF_PROFILE environment variable is set. If it is set, dconf attempts to open the named profile and aborts if this step fails. If the environment variable is not set, dconf attempts to open the profile named user . If this step also fails, dconf falls back to an internal hard-wired configuration. Each line in a profile specifies one dconf database. The first line indicates the database used to write changes, whereas the remaining lines specify read-only databases. The following is a sample profile stored in /etc/dconf/profile/user : This sample profile specifies three databases: user is the name of the user database, which can normally be found in ~/.config/dconf , and local and site are system databases, located in /etc/dconf/db/ . Important The dconf profile for a session is determined at login, so users must log out and log in to apply a new dconf user profile to their session. | [
"user-db: user system-db: local system-db: site"
]
| https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/desktop_migration_and_administration_guide/profiles |
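A minimal sketch of creating the profile described above as root; the local and site database names are the same example names used in the sample profile.
# Write the sample profile shown above to /etc/dconf/profile/user (run as root).
cat > /etc/dconf/profile/user <<'EOF'
user-db: user
system-db: local
system-db: site
EOF

# Users must log out and log back in before the new profile takes effect for their session.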
Chapter 2. Installing Datadog for Ceph integration | Chapter 2. Installing Datadog for Ceph integration After installing the Datadog agent, configure the Datadog agent to report Ceph metrics to Datadog. Prerequisites Root-level access to the Ceph monitor node. Appropriate Ceph key providing access to the Red Hat Ceph Storage cluster. Internet access. Procedure Install the Ceph integration. Log in to the Datadog App . The user interface will present navigation on the left side of the screen. Click Integrations . Either enter ceph into the search field or scroll to find the Ceph integration. The user interface shows whether the Ceph integration is available or already installed . If it is available , click the button to install it. Configuring the Datadog agent for Ceph Navigate to the Datadog Agent configuration directory: Create a ceph.yaml file from the ceph.yaml.example file: Modify the ceph.yaml file: Example The following is a sample of what the modified ceph.yaml file looks like. Uncomment the -tags , -name , ceph_cmd , ceph_cluster , and use_sudo: True lines. The default values for ceph_cmd and ceph_cluster are /usr/bin/ceph and ceph respectively. When complete, it will look like this: Modify the sudoers file: Add the following line: Enable the Datadog agent so that it will restart if the Ceph host reboots: Restart the Datadog agent: | [
"cd /etc/dd-agent/conf.d",
"cp ceph.yaml.example ceph.yaml",
"vim ceph.yaml",
"init_config: instances: - tags: - name:mars_cluster # ceph_cmd: /usr/bin/ceph ceph_cluster: ceph # If your environment requires sudo, please add a line like: dd-agent ALL=(ALL) NOPASSWD:/usr/bin/ceph to your sudoers file, and uncomment the below option. # use_sudo: True",
"init_config: instances: - tags: - name:ceph-RHEL # ceph_cmd: /usr/bin/ceph ceph_cluster: ceph # If your environment requires sudo, please add a line like: dd-agent ALL=(ALL) NOPASSWD:/usr/bin/ceph to your sudoers file, and uncomment the below option. # use_sudo: True",
"visudo",
"dd-agent ALL=(ALL) NOPASSWD:/usr/bin/ceph",
"systemctl enable datadog-agent",
"systemctl status datadog-agent"
]
| https://docs.redhat.com/en/documentation/red_hat_ceph_storage/4/html/monitoring_ceph_with_datadog_guide/installing-datadog-for-ceph-integration_datadog |
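Because the sample above still shows the ceph_cmd and use_sudo lines commented out, the following is an illustrative sketch of what the finished file can look like once those lines are uncommented, written here with a here-document; the indentation follows normal YAML conventions and the tag name is only an example.
# Illustrative final ceph.yaml in the Datadog Agent conf.d directory; values mirror the sample above.
cat > /etc/dd-agent/conf.d/ceph.yaml <<'EOF'
init_config:

instances:
  - tags:
      - name:ceph-RHEL
    ceph_cmd: /usr/bin/ceph
    ceph_cluster: ceph
    use_sudo: True
EOF

# Restart the agent so the new configuration is picked up.
systemctl restart datadog-agent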
Part I. Programmable APIs | Part I. Programmable APIs Red Hat JBoss Data Grid provides the following programmable APIs: Cache Batching Grouping Persistence (formerly CacheStore) ConfigurationBuilder Externalizable Notification (also known as the Listener API because it deals with Notifications and Listeners) | null | https://docs.redhat.com/en/documentation/red_hat_data_grid/6.6/html/developer_guide/part-programmable_apis
Chapter 10. Monitoring project and application metrics using the Developer perspective | Chapter 10. Monitoring project and application metrics using the Developer perspective The Observe view in the Developer perspective provides options to monitor your project or application metrics, such as CPU, memory, and bandwidth usage, and network related information. 10.1. Prerequisites You have created and deployed applications on OpenShift Container Platform . You have logged in to the web console and have switched to the Developer perspective . 10.2. Monitoring your project metrics After you create applications in your project and deploy them, you can use the Developer perspective in the web console to see the metrics for your project. Procedure On the left navigation panel of the Developer perspective, click Observe to see the Dashboard , Metrics , Alerts , and Events for your project. Optional: Use the Dashboard tab to see graphs depicting the following application metrics: CPU usage Memory usage Bandwidth consumption Network-related information such as the rate of transmitted and received packets and the rate of dropped packets. In the Dashboard tab, you can access the Kubernetes compute resources dashboards. Figure 10.1. Observe dashboard Note In the Dashboard list, Kubernetes / Compute Resources / Namespace (Pods) dashboard is selected by default. Use the following options to see further details: Select a dashboard from the Dashboard list to see the filtered metrics. All dashboards produce additional sub-menus when selected, except Kubernetes / Compute Resources / Namespace (Pods) . Select an option from the Time Range list to determine the time frame for the data being captured. Set a custom time range by selecting Custom time range from the Time Range list. You can input or select the From and To dates and times. Click Save to save the custom time range. Select an option from the Refresh Interval list to determine the time period after which the data is refreshed. Hover your cursor over the graphs to see specific details for your pod. Click Inspect located in the upper-right corner of every graph to see any particular graph details. The graph details appear in the Metrics tab. Optional: Use the Metrics tab to query for the required project metric. Figure 10.2. Monitoring metrics In the Select Query list, select an option to filter the required details for your project. The filtered metrics for all the application pods in your project are displayed in the graph. The pods in your project are also listed below. From the list of pods, clear the colored square boxes to remove the metrics for specific pods to further filter your query result. Click Show PromQL to see the Prometheus query. You can further modify this query with the help of prompts to customize the query and filter the metrics you want to see for that namespace. Use the drop-down list to set a time range for the data being displayed. You can click Reset Zoom to reset it to the default time range. Optional: In the Select Query list, select Custom Query to create a custom Prometheus query and filter relevant metrics. Optional: Use the Alerts tab to do the following tasks: See the rules that trigger alerts for the applications in your project. Identify the alerts firing in the project. Silence such alerts if required. Figure 10.3. Monitoring alerts Use the following options to see further details: Use the Filter list to filter the alerts by their Alert State and Severity . Click on an alert to go to the details page for that alert. 
In the Alerts Details page, you can click View Metrics to see the metrics for the alert. Use the Notifications toggle adjoining an alert rule to silence all the alerts for that rule, and then select the duration for which the alerts will be silenced from the Silence for list. You must have the permissions to edit alerts to see the Notifications toggle. Use the Options menu adjoining an alert rule to see the details of the alerting rule. Optional: Use the Events tab to see the events for your project. Figure 10.4. Monitoring events You can filter the displayed events using the following options: In the Resources list, select a resource to see events for that resource. In the All Types list, select a type of event to see events relevant to that type. Search for specific events using the Filter events by names or messages field. 10.3. Monitoring your application metrics After you create applications in your project and deploy them, you can use the Topology view in the Developer perspective to see the alerts and metrics for your application. Critical and warning alerts for your application are indicated on the workload node in the Topology view. Procedure To see the alerts for your workload: In the Topology view, click the workload to see the workload details in the right panel. Click the Observe tab to see the critical and warning alerts for the application; graphs for metrics, such as CPU, memory, and bandwidth usage; and all the events for the application. Note Only critical and warning alerts in the Firing state are displayed in the Topology view. Alerts in the Silenced , Pending and Not Firing states are not displayed. Figure 10.5. Monitoring application metrics Click the alert listed in the right panel to see the alert details in the Alert Details page. Click any of the charts to go to the Metrics tab to see the detailed metrics for the application. Click View monitoring dashboard to see the monitoring dashboard for that application. 10.4. Additional resources Monitoring overview | null | https://docs.redhat.com/en/documentation/openshift_container_platform/4.9/html/building_applications/odc-monitoring-project-and-application-metrics-using-developer-perspective |
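Although the procedures above use the web console, the same project events can also be listed from the CLI; the project name below is a placeholder.
# List recent events for an example project, newest last (my-project is a placeholder).
oc get events -n my-project --sort-by=.lastTimestamp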
Chapter 2. Initializing InstructLab | Chapter 2. Initializing InstructLab You must initialize the InstructLab environments to begin working with the Red Hat Enterprise Linux AI models. 2.1. Creating your RHEL AI environment You can start interacting with LLMs and the RHEL AI tooling by initializing the InstructLab environment. Important System profiles for AMD machines is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope . Prerequisites You installed RHEL AI with the bootable container image. You have root user access on your machine. Procedure Optional: You can view your machine's information by running the following command: USD ilab system info Initialize InstructLab by running the following command: USD ilab config init The RHEL AI CLI starts setting up your environment and config.yaml file. The CLI automatically detects your machine's hardware and selects a system profile based on the GPU types. System profiles populate the config.yaml file with the proper parameter values based on your detected hardware. Example output of profile auto-detection Generating config file and profiles: /home/user/.config/instructlab/config.yaml /home/user/.local/share/instructlab/internal/system_profiles/ We have detected the NVIDIA H100 X4 profile as an exact match for your system. -------------------------------------------- Initialization completed successfully! You're ready to start using `ilab`. Enjoy! -------------------------------------------- If the CLI does not detect an exact match for your system, you can manually select a system profile when prompted. Select your hardware vendor and configuration that matches your system. Example output of selecting system profiles Please choose a system profile to use. System profiles apply to all parts of the config file and set hardware specific defaults for each command. First, please select the hardware vendor your system falls into [0] NO SYSTEM PROFILE [1] NVIDIA Enter the number of your choice [0]: 4 You selected: NVIDIA , please select the specific hardware configuration that most closely matches your system. [0] No system profile [1] NVIDIA H100 X2 [2] NVIDIA H100 X8 [3] NVIDIA H100 X4 [4] NVIDIA L4 X8 [5] NVIDIA A100 X2 [6] NVIDIA A100 X8 [7] NVIDIA A100 X4 [8] NVIDIA L40S X4 [9] NVIDIA L40S X8 Enter the number of your choice [hit enter for hardware defaults] [0]: 3 Example output of a completed ilab config init run. You selected: /Users/<user>/.local/share/instructlab/internal/system_profiles/nvidia/H100/h100_x4.yaml -------------------------------------------- Initialization completed successfully! You're ready to start using `ilab`. Enjoy! 
-------------------------------------------- If you want to use the skeleton taxonomy tree, which includes two skills and one knowledge qna.yaml file, you can clone the skeleton repository and place it in the taxonomy directory by running the following command: rm -rf ~/.local/share/instructlab/taxonomy/ ; git clone https://github.com/RedHatOfficial/rhelai-sample-taxonomy.git ~/.local/share/instructlab/taxonomy/ If the incorrect system profile is auto-detected, you can run the following command: USD ilab config init --profile <path-to-system-profile> where <path-to-system-profile> Specify the path to the correct system profile. You can find the system profiles in the ~/.local/share/instructlab/internal/system_profiles path. Example profile selection command USD ilab config init --profile ~/.local/share/instructlab/internal/system_profiles/amd/mi300x/mi300x_x8.yaml Directory structure of the InstructLab environment 1 ~/.config/instructlab/config.yaml : Contains the config.yaml file. 2 ~/.cache/instructlab/models/ : Contains all downloaded large language models, including the saved output of ones you generate with RHEL AI. 3 ~/.local/share/instructlab/datasets/ : Contains data output from the SDG phase, built on modifications to the taxonomy repository. 4 ~/.local/share/instructlab/taxonomy/ : Contains the skill and knowledge data. 5 ~/.local/share/instructlab/phased/<phase1-or-phase2>/checkpoints/ : Contains the output of the multi-phase training process Verification You can view the full config.yaml file by running the following command USD ilab config show You can also manually edit the config.yaml file by running the following command: USD ilab config edit | [
"ilab system info",
"ilab config init",
"Generating config file and profiles: /home/user/.config/instructlab/config.yaml /home/user/.local/share/instructlab/internal/system_profiles/ We have detected the NVIDIA H100 X4 profile as an exact match for your system. -------------------------------------------- Initialization completed successfully! You're ready to start using `ilab`. Enjoy! --------------------------------------------",
"Please choose a system profile to use. System profiles apply to all parts of the config file and set hardware specific defaults for each command. First, please select the hardware vendor your system falls into [0] NO SYSTEM PROFILE [1] NVIDIA Enter the number of your choice [0]: 4 You selected: NVIDIA Next, please select the specific hardware configuration that most closely matches your system. [0] No system profile [1] NVIDIA H100 X2 [2] NVIDIA H100 X8 [3] NVIDIA H100 X4 [4] NVIDIA L4 X8 [5] NVIDIA A100 X2 [6] NVIDIA A100 X8 [7] NVIDIA A100 X4 [8] NVIDIA L40S X4 [9] NVIDIA L40S X8 Enter the number of your choice [hit enter for hardware defaults] [0]: 3",
"You selected: /Users/<user>/.local/share/instructlab/internal/system_profiles/nvidia/H100/h100_x4.yaml -------------------------------------------- Initialization completed successfully! You're ready to start using `ilab`. Enjoy! --------------------------------------------",
"rm -rf ~/.local/share/instructlab/taxonomy/ ; git clone https://github.com/RedHatOfficial/rhelai-sample-taxonomy.git ~/.local/share/instructlab/taxonomy/",
"ilab config init --profile <path-to-system-profile>",
"ilab config init --profile ~/.local/share/instructlab/internal/system_profiles/amd/mi300x/mi300x_x8.yaml",
"├─ ~/.config/instructlab/config.yaml 1 ├─ ~/.cache/instructlab/models/ 2 ├─ ~/.local/share/instructlab/datasets 3 ├─ ~/.local/share/instructlab/taxonomy 4 ├─ ~/.local/share/instructlab/phased/<phase1-or-phase2>/checkpoints/ 5",
"ilab config show",
"ilab config edit"
]
| https://docs.redhat.com/en/documentation/red_hat_enterprise_linux_ai/1.4/html/building_and_maintaining_your_rhel_ai_environment/initializing_instructlab |
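After initialization, a quick way to confirm that the directories listed above were created is simply to list them; this is an illustrative check, not part of the documented procedure.
# Confirm the generated configuration file and data directories exist.
ls ~/.config/instructlab/config.yaml
ls ~/.local/share/instructlab/taxonomy ~/.local/share/instructlab/datasets ~/.cache/instructlab/models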
Chapter 6. opm CLI | Chapter 6. opm CLI 6.1. Installing the opm CLI 6.1.1. About the opm CLI The opm CLI tool is provided by the Operator Framework for use with the Operator bundle format. This tool allows you to create and maintain catalogs of Operators from a list of Operator bundles that are similar to software repositories. The result is a container image which can be stored in a container registry and then installed on a cluster. A catalog contains a database of pointers to Operator manifest content that can be queried through an included API that is served when the container image is run. On OpenShift Container Platform, Operator Lifecycle Manager (OLM) can reference the image in a catalog source, defined by a CatalogSource object, which polls the image at regular intervals to enable frequent updates to installed Operators on the cluster. Additional resources See Operator Framework packaging format for more information about the bundle format. To create a bundle image using the Operator SDK, see Working with bundle images . 6.1.2. Installing the opm CLI You can install the opm CLI tool on your Linux, macOS, or Windows workstation. Prerequisites For Linux, you must provide the following packages. RHEL 8 meets these requirements: podman version 1.9.3+ (version 2.0+ recommended) glibc version 2.28+ Procedure Navigate to the OpenShift mirror site and download the latest version of the tarball that matches your operating system. Unpack the archive. For Linux or macOS: USD tar xvf <file> For Windows, unzip the archive with a ZIP program. Place the file anywhere in your PATH . For Linux or macOS: Check your PATH : USD echo USDPATH Move the file. For example: USD sudo mv ./opm /usr/local/bin/ For Windows: Check your PATH : C:\> path Move the file: C:\> move opm.exe <directory> Verification After you install the opm CLI, verify that it is available: USD opm version Example output Version: version.Version{OpmVersion:"v1.18.0", GitCommit:"32eb2591437e394bdc58a58371c5cd1e6fe5e63f", BuildDate:"2021-09-21T10:41:00Z", GoOs:"linux", GoArch:"amd64"} 6.1.3. Additional resources See Managing custom catalogs for opm procedures including creating, updating, and pruning catalogs. 6.2. opm CLI reference The opm command-line interface (CLI) is a tool for creating and maintaining Operator catalogs. opm CLI syntax USD opm <command> [<subcommand>] [<argument>] [<flags>] Table 6.1. Global flags Flag Description --skip-tls Skip TLS certificate verification for container image registries while pulling bundles or indexes. Important The SQLite-based catalog format, including the related CLI commands, is a deprecated feature. Deprecated functionality is still included in OpenShift Container Platform and continues to be supported; however, it will be removed in a future release of this product and is not recommended for new deployments. For the most recent list of major functionality that has been deprecated or removed within OpenShift Container Platform, refer to the Deprecated and removed features section of the OpenShift Container Platform release notes. 6.2.1. index Generate Operator index container images from pre-existing Operator bundles. Command syntax USD opm index <subcommand> [<flags>] Table 6.2. index subcommands Subcommand Description add Add Operator bundles to an index. export Export an Operator from an index in the appregistry format. prune Prune an index of all but specified packages. prune-stranded Prune an index of stranded bundles, which are bundles that are not associated with a particular image. 
rm Delete an entire Operator from an index. 6.2.1.1. add Add Operator bundles to an index. Command syntax USD opm index add [<flags>] Table 6.3. index add flags Flag Description -i , --binary-image Container image for on-image opm command -u , --build-tool (string) Tool to build container images: podman (the default value) or docker . Overrides part of the --container-tool flag. -b , --bundles (strings) Comma-separated list of bundles to add. -c , --container-tool (string) Tool to interact with container images, such as for saving and building: docker or podman . -f , --from-index (string) index to add to. --generate If enabled, only creates the Dockerfile and saves it to local disk. --mode (string) Graph update mode that defines how channel graphs are updated: replaces (the default value), semver , or semver-skippatch . -d , --out-dockerfile (string) Optional: If generating the Dockerfile, specify a file name. --permissive Allow registry load errors. -p , --pull-tool (string) Tool to pull container images: none (the default value), docker , or podman . Overrides part of the --container-tool flag. -t , --tag (string) Custom tag for container image being built. 6.2.1.2. export Export an Operator from an index in the appregistry format. Command syntax USD opm index export [<flags>] Table 6.4. index export flags Flag Description -i , --index (string) Index to get the packages from. -f , --download-folder (string) Directory where the downloaded Operator bundles are stored. The default directory is downloaded . -c , --container-tool (string) Tool to interact with container images, such as for saving and building: docker or podman . -h , --help Help for the export command. -p , --package (string) Comma-separated list of packages to export. 6.2.1.3. prune Prune an index of all but specified packages. Command syntax USD opm index prune [<flags>] Table 6.5. index prune flags Flag Description -i , --binary-image Container image for on-image opm command -c , --container-tool (string) Tool to interact with container images, such as for saving and building: docker or podman . -f , --from-index (string) Index to prune. --generate If enabled, only creates the Dockerfile and saves it to local disk. -d , --out-dockerfile (string) Optional: If generating the Dockerfile, specify a file name. -p , --packages (strings) Comma-separated list of packages to keep. --permissive Allow registry load errors. -t , --tag (string) Custom tag for container image being built. 6.2.1.4. prune-stranded Prune an index of stranded bundles, which are bundles that are not associated with a particular image. Command syntax USD opm index prune-stranded [<flags>] Table 6.6. index prune-stranded flags Flag Description -i , --binary-image Container image for on-image opm command -c , --container-tool (string) Tool to interact with container images, such as for saving and building: docker or podman . -f , --from-index (string) Index to prune. --generate If enabled, only creates the Dockerfile and saves it to local disk. -d , --out-dockerfile (string) Optional: If generating the Dockerfile, specify a file name. --permissive Allow registry load errors. -t , --tag (string) Custom tag for container image being built. 6.2.1.5. rm Delete an entire Operator from an index. Command syntax USD opm index rm [<flags>] Table 6.7. index rm flags Flag Description -i , --binary-image Container image for on-image opm command -u , --build-tool (string) Tool to build container images: podman (the default value) or docker . 
Overrides part of the --container-tool flag. -c , --container-tool (string) Tool to interact with container images, such as for saving and building: docker or podman . -f , --from-index (string) index to delete from. --generate If enabled, only creates the Dockerfile and saves it to local disk. -o , --operators (strings) Comma-separated list of Operators to delete. -d , --out-dockerfile (string) Optional: If generating the Dockerfile, specify a file name. --permissive Allow registry load errors. -p , --pull-tool (string) Tool to pull container images: none (the default value), docker , or podman . Overrides part of the --container-tool flag. -t , --tag (string) Custom tag for container image being built. 6.2.2. init Generate an olm.package declarative config blob. Command syntax USD opm init <package_name> [<flags>] Table 6.8. init flags Flag Description -c , --default-channel (string) The channel that subscriptions will default to if unspecified. -d , --description (string) Path to the Operator's README.md or other documentation. -i , --icon (string) Path to package's icon. -o , --output (string) Output format: json (the default value) or yaml . 6.2.3. render Generate a declarative config blob from the provided index images, bundle images, and SQLite database files. Command syntax USD opm render <index_image | bundle_image | sqlite_file> [<flags>] Table 6.9. render flags Flag Description -o , --output (string) Output format: json (the default value) or yaml . 6.2.4. validate Validate the declarative config JSON file(s) in a given directory. Command syntax USD opm validate <directory> [<flags>] 6.2.5. serve Serve declarative configs via a GRPC server. Note The declarative config directory is loaded by the serve command at startup. Changes made to the declarative config after this command starts are not reflected in the served content. Command syntax USD opm serve <source_path> [<flags>] Table 6.10. serve flags Flag Description --debug Enable debug logging. -p , --port (string) Port number to serve on. Default: 50051 . -t , --termination-log (string) Path to a container termination log file. Default: /dev/termination-log . | [
"tar xvf <file>",
"echo USDPATH",
"sudo mv ./opm /usr/local/bin/",
"C:\\> path",
"C:\\> move opm.exe <directory>",
"opm version",
"Version: version.Version{OpmVersion:\"v1.18.0\", GitCommit:\"32eb2591437e394bdc58a58371c5cd1e6fe5e63f\", BuildDate:\"2021-09-21T10:41:00Z\", GoOs:\"linux\", GoArch:\"amd64\"}",
"opm <command> [<subcommand>] [<argument>] [<flags>]",
"opm index <subcommand> [<flags>]",
"opm index add [<flags>]",
"opm index export [<flags>]",
"opm index prune [<flags>]",
"opm index prune-stranded [<flags>]",
"opm index rm [<flags>]",
"opm init <package_name> [<flags>]",
"opm render <index_image | bundle_image | sqlite_file> [<flags>]",
"opm validate <directory> [<flags>]",
"opm serve <source_path> [<flags>]"
]
| https://docs.redhat.com/en/documentation/openshift_container_platform/4.9/html/cli_tools/opm-cli |
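For context, a typical end-to-end use of the flags documented above looks like the following hedged sketch; the registry host, bundle, and tag names are placeholders.
# Build an index image that contains one example bundle, then push it (all names are placeholders).
opm index add \
  --bundles registry.example.com/operators/my-operator-bundle:v0.1.0 \
  --tag registry.example.com/operators/my-operator-index:v0.1.0 \
  --build-tool podman

podman push registry.example.com/operators/my-operator-index:v0.1.0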
Chapter 6. View OpenShift Data Foundation Topology | Chapter 6. View OpenShift Data Foundation Topology The topology shows a mapped visualization of the OpenShift Data Foundation storage cluster at various abstraction levels and also lets you interact with these layers. The view also shows how the various elements together compose the storage cluster. Procedure On the OpenShift Web Console, navigate to Storage Data Foundation Topology . The view shows the storage cluster and the zones inside it. You can see the nodes depicted by circular entities within the zones, which are indicated by dotted lines. The label of each item or resource contains basic information such as status, health, or an indication of alerts. Choose a node to view node details on the right-hand panel. You can also access resources or deployments within a node by clicking on the search/preview decorator icon. To view deployment details Click the preview decorator on a node. A modal window appears above the node that displays all of the deployments associated with that node along with their statuses. Click the Back to main view button in the modal's upper left corner to close it and return to the main view. Select a specific deployment to see more information about it. All relevant data is shown in the side panel. Click the Resources tab to view the pod information. This tab provides a deeper understanding of problems and offers granularity that aids in better troubleshooting. Click the pod links to view the pod information page on OpenShift Container Platform. The link opens in a new window. | null | https://docs.redhat.com/en/documentation/red_hat_openshift_data_foundation/4.16/html/deploying_openshift_data_foundation_on_vmware_vsphere/viewing-odf-topology_rhodf
18.2.4. Other Ports | 18.2.4. Other Ports The Security Level Configuration Tool includes an Other ports section for specifying custom IP ports as being trusted by iptables . For example, to allow IRC and Internet printing protocol (IPP) to pass through the firewall, add the following to the Other ports section: 194:tcp,631:tcp | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/4/html/system_administration_guide/s2-basic-firewall-securitylevel-other |
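For reference, the GUI entry above corresponds roughly to iptables rules like the following sketch; the exact chain that the Security Level Configuration Tool manages may differ, so these generic INPUT rules are only illustrative.
# Roughly equivalent manual rules for IRC (194/tcp) and IPP (631/tcp); illustrative only.
iptables -A INPUT -p tcp --dport 194 -j ACCEPT
iptables -A INPUT -p tcp --dport 631 -j ACCEPT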
Preface | Preface Making open source more inclusive Red Hat is committed to replacing problematic language in our code, documentation, and web properties. We are beginning with these four terms: master, slave, blacklist, and whitelist. Because of the enormity of this endeavor, these changes will be implemented gradually over several upcoming releases. For more details, see our CTO Chris Wright's message . | null | https://docs.redhat.com/en/documentation/red_hat_build_of_apache_camel/4.0/html/red_hat_build_of_apache_camel_for_quarkus_reference/pr01 |
Chapter 22. Directory-entry (dentry) Tapset | Chapter 22. Directory-entry (dentry) Tapset This family of functions is used to map kernel VFS directory entry pointers to file or full path names. | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/systemtap_tapset_reference/dentry-dot-stp |
7.3. Pools | 7.3. Pools Virtual machine pools allow for rapid provisioning of numerous identical virtual machines to users as desktops. Users who have been granted permission to access and use virtual machines from a pool receive an available virtual machine based on their position in a queue of requests. Virtual machines in a pool do not allow data persistence; each time a virtual machine is assigned from a pool, it is allocated in its base state. This is ideally suited to be used in situations where user data is stored centrally. Virtual machine pools are created from a template. Each virtual machine in a pool uses the same backing read-only image, and uses a temporary copy-on-write image to hold changed and newly generated data. Virtual machines in a pool are different from other virtual machines in that the copy-on-write layer that holds user-generated and -changed data is lost at shutdown. The implication of this is that a virtual machine pool requires no more storage than the template that backs it, plus some space for data generated or changed during use. Virtual machine pools are an efficient way to provide computing power to users for some tasks without the storage cost of providing each user with a dedicated virtual desktop. Example 7.1. Example Pool Usage A technical support company employs 10 help desk staff. However, only five are working at any given time. Instead of creating ten virtual machines, one for each help desk employee, a pool of five virtual machines can be created. Help desk employees allocate themselves a virtual machine at the beginning of their shift and return it to the pool at the end. | null | https://docs.redhat.com/en/documentation/red_hat_virtualization/4.3/html/technical_reference/pools1 |
probe::signal.check_ignored.return | probe::signal.check_ignored.return Name probe::signal.check_ignored.return - Check to see signal is ignored completed Synopsis Values retstr Return value as a string name Name of the probe point | [
"signal.check_ignored.return"
]
| https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/systemtap_tapset_reference/api-signal-check-ignored-return |
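A minimal usage sketch for this probe point, printing the two values documented above (run as root on a system with SystemTap and matching kernel debuginfo installed):
stap -e 'probe signal.check_ignored.return { printf("%s -> %s\n", name, retstr) }'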
Chapter 3. Upgrading Red Hat Enterprise Linux on Satellite or Capsule | Chapter 3. Upgrading Red Hat Enterprise Linux on Satellite or Capsule Satellite and Capsule are supported on both Red Hat Enterprise Linux 8 and Red Hat Enterprise Linux 9. You can use the following methods to upgrade your Satellite or Capsule operating system from Red Hat Enterprise Linux 8 to Red Hat Enterprise Linux 9: Leapp in-place upgrade With Leapp, you can upgrade your Satellite or Capsule in place, which is faster but imposes downtime on the services. Migration by using cloning The Red Hat Enterprise Linux 8 system remains operational during the migration using cloning, which reduces the downtime. You cannot use cloning for Capsule Server migrations. Migration by using backup and restore The Red Hat Enterprise Linux 8 system remains operational during the migration using backup and restore, which reduces the downtime. You can use backup and restore for migrating both Satellite and Capsule operating systems from Red Hat Enterprise Linux 8 to Red Hat Enterprise Linux 9. 3.1. Upgrading Satellite or Capsule to RHEL 9 in-place by using Leapp You can use the Leapp tool to upgrade as well as to help detect and resolve issues that could prevent you from upgrading successfully. Prerequisites Review known issues before you begin an upgrade. For more information, see Known issues in Red Hat Satellite 6.16 . If you use an HTTP proxy in your environment, configure the Subscription Manager to use the HTTP proxy for connection. For more information, see Troubleshooting in Upgrading from RHEL 8 to RHEL 9 . Satellite 6.16 or Capsule 6.16 running on Red Hat Enterprise Linux 8. If you are upgrading Capsule Servers, enable and synchronize the following repositories to Satellite Server, and add them to the lifecycle environment and content view that is attached to your Capsule Server: Red Hat Enterprise Linux 9 for x86_64 - BaseOS (RPMs) : rhel-9-for-x86_64-baseos-rpms for the major version: x86_64 9 . rhel-9-for-x86_64-baseos-rpms for the latest supported minor version: x86_64 9. Y , where Y represents the minor version. For information about the latest supported minor version for in-place upgrades, see Supported upgrade paths in Upgrading from RHEL 8 to RHEL 9 . Red Hat Enterprise Linux 9 for x86_64 - AppStream (RPMs) : rhel-9-for-x86_64-appstream-rpms for the major version: x86_64 9 . rhel-9-for-x86_64-appstream-rpms for the latest supported minor version: x86_64 9. Y , where Y represents the minor version. For information about the latest supported minor versions for in-place upgrades, see Supported upgrade paths in Upgrading from RHEL 8 to RHEL 9 . Red Hat Satellite Capsule 6.16 for RHEL 9 x86_64 RPMs : satellite-capsule-6.16-for-rhel-9-x86_64-rpms Red Hat Satellite Maintenance 6.16 for RHEL 9 x86_64 RPMs : satellite-maintenance-6.16-for-rhel-9-x86_64-rpms Procedure Install required packages: Let Leapp analyze your system: The first run will most likely report issues and inhibit the upgrade. Examine the report in the /var/log/leapp/leapp-report.txt file, answer all questions by using leapp answer , and manually resolve other reported problems. Run leapp preupgrade again and make sure that it does not report any more issues. Let Leapp create the upgrade environment: Reboot the system to start the upgrade. After the system reboots, a live system conducts the upgrade, reboots to fix SELinux labels and then reboots into the final Red Hat Enterprise Linux 9 system. Wait for Leapp to finish the upgrade. 
You can monitor the process with journalctl : Unlock packages: Verify the post-upgrade state. For more information, see Verifying the post-upgrade state in Upgrading from RHEL 8 to RHEL 9 . Perform post-upgrade tasks on the RHEL 9 system. For more information, see Performing post-upgrade tasks on the RHEL 9 system in Upgrading from RHEL 8 to RHEL 9 . Lock packages: Change SELinux to enforcing mode. For more information, see Changing SELinux mode to enforcing in Upgrading from RHEL 8 to RHEL 9 . Unset the subscription-manager release: Additional resources For more information on customizing the Leapp upgrade for your environment, see Customizing your Red Hat Enterprise Linux in-place upgrade . 3.2. Migrating Satellite to RHEL 9 by using cloning You can clone your existing Satellite Server from Red Hat Enterprise Linux 8 to a freshly installed Red Hat Enterprise Linux 9 system. Create a backup of the existing Satellite Server, which you then clone on the new Red Hat Enterprise Linux 9 system. Note You cannot use cloning for Capsule Server backups. Procedure Perform a full backup of your Satellite Server. This is the source Red Hat Enterprise Linux 8 server that you are migrating. For more information, see Performing a full backup of Satellite Server in Administering Red Hat Satellite . Deploy a system with Red Hat Enterprise Linux 9 and the same configuration as the source server. This is the target server. Clone the server. Clone configures hostname for the target server. For more information, see Cloning Satellite Server in Administering Red Hat Satellite 3.3. Migrating Satellite or Capsule to RHEL 9 using backup and restore You can migrate your existing Satellite Server and Capsule Server from Red Hat Enterprise Linux 8 to a freshly installed Red Hat Enterprise Linux 9 system. The migration involves creating a backup of the existing Satellite Server and Capsule Server, which you then restore on the new Red Hat Enterprise Linux 9 system. Procedure Perform a full backup of your Satellite Server or Capsule. This is the source Red Hat Enterprise Linux 8 server that you are migrating. For more information, see Performing a full backup of Satellite Server or Capsule Server in Administering Red Hat Satellite . Deploy a system with Red Hat Enterprise Linux 9 and the same hostname and configuration as the source server. This is the target server. Restore the backup. Restore does not significantly alter the target system and requires additional configuration. For more information, see Restoring Satellite Server or Capsule Server from a backup in Administering Red Hat Satellite . Restore the Capsule Server backup. For more information, see Restoring Satellite Server or Capsule Server from a backup in Administering Red Hat Satellite . | [
"satellite-maintain packages install leapp leapp-upgrade-el8toel9",
"leapp preupgrade",
"leapp upgrade",
"journalctl -u leapp_resume -f",
"satellite-maintain packages unlock",
"satellite-maintain packages lock",
"subscription-manager release --unset"
]
| https://docs.redhat.com/en/documentation/red_hat_satellite/6.16/html/upgrading_connected_red_hat_satellite_to_6.16/upgrading_EL_on_satellite_or_proxy_upgrading-connected |
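Between preupgrade runs, it can help to confirm from the shell that no blocking findings remain before you continue; the following sketch only assumes that the report path given in the procedure above is unchanged, and the inhibitor keyword match is a convenience heuristic rather than an official interface:
leapp preupgrade
grep -i inhibitor /var/log/leapp/leapp-report.txt || echo "no inhibitors reported"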
Chapter 10. Troubleshooting Installation on an Intel or AMD System | Chapter 10. Troubleshooting Installation on an Intel or AMD System This section discusses some common installation problems and their solutions. For debugging purposes, anaconda logs installation actions into files in the /tmp directory. These files include: /tmp/anaconda.log general anaconda messages /tmp/program.log all external programs run by anaconda /tmp/storage.log extensive storage module information /tmp/yum.log yum package installation messages /tmp/syslog hardware-related system messages If the installation fails, the messages from these files are consolidated into /tmp/anaconda-tb- identifier , where identifier is a random string. All of the files above reside in the installer's ramdisk and are thus volatile. To make a permanent copy, copy those files to another system on the network using scp on the installation image (not the other way round). 10.1. You Are Unable to Boot Red Hat Enterprise Linux 10.1.1. Are You Unable to Boot With Your RAID Card? If you have performed an installation and cannot boot your system properly, you may need to reinstall and create your partitions differently. Some BIOS types do not support booting from RAID cards. At the end of an installation, a text-based screen showing the boot loader prompt (for example, GRUB: ) and a flashing cursor may be all that appears. If this is the case, you must repartition your system. Whether you choose automatic or manual partitioning, you must install your /boot partition outside of the RAID array, such as on a separate hard drive. An internal hard drive is necessary to use for partition creation with problematic RAID cards. You must also install your preferred boot loader (GRUB or LILO) on the MBR of a drive that is outside of the RAID array. This should be the same drive that hosts the /boot/ partition. Once these changes have been made, you should be able to finish your installation and boot the system properly. 10.1.2. Is Your System Displaying Signal 11 Errors? A signal 11 error, commonly known as a segmentation fault , means that the program accessed a memory location that was not assigned to it. A signal 11 error may be due to a bug in one of the software programs that is installed, or faulty hardware. If you receive a fatal signal 11 error during your installation, it is probably due to a hardware error in memory on your system's bus. Like other operating systems, Red Hat Enterprise Linux places its own demands on your system's hardware. Some of this hardware may not be able to meet those demands, even if they work properly under another OS. Ensure that you have the latest installation updates and images. Review the online errata to see if newer versions are available. If the latest images still fail, it may be due to a problem with your hardware. Commonly, these errors are in your memory or CPU-cache. A possible solution for this error is turning off the CPU-cache in the BIOS, if your system supports this. You could also try to swap your memory around in the motherboard slots to check if the problem is either slot or memory related. Another option is to perform a media check on your installation DVD. Anaconda , the installation program, has the ability to test the integrity of the installation media. It works with the DVD, hard drive ISO, and NFS ISO installation methods. 
Red Hat recommends that you test all installation media before starting the installation process, and before reporting any installation-related bugs (many of the bugs reported are actually due to improperly-burned DVDs). To use this test, type the following command at the boot: or yaboot: prompt: For more information concerning signal 11 errors, refer to: 10.1.3. Diagnosing Early Boot Problems The boot console may be useful in cases where your system fails to boot, but does successfully display the GRUB boot menu. Messages in the boot console can inform you of the current kernel version, command line parameters which have been passed to the kernel from the boot menu, enabled hardware support for the current kernel, physical memory map and other information which may help you find the cause of your problems. To enable the boot console, select an entry in the GRUB boot menu, and press e to edit boot options. On the line starting with the keyword kernel (or linux in some cases), append the following: On a system with BIOS firmware, append earlyprintk=vga,keep . Boot console messages should then be displayed on the system display. On a system with UEFI firmware, append earlyprintk=efi,keep . Boot console messages should then be displayed in the EFI frame buffer. You can also append the quiet option (if not present already) to suppress all other messages and only display messages from the boot console. Note The earlyprintk options for BIOS and UEFI should also be enabled in the kernel's /boot/config- version file - the CONFIG_EARLY_PRINTK= and CONFIG_EARLY_PRINTK_EFI= options must be set to the y value. They are enabled by default, but if you disabled them, you may need to mount the /boot partition in rescue mode and edit the configuration file to re-enable them. | [
"linux mediacheck",
"http://www.bitwizard.nl/sig11/"
]
| https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/installation_guide/ch-trouble-x86 |
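For example, on a BIOS-based system the edited kernel line in the GRUB menu might look similar to the following; the kernel version, root device, and other options are illustrative placeholders, and only the appended earlyprintk=vga,keep (and optional quiet) portion comes from the procedure above:
kernel /vmlinuz-2.6.32-754.el6.x86_64 ro root=/dev/mapper/vg_root-lv_root quiet earlyprintk=vga,keep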
Chapter 3. Assessing upgrade suitability | Chapter 3. Assessing upgrade suitability The Preupgrade Assistant assesses your system for any potential problems that might occur during an in-place upgrade before any changes to your system are made. The Preupgrade Assistant does the following: Leaves your system unchanged except for storing information or logs. It does not modify the assessed system. Assesses the system for possible in-place upgrade limitations, such as package removals, incompatible obsoletes, name changes, or deficiencies in some configuration file compatibilities. Provides a report with the assessment result. Provides post-upgrade scripts to address more complex problems after the in-place upgrade. You should run the Preupgrade Assistant multiple times. Always run the Preupgrade Assistant after you resolve problems identified by the pre-upgrade report to ensure that no critical problems remain before performing the upgrade. You can review the system assessment results using one of the following methods: Locally on the assessed system using the command line. Remotely over the network using the web user interface (UI). You can use the web UI to view multiple reports at once. Important The Preupgrade Assistant is a modular system. You can create your own custom modules to assess the possibility of performing an in-place upgrade. For more information, see How to create custom Preupgrade Assistant modules for upgrading from RHEL 6 to RHEL 7 . 3.1. Assessing upgrade suitability from the command line Viewing a Preupgrade Assistant report locally ensures that you do not expose the data about your system to the network. The pre-upgrade assessment results can be viewed locally using the following methods: As result codes in the standard output on the command line. As a detailed HTML file in a web browser. When the preupg command is run without further options, it produces the result.html and preupg_results-*.tar.gz files in the /root/preupgrade/ directory. Prerequisites You have completed the preparation steps described in Preparing a RHEL 6 system for the upgrade . Procedure Run the Preupgrade Assistant to perform an assessment of the system. Review each assessment result entry: Inspect result codes on the standard output For more information about assessment codes, see the Assessment result codes table . View the assessment report in greater detail by opening the HTML file with results in a web browser: View the README file in the /root/preupgrade/ directory for more information about the output directory structure, exit codes, and risk explanations associated with the Preupgrade Assistant utility. Resolve problems found by the Preupgrade Assistant during the assessment by following the Remediation text in the report. Important The assessment report might require you to perform certain tasks after you have completed the in-place upgrade to RHEL 7. Take note of these post-upgrade tasks and perform them after the upgrade. Run the Preupgrade Assistant again. If there are no new problems to be resolved, you can proceed with upgrading your system. 3.2. Assessing upgrade suitability from a web UI The Preupgrade Assistant browser-based interface can collect assessment reports from multiple systems and provides convenient filtering of the results. Because the upgrade procedure does not allow upgrading the GNOME desktop, this procedure gives you a way to display the Preupgrade Assistant results on a remote GUI desktop. 
Important To use the Preupgrade Assistant web UI remotely, you must install and configure the Apache HTTP Server , add files to the /etc/httpd/conf.d/ directory and run the httpd service on the system to serve the content. If you are concerned about exposing the data about your system to the network, or if you want to avoid adding content to the system you are upgrading, you can review the pre-upgrade assessment results using the following alternative methods: Locally using the Preupgrade Assistant web UI on localhost (127.0.0.1) without configuring the Apache HTTP Server. Remotely following the procedure described in Assessing upgrade suitability from the command line , copying the /root/preupgrade/result.html file to a remote system, and opening the HTML file in a web browser in the remote system. Prerequisites You have completed the preparation steps described in Preparing a RHEL 6 system for the upgrade . Procedure Install the Apache HTTP Server and the Preupgrade Assistant web UI: To make the Preupgrade Assistant web UI available to all network interfaces on the local system through TCP port 8099 by default, change the default private httpd pre-upgrade configuration to the public configuration: Optional: To access the Preupgrade Assistant using a host name instead of an IP address, for example, preupg-ui.example.com : Ensure you have a DNS CNAME record pointing the preupg-ui.example.com name to the system you are upgrading. Change the NameVirtualHost line in the 99-preup-httpd.conf file to NameVirtualHost preupg-ui.example.com:8099 . If you have a firewall running and SELinux in enforcing mode, allow access to the port needed by the Preupgrade Assistant web UI service: Restart the httpd service to load the new configuration. From a web browser on another system, access the Preupgrade Assistant web UI service by using either an IP address (for example, http://192.168.122.159:8099 ) or a hostname (for example, http://preupg-ui.example.com:8099 ). When accessing the Preupgrade Assistant web UI for the first time, decide whether to access the UI with or without authentication. To access the UI with authentication, log in as an existing user or create a new one. When you select Submit to create a new user, the authentication system is automatically enabled. To access the UI without authentication, select Disable Authentication . Return to the system you plan to upgrade and run the Preupgrade Assistant in the command line with an automatic submission to the Preupgrade Assistant web UI server: For example: Return to your web browser on the remote server and reload the Preupgrade Assistant Web UI. In the web UI, find and expand the assessment report that you generated by running the Preupgrade Assistant. Go through each item in the report and resolve the reported problems. For information about assessment result codes, see the Assessment result codes table . Important The assessment report might require you to perform certain tasks after you have completed the in-place upgrade to RHEL 7. Take note of these post-upgrade tasks and perform them after the upgrade. Run the Preupgrade Assistant again and upload the report to the web UI. If there are no new problems to be resolved, you can proceed with the upgrade. 3.3. Pre-upgrade assessment result codes When you run the Preupgrade Assistant, an assessment result is generated. Each result in the assessment is assigned a code. Refer to the table below for an explanation of each code and a potential action to take. Table 3.1. 
Pre-upgrade assessment result codes Result code Explanation PASS No problems found. FAIL Extreme upgrade risk. In-place upgrade is impossible. NEEDS_ACTION High upgrade risk. You must resolve the problem before running the Red Hat Upgrade Tool. NEEDS_INSPECTION Medium or lower upgrade risks. The upgrade might not fail, but it might result in a system that is not fully operational. You must check certain parts of the system and, if needed, fix the problems. FIXED Changes required for the upgrade were applied automatically. You do not need to perform any action. INFORMATIONAL Useful, but not critical, information. NOT_APPLICABLE The assessed package is not installed on your system. ERROR An error occurred in the tooling. Report this type of problem to Red Hat Support. notchecked The respective module has not been checked. See Known issues for more details. | [
"preupg",
"web_browser file:///root/preupgrade/result.html",
"yum install httpd preupgrade-assistant-ui",
"cp /etc/httpd/conf.d/99-preup-httpd.conf.public /etc/httpd/conf.d/99-preup-httpd.conf",
"setsebool httpd_run_preupgrade on iptables -I INPUT -m state --state NEW -p tcp --dport 8099 -j ACCEPT",
"service httpd restart",
"preupg -u http:// hostname :port/submit/",
"preupg -u http://preupg-ui.example.com:8099/submit/"
]
| https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/upgrading_from_rhel_6_to_rhel_7/assessing-upgrade-suitability_upgrading-from-rhel-6-to-rhel-7 |
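After a run completes, you can get a quick overview of the most severe findings from the shell before opening the full report; the grep below assumes that the result codes from the table in this chapter appear verbatim in the generated HTML, so treat it as a convenience sketch rather than an official interface:
preupg
grep -E -o "FAIL|NEEDS_ACTION|NEEDS_INSPECTION" /root/preupgrade/result.html | sort | uniq -c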
8.2. sVirt Labeling | 8.2. sVirt Labeling Like other services under the protection of SELinux, sVirt uses process-based mechanisms and restrictions to provide an extra layer of security over guest instances. Under typical use, you should not even notice that sVirt is working in the background. This section describes the labeling features of sVirt. As shown in the following output, when using sVirt, each Virtual Machine (VM) process is labeled and runs with a dynamically generated level. Each process is isolated from other VMs with different levels: The actual disk images are automatically labeled to match the processes, as shown in the following output: The following table outlines the different labels that can be assigned when using sVirt: Table 8.1. sVirt Labels Type SELinux Context Description Virtual Machine Processes system_u:system_r:svirt_t:MCS1 MCS1 is a randomly selected MCS field. Currently approximately 500,000 labels are supported. Virtual Machine Image system_u:object_r:svirt_image_t:MCS1 Only processes labeled svirt_t with the same MCS fields are able to read/write these image files and devices. Virtual Machine Shared Read/Write Content system_u:object_r:svirt_image_t:s0 All processes labeled svirt_t are allowed to write to the svirt_image_t:s0 files and devices. Virtual Machine Image system_u:object_r:virt_content_t:s0 System default label used when an image exits. No svirt_t virtual processes are allowed to read files/devices with this label. It is also possible to perform static labeling when using sVirt. Static labels allow the administrator to select a specific label, including the MCS/MLS field, for a virtual machine. Administrators who run statically-labeled virtual machines are responsible for setting the correct label on the image files. The virtual machine will always be started with that label, and the sVirt system will never modify the label of a statically-labeled virtual machine's content. This allows the sVirt component to run in an MLS environment. You can also run multiple virtual machines with different sensitivity levels on a system, depending on your requirements. | [
"~]# ps -eZ | grep qemu system_u:system_r:svirt_t:s0:c87,c520 27950 ? 00:00:17 qemu-kvm system_u:system_r:svirt_t:s0:c639,c757 27989 ? 00:00:06 qemu-system-x86",
"~]# ls -lZ /var/lib/libvirtimages/* system_u:object_r:svirt_image_t:s0:c87,c520 image1"
]
| https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/selinux_users_and_administrators_guide/sec-security-enhanced_linux-svirt_labeling |
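For static labeling, the label is usually set in the guest's libvirt domain XML and the image file is relabeled to match by hand; the following is a minimal sketch in which the MCS pair c100,c200 and the image path are arbitrary examples:
<seclabel type='static' model='selinux' relabel='no'>
  <label>system_u:system_r:svirt_t:s0:c100,c200</label>
</seclabel>
~]# chcon -t svirt_image_t -l s0:c100,c200 /var/lib/libvirt/images/image1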
3.5. Managing Groups via Command-Line Tools | 3.5. Managing Groups via Command-Line Tools Groups are a useful tool for permitting co-operation between different users. There is a set of commands for operating with groups such as groupadd , groupmod , groupdel , or gpasswd . The files affected include /etc/group , which stores group account information, and /etc/gshadow , which stores secure group account information. 3.5.1. Creating Groups To add a new group to the system with default settings, the groupadd command is run at the shell prompt as root . Example 3.18. Creating a Group with Default Settings The groupadd command creates a new group called friends . You can read more information about the group from the newly-created line in the /etc/group file: Automatically, the group friends is attached with a unique GID (group ID) of 30005 and is not attached with any users. Optionally, you can set a password for a group by running gpasswd groupname . Alternatively, you can add command options with specific settings. If you, for example, want to specify the numerical value of the group's ID (GID) when creating the group, run the groupadd command with the -g option. Remember that this value must be unique (unless the -o option is used) and the value must be non-negative. Example 3.19. Creating a Group with Specified GID The command below creates a group named schoolmates and sets a GID of 60002 for it: When used with -g and the specified GID already exists, groupadd refuses to create another group with that GID. As a workaround, use the -f option, with which groupadd creates a group, but with a different GID . You may also create a system group by attaching the -r option to the groupadd command. System groups are used for system purposes, which in practice means that the GID is allocated from the reserved range of 1 to 499. For more information on groupadd , see the groupadd(8) man pages. | [
"groupadd group_name",
"~]# groupadd friends",
"classmates:x:30005:",
"groupadd option(s) groupname",
"groupadd -g GID",
"~]# groupadd -g 60002 schoolmates",
"groupadd -f GID",
"groupadd -r group_name"
]
| https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/deployment_guide/s2-groups-cl-tools |
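A quick way to confirm the effect of the commands above is to query the group database after each step; for example, using the name and GID from Example 3.19 (the getent output shown is illustrative):
~]# groupadd -g 60002 schoolmates
~]# getent group schoolmates
schoolmates:x:60002:
The getent output mirrors the newly created line in /etc/group .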
Chapter 3. Block pools | Chapter 3. Block pools The OpenShift Data Foundation operator installs a default set of storage pools depending on the platform in use. These default storage pools are owned and controlled by the operator and cannot be deleted or modified. With OpenShift Container Platform, you can create multiple custom storage pools which map to storage classes that provide the following features: Enable applications with their own high availability to use persistent volumes with two replicas, potentially improving application performance. Save space for persistent volume claims using storage classes with compression enabled. Note Multiple block pools are not supported for external mode OpenShift Data Foundation clusters. 3.1. Creating a block pool Prerequisites You must be logged into the OpenShift Container Platform web console as an administrator. Procedure Click Storage Data Foundation . In the Storage systems tab, select the storage system and then click the BlockPools tab. Click Create Block Pool . Enter Pool name . Note Using the 2-way replication data protection policy is not supported for the default pool. However, you can use 2-way replication if you are creating an additional pool. Select Data protection policy as either 2-way Replication or 3-way Replication . Optional: Select the Enable compression checkbox if you need to compress the data. Enabling compression can impact application performance and might prove ineffective when data to be written is already compressed or encrypted. Data written before enabling compression is not compressed. Click Create . 3.2. Updating an existing pool Prerequisites You must be logged into the OpenShift Container Platform web console as an administrator. Procedure Click Storage Data Foundation . In the Storage systems tab, select the storage system and then click BlockPools . Click the Action Menu (...) at the end of the pool you want to update. Click Edit Block Pool . Modify the form details as follows: Note Using the 2-way replication data protection policy is not supported for the default pool. However, you can use 2-way replication if you are creating an additional pool. Change the Data protection policy to either 2-way Replication or 3-way Replication . Enable or disable the compression option. Enabling compression can impact application performance and might prove ineffective when data to be written is already compressed or encrypted. Data written before enabling compression is not compressed. Click Save . 3.3. Deleting a pool Use this procedure to delete a pool in OpenShift Data Foundation. Prerequisites You must be logged into the OpenShift Container Platform web console as an administrator. Procedure Click Storage Data Foundation . In the Storage systems tab, select the storage system and then click the BlockPools tab. Click the Action Menu (...) at the end of the pool you want to delete. Click Delete Block Pool . Click Delete to confirm the removal of the pool. Note A pool cannot be deleted when it is bound to a PVC. You must detach all the resources before performing this activity. | null | https://docs.redhat.com/en/documentation/red_hat_openshift_data_foundation/4.14/html/managing_and_allocating_storage_resources/block-pools_rhodf
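Behind the web console form, each block pool corresponds to a CephBlockPool custom resource managed by the Rook operator that OpenShift Data Foundation uses; the following is a hedged sketch of what an equivalent resource can look like, where the pool name and all field values are illustrative assumptions rather than values taken from this procedure:
apiVersion: ceph.rook.io/v1
kind: CephBlockPool
metadata:
  name: my-custom-pool
  namespace: openshift-storage
spec:
  failureDomain: host
  replicated:
    size: 3
  parameters:
    compression_mode: aggressive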
Chapter 12. Configuring AWS STS for Red Hat Quay | Chapter 12. Configuring AWS STS for Red Hat Quay Support for Amazon Web Services (AWS) Security Token Service (STS) is available for standalone Red Hat Quay deployments and Red Hat Quay on OpenShift Container Platform. AWS STS is a web service for requesting temporary, limited-privilege credentials for AWS Identity and Access Management (IAM) users and for users that you authenticate, or federated users . This feature is useful for clusters using Amazon S3 as an object storage, allowing Red Hat Quay to use STS protocols to authenticate with Amazon S3, which can enhance the overall security of the cluster and help to ensure that access to sensitive data is properly authenticated and authorized. Configuring AWS STS is a multi-step process that requires creating an AWS IAM user, creating an S3 role, and configuring your Red Hat Quay config.yaml file to include the proper resources. Use the following procedures to configure AWS STS for Red Hat Quay. 12.1. Creating an IAM user Use the following procedure to create an IAM user. Procedure Log in to the Amazon Web Services (AWS) console and navigate to the Identity and Access Management (IAM) console. In the navigation pane, under Access management click Users . Click Create User and enter the following information: Enter a valid username, for example, quay-user . For Permissions options , click Add user to group . On the review and create page, click Create user . You are redirected to the Users page. Click the username, for example, quay-user . Copy the ARN of the user, for example, arn:aws:iam::123492922789:user/quay-user . On the same page, click the Security credentials tab. Navigate to Access keys . Click Create access key . On the Access key best practices & alternatives page, click Command Line Interface (CLI) , then, check the confirmation box. Then click . Optional. On the Set description tag - optional page, enter a description. Click Create access key . Copy and store the access key and the secret access key. Important This is the only time that the secret access key can be viewed or downloaded. You cannot recover it later. However, you can create a new access key any time. Click Done . 12.2. Creating an S3 role Use the following procedure to create an S3 role for AWS STS. Prerequisites You have created an IAM user and stored the access key and the secret access key. Procedure If you are not already, navigate to the IAM dashboard by clicking Dashboard . In the navigation pane, click Roles under Access management . Click Create role . Click Custom Trust Policy , which shows an editable JSON policy. By default, it shows the following information: { "Version": "2012-10-17", "Statement": [ { "Sid": "Statement1", "Effect": "Allow", "Principal": {}, "Action": "sts:AssumeRole" } ] } Under the Principal configuration field, add your AWS ARN information. For example: { "Version": "2012-10-17", "Statement": [ { "Sid": "Statement1", "Effect": "Allow", "Principal": { "AWS": "arn:aws:iam::123492922789:user/quay-user" }, "Action": "sts:AssumeRole" } ] } Click . On the Add permissions page, type AmazonS3FullAccess in the search box. Check the box to add that policy to the S3 role, then click . On the Name, review, and create page, enter the following information: Enter a role name, for example, example-role . Optional. Add a description. Click the Create role button. You are navigated to the Roles page. Under Role name , the newly created S3 should be available. 12.3. 
Configuring Red Hat Quay to use AWS STS Use the following procedure to edit your Red Hat Quay config.yaml file to use AWS STS. Procedure Update your config.yaml file for Red Hat Quay to include the following information: # ... DISTRIBUTED_STORAGE_CONFIG: default: - STSS3Storage - sts_role_arn: <role_arn> 1 s3_bucket: <s3_bucket_name> 2 storage_path: <storage_path> 3 s3_region: <region> 4 sts_user_access_key: <s3_user_access_key> 5 sts_user_secret_key: <s3_user_secret_key> 6 # ... 1 The unique Amazon Resource Name (ARN) required when configuring AWS STS. 2 The name of your s3 bucket. 3 The storage path for data. Usually /datastorage . 4 Optional. The Amazon Web Services region. Defaults to us-east-1 . 5 The generated AWS S3 user access key required when configuring AWS STS. 6 The generated AWS S3 user secret key required when configuring AWS STS. Restart your Red Hat Quay deployment. Verification Tag a sample image, for example, busybox , that will be pushed to the repository. For example: USD podman tag docker.io/library/busybox <quay-server.example.com>/<organization_name>/busybox:test Push the sample image by running the following command: USD podman push <quay-server.example.com>/<organization_name>/busybox:test Verify that the push was successful by navigating to the Organization that you pushed the image to in your Red Hat Quay registry Tags . Navigate to the Amazon Web Services (AWS) console and locate your s3 bucket. Click the name of your s3 bucket. On the Objects page, click datastorage/ . On the datastorage/ page, the following resources should be seen: sha256/ uploads/ These resources indicate that the push was successful, and that AWS STS is properly configured. | [
"{ \"Version\": \"2012-10-17\", \"Statement\": [ { \"Sid\": \"Statement1\", \"Effect\": \"Allow\", \"Principal\": {}, \"Action\": \"sts:AssumeRole\" } ] }",
"{ \"Version\": \"2012-10-17\", \"Statement\": [ { \"Sid\": \"Statement1\", \"Effect\": \"Allow\", \"Principal\": { \"AWS\": \"arn:aws:iam::123492922789:user/quay-user\" }, \"Action\": \"sts:AssumeRole\" } ] }",
"DISTRIBUTED_STORAGE_CONFIG: default: - STSS3Storage - sts_role_arn: <role_arn> 1 s3_bucket: <s3_bucket_name> 2 storage_path: <storage_path> 3 s3_region: <region> 4 sts_user_access_key: <s3_user_access_key> 5 sts_user_secret_key: <s3_user_secret_key> 6",
"podman tag docker.io/library/busybox <quay-server.example.com>/<organization_name>/busybox:test",
"podman push <quay-server.example.com>/<organization_name>/busybox:test"
]
| https://docs.redhat.com/en/documentation/red_hat_quay/3.12/html/manage_red_hat_quay/configuring-aws-sts-quay |
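If you prefer to script the console steps above, the same IAM user, access key, role, and policy attachment can be created with the AWS CLI; this is a sketch that assumes the example names used in this chapter and that the custom trust policy shown above has been saved locally as trust-policy.json:
aws iam create-user --user-name quay-user
aws iam create-access-key --user-name quay-user
aws iam create-role --role-name example-role --assume-role-policy-document file://trust-policy.json
aws iam attach-role-policy --role-name example-role --policy-arn arn:aws:iam::aws:policy/AmazonS3FullAccess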
Chapter 1. Configuring and deploying a Red Hat OpenStack Platform hyperconverged infrastructure | Chapter 1. Configuring and deploying a Red Hat OpenStack Platform hyperconverged infrastructure Red Hat OpenStack Platform (RHOSP) hyperconverged infrastructures (HCI) consist of hyperconverged nodes. Services are colocated on these hyperconverged nodes for optimized resource usage. In a RHOSP HCI, the Compute and storage services are colocated on hyperconverged nodes. You can deploy an overcloud with only hyperconverged nodes, or a mixture of hyperconverged nodes with normal Compute and Ceph Storage nodes. Note You must use Red Hat Ceph Storage as the storage provider. Tip Use ceph-ansible 3.2 and later to automatically tune Ceph memory settings. Use BlueStore as the back end for HCI deployments, to make use of the BlueStore memory handling features. To create and deploy HCI on an overcloud, integrate with other features in your overcloud such as Network Function Virtualization, and ensure optimal performance of both Compute and Red Hat Ceph Storage services on hyperconverged nodes, you must complete the following: Prepare the predefined custom overcloud role for hyperconverged nodes, ComputeHCI . Configure resource isolation. Verify the available Red Hat Ceph Storage packages. Deploy the HCI overcloud. For HCI configuration guidance, see Configuration guidance . 1.1. Prerequisites You have deployed the undercloud. For instructions on how to deploy the undercloud, see Director Installation and Usage . Your environment can provision nodes that meet RHOSP Compute and Red Hat Ceph Storage requirements. For more information, see Basic Overcloud Deployment . You have registered all nodes in your environment. For more information, see Registering Nodes . You have tagged all nodes in your environment. For more information, see Manually Tagging the Nodes . You have cleaned the disks on nodes that you plan to use for Compute and Ceph OSD services. For more information, see Cleaning Ceph Storage Node Disks . You have prepared your overcloud nodes for registration with the Red Hat Content Delivery Network or a Red Hat Satellite server. For more information, see Ansible-based Overcloud Registration . 1.2. Preparing the overcloud role for hyperconverged nodes To designate nodes as hyperconverged, you need to define a hyperconverged role. Red Hat OpenStack Platform (RHOSP) provides the predefined role ComputeHCI for hyperconverged nodes. This role colocates the Compute and Ceph object storage daemon (OSD) services, allowing you to deploy them together on the same hyperconverged node. Procedure Log in to the undercloud as the stack user. Source the stackrc file: Generate a new custom roles data file that includes the ComputeHCI role, along with other roles you intend to use for the overcloud. The following example generates the roles data file roles_data_hci.yaml that includes the roles Controller , ComputeHCI , Compute , and CephStorage : Note The networks listed for the ComputeHCI role in the generated custom roles data file include the networks required for both Compute and Storage services, for example: Create a local copy of the network_data.yaml file to add a composable network to your overcloud. The network_data.yaml file interacts with the default network environment files, /usr/share/openstack-tripleo-heat-templates/environments/* , to associate the networks you defined for your ComputeHCI role with the hyperconverged nodes. 
For more information, see Adding a composable network in the Advanced Overcloud Customization guide. To improve the performance of Red Hat Ceph Storage, update the MTU setting for both the Storage and StorageMgmt networks to 9000 , for jumbo frames, in your local copy of network_data.yaml . For more information, see Configuring MTU Settings in Director and Configuring jumbo frames . Create the computeHCI overcloud flavor for hyperconverged nodes: Replace <ram_size_mb> with the RAM of the bare metal node, in MB. Replace <disk_size_gb> with the size of the disk on the bare metal node, in GB. Replace <no_vcpus> with the number of CPUs on the bare metal node. Note These properties are not used for scheduling instances. However, the Compute scheduler does use the disk size to determine the root partition size. Retrieve a list of your nodes to identify their UUIDs: Tag each bare metal node that you want to designate as hyperconverged with a custom HCI resource class: Replace <node> with the ID of the bare metal node. Associate the computeHCI flavor with the custom HCI resource class: To determine the name of a custom resource class that corresponds to a resource class of a Bare Metal service node, convert the resource class to uppercase, replace all punctuation with an underscore, and prefix with CUSTOM_ . Note A flavor can request only one instance of a bare metal resource class. Set the following flavor properties to prevent the Compute scheduler from using the bare metal flavor properties to schedule instances: Add the following parameters to the node-info.yaml file to specify the number of hyperconverged and Controller nodes, and the flavor to use for the hyperconverged and controller designated nodes: Additional resources Composable Services and Custom Roles Examining the roles_data file Assigning Nodes and Flavors to Roles 1.2.1. Defining the root disk for multi-disk clusters Director must identify the root disk during provisioning in the case of nodes with multiple disks. For example, most Ceph Storage nodes use multiple disks. By default, director writes the overcloud image to the root disk during the provisioning process. There are several properties that you can define to help director identify the root disk: model (String): Device identifier. vendor (String): Device vendor. serial (String): Disk serial number. hctl (String): Host:Channel:Target:Lun for SCSI. size (Integer): Size of the device in GB. wwn (String): Unique storage identifier. wwn_with_extension (String): Unique storage identifier with the vendor extension appended. wwn_vendor_extension (String): Unique vendor storage identifier. rotational (Boolean): True for a rotational device (HDD), otherwise false (SSD). name (String): The name of the device, for example: /dev/sdb1. Important Use the name property only for devices with persistent names. Do not use name to set the root disk for any other devices because this value can change when the node boots. You can specify the root device using its serial number. Procedure Check the disk information from the hardware introspection of each node. Run the following command to display the disk information of a node: For example, the data for one node might show three disks: Enter openstack baremetal node set --property root_device= to set the root disk for a node. Include the most appropriate hardware attribute value to define the root disk. 
For example, to set the root device to disk 2, which has the serial number 61866da04f380d001ea4e13c12e36ad6 , enter the following command: Note Ensure that you configure the BIOS of each node to include booting from the root disk that you choose. Configure the boot order to boot from the network first, then to boot from the root disk. Director identifies the specific disk to use as the root disk. When you run the openstack overcloud deploy command, director provisions and writes the overcloud image to the root disk. 1.3. Configuring resource isolation on hyperconverged nodes Colocating Ceph OSD and Compute services on hyperconverged nodes risks resource contention between Red Hat Ceph Storage and Compute services, as neither are aware of each other's presence on the same host. Resource contention can result in degradation of service, which offsets the benefits of hyperconvergence. You must configure resource isolation for both Ceph and Compute services to prevent contention. Procedure Optional: Override the autogenerated Compute settings by adding the following parameters to a Compute environment file: Replace <ram> with the amount of RAM to reserve for the Ceph OSD services and instance overhead on hyperconverged nodes, in MB. Replace <ratio> with the ratio that the Compute scheduler should use when choosing which Compute node to deploy an instance on. For more information on the autogenerated Compute settings, see Process for autogenerating CPU and memory resources to reserve for the Compute service . To reserve memory resources for Red Hat Ceph Storage, set the parameter is_hci to true in /home/stack/templates/storage-container-config.yaml : This allows ceph-ansible to reserve memory resources for Red Hat Ceph Storage, and reduce memory growth by Ceph OSDs, by automatically adjusting the osd_memory_target parameter setting for a HCI deployment. Warning Red Hat does not recommend directly overriding the ceph_osd_docker_memory_limit parameter. Note As of ceph-ansible 3.2, the ceph_osd_docker_memory_limit is set automatically to the maximum memory of the host, as discovered by Ansible, regardless of whether the FileStore or BlueStore back end is used. Optional: By default, ceph-ansible reserves one vCPU for each Ceph OSD. If you require more than one CPU per Ceph OSD, add the following configuration to /home/stack/templates/storage-container-config.yaml : Replace <cpu_limit> with the number of CPUs to reserve for each Ceph OSD. For more information on how to tune CPU resources based on your hardware and workload, see Red Hat Ceph Storage Hardware Selection Guide . Optional: Reduce the priority of Red Hat Ceph Storage backfill and recovery operations when a Ceph OSD is removed by adding the following parameters to a Ceph environment file: Replace <priority_value> with the priority for recovery operations, relative to the OSD client OP priority. Replace <no_active_recovery_requests> with the number of active recovery requests per OSD, at one time. Replace <max_no_backfills> with the maximum number of backfills allowed to or from a single OSD. For more information on default Red Hat Ceph Storage backfill and recovery options, see Red Hat Ceph Storage backfill and recovery operations . 1.3.1. Process for autogenerating CPU and memory resources to reserve for the Compute service Director provides a default plan environment file for configuring resource constraints on hyperconverged nodes during deployment. 
This plan environment file instructs the OpenStack Workflow to complete the following processes: Retrieve the hardware introspection data collected during inspection of the hardware nodes. Calculate optimal CPU and memory allocation workload for Compute on hyperconverged nodes based on that data. Autogenerate the parameters required to configure those constraints and reserve CPU and memory resources for Compute. These parameters are defined under the hci_profile_config section of the plan-environment-derived-params.yaml file. Note The average_guest_memory_size_in_mb and average_guest_cpu_utilization_percentage parameters in each workload profile are used to calculate values for the reserved_host_memory and cpu_allocation_ratio settings of Compute. You can override the autogenerated Compute settings by adding the following parameters to your Compute environment file: Autogenerated nova.conf parameter Compute environment file override Description reserved_host_memory Sets how much RAM should be reserved for the Ceph OSD services and per-guest instance overhead on hyperconverged nodes. cpu_allocation_ratio Sets the ratio that the Compute scheduler should use when choosing which Compute node to deploy an instance on. These overrides are applied to all nodes that use the ComputeHCI role, namely, all hyperconverged nodes. For more information about manually determining optimal values for NovaReservedHostMemory and NovaCPUAllocationRatio , see OpenStack Workflow Compute CPU and memory calculator . Tip You can use the following script to calculate suitable baseline NovaReservedHostMemory and NovaCPUAllocationRatio values for your hyperconverged nodes. nova_mem_cpu_calc.py Additional resources Creating an inventory of the bare-metal node hardware 1.3.2. Red Hat Ceph Storage backfill and recovery operations When a Ceph OSD is removed, Red Hat Ceph Storage uses backfill and recovery operations to rebalance the cluster. Red Hat Ceph Storage does this to keep multiple copies of data according to the placement group policy. These operations use system resources. If a Red Hat Ceph Storage cluster is under load, its performance drops as it diverts resources to backfill and recovery. To mitigate this performance effect during OSD removal, you can reduce the priority of backfill and recovery operations. The trade off for this is that there are less data replicas for a longer time, which puts the data at a slightly greater risk. The parameters detailed in the following table are used to configure the priority of backfill and recovery operations. Parameter Description Default value osd_recovery_op_priority Sets the priority for recovery operations, relative to the OSD client OP priority. 3 osd_recovery_max_active Sets the number of active recovery requests per OSD, at one time. More requests accelerate recovery, but the requests place an increased load on the cluster. Set this to 1 if you want to reduce latency. 3 osd_max_backfills Sets the maximum number of backfills allowed to or from a single OSD. 1 1.4. Verifying available Red Hat Ceph Storage packages To help avoid overcloud deployment failures, verify that the required packages exist on your servers. 1.4.1. Verifying the ceph-ansible package version The undercloud contains Ansible-based validations that you can run to identify potential problems before you deploy the overcloud. These validations can help you avoid overcloud deployment failures by identifying common problems before they happen. 
Procedure Verify that the ceph-ansible package version you want is installed: 1.4.2. Verifying packages for pre-provisioned nodes Red Hat Ceph Storage (RHCS) can service only overcloud nodes that have a certain set of packages. When you use pre-provisioned nodes, you can verify the presence of these packages. For more information about pre-provisioned nodes, see Configuring a basic overcloud with pre-provisioned nodes . Procedure Verify that the pre-provisioned nodes contain the required packages: 1.5. Deploying the HCI overcloud You must deploy the overcloud after you complete the HCI configuration. Important Do not enable Instance HA when you deploy a Red Hat OpenStack Platform (RHOSP) HCI environment. Contact your Red Hat representative if you want to use Instance HA with hyperconverged RHOSP deployments with Red Hat Ceph Storage. Prerequisites You are using a separate base environment file, or set of files, for all other Red Hat Ceph Storage settings, for example, /home/stack/templates/storage-config.yaml . For more information, see Customizing the Storage service and Appendix A. Sample environment file: creating a Ceph Storage cluster . You have defined the number of nodes you are assigning to each role in the base environment file. For more information, see Assigning nodes and flavors to roles . During undercloud installation, you set generate_service_certificate=false in the undercloud.conf file. Otherwise, you must inject a trust anchor when you deploy the overcloud, as described in Enabling SSL/TLS on Overcloud Public Endpoints . Procedure Add your new role and environment files to the stack with your other environment files and deploy your HCI overcloud: Including /usr/share/openstack-tripleo-heat-templates/environments/ceph-ansible/ceph-ansible.yaml in the deployment command adds the base environment file that deploys a containerized Red Hat Ceph cluster, with all default settings. For more information, see Deploying an Overcloud with Containerized Red Hat Ceph . Note Include the following options in the deployment command if your deployment uses single root input/output virtualization (SR-IOV). If you use the ML2/OVS mechanism driver in your deployment, specify the following options: If you use the ML2/OVN mechanism driver in your deployment, specify the following options: Tip You can also use an answers file to specify which environment files to include in your deployment. For more information, see Including environment files in an overcloud deployment in the Director Installation and Usage guide. 1.5.1. Limiting the nodes on which ceph-ansible runs You can reduce deployment update time by limiting the nodes where ceph-ansible runs. When Red Hat OpenStack Platform (RHOSP) uses config-download to configure Ceph, you can use the --limit option to specify a list of nodes, instead of running config-download and ceph-ansible across your entire deployment. This feature is useful, for example, as part of scaling up your overcloud, or replacing a failed disk. In these scenarios, the deployment can run only on the new nodes that you add to the environment. Example scenario that uses --limit in a failed disk replacement In the following example procedure, the Ceph storage node oc0-cephstorage-0 has a disk failure so it receives a new factory clean disk. Ansible needs to run on the oc0-cephstorage-0 node so that the new disk can be used as an OSD but it does not need to run on all of the other Ceph storage nodes. 
Replace the example environment files and node names with those appropriate to your environment. Procedure Log in to the undercloud node as the stack user and source the stackrc credentials file: Complete one of the following steps so that the new disk is used to start the missing OSD. Run a stack update and include the --limit option to specify the nodes where you want ceph-ansible to run: In this example, the Controllers are included because the Ceph mons need Ansible to change their OSD definitions. If config-download has generated an ansible-playbook-command.sh script, you can also run the script with the --limit option to pass the specified nodes to ceph-ansible : Warning You must always include the undercloud in the limit list otherwise ceph-ansible cannot be executed when you use --limit . This is necessary because the ceph-ansible execution occurs through the external_deploy_steps_tasks playbook, which runs only on the undercloud. 1.6. OpenStack Workflow Compute CPU and memory calculator The OpenStack Workflow calculates the optimal settings for CPU and memory and uses the results to populate the parameters NovaReservedHostMemory and NovaCPUAllocationRatio . NovaReservedHostMemory The NovaReservedHostMemory parameter sets the amount of memory (in MB) to reserve for the host node. To determine an appropriate value for hyper-converged nodes, assume that each OSD consumes 3 GB of memory. Given a node with 256 GB memory and 10 OSDs, you can allocate 30 GB of memory for Ceph, leaving 226 GB for Compute. With that much memory a node can host, for example, 113 instances using 2 GB of memory each. However, you still need to consider additional overhead per instance for the hypervisor . Assuming this overhead is 0.5 GB, the same node can only host 90 instances, which accounts for the 226 GB divided by 2.5 GB. The amount of memory to reserve for the host node (that is, memory the Compute service should not use) is: (In * Ov) + (Os * RA) Where: In : number of instances Ov : amount of overhead memory needed per instance Os : number of OSDs on the node RA : amount of RAM that each OSD should have With 90 instances, this give us (90*0.5) + (10*3) = 75 GB. The Compute service expects this value in MB, namely 75000. The following Python code provides this computation: NovaCPUAllocationRatio The Compute scheduler uses NovaCPUAllocationRatio when choosing which Compute nodes on which to deploy an instance. By default, this is 16.0 (as in, 16:1). This means if there are 56 cores on a node, the Compute scheduler will schedule enough instances to consume 896 vCPUs on a node before considering the node unable to host any more. To determine a suitable NovaCPUAllocationRatio for a hyper-converged node, assume each Ceph OSD uses at least one core (unless the workload is I/O-intensive, and on a node with no SSD). On a node with 56 cores and 10 OSDs, this would leave 46 cores for Compute. If each instance uses 100 per cent of the CPU it receives, then the ratio would simply be the number of instance vCPUs divided by the number of cores; that is, 46 / 56 = 0.8. However, since instances do not normally consume 100 per cent of their allocated CPUs, you can raise the NovaCPUAllocationRatio by taking the anticipated percentage into account when determining the number of required guest vCPUs. So, if we can predict that instances will only use 10 per cent (or 0.1) of their vCPU, then the number of vCPUs for instances can be expressed as 46 / 0.1 = 460. 
When this value is divided by the number of cores (56), the ratio increases to approximately 8. The following Python code provides this computation: 1.7. Additional resources For more detailed information about the Red Hat OpenStack Platform (RHOSP), see the following guides: Director Installation and Usage : This guide provides guidance on the end-to-end deployment of a RHOSP environment, both undercloud and overcloud. Advanced Overcloud Customization : This guide describes how to configure advanced RHOSP features through the director, such as how to use custom roles. Deploying an Overcloud with Containerized Red Hat Ceph : This guide describes how to deploy an overcloud that uses Red Hat Ceph Storage as a storage provider. Networking Guide : This guide provides details on RHOSP networking tasks. | [
"[stack@director ~]USD source ~/stackrc",
"(undercloud)USD openstack overcloud roles generate -o /home/stack/templates/roles_data_hci.yaml Controller ComputeHCI Compute CephStorage",
"- name: ComputeHCI description: | Compute node role hosting Ceph OSD tags: - compute networks: InternalApi: subnet: internal_api_subnet Tenant: subnet: tenant_subnet Storage: subnet: storage_subnet StorageMgmt: subnet: storage_mgmt_subnet",
"(undercloud)USD openstack flavor create --id auto --ram <ram_size_mb> --disk <disk_size_gb> --vcpus <no_vcpus> computeHCI",
"(undercloud)USD openstack baremetal node list",
"(undercloud)USD openstack baremetal node set --resource-class baremetal.HCI <node>",
"(undercloud)USD openstack flavor set --property resources:CUSTOM_BAREMETAL_HCI=1 computeHCI",
"(undercloud)USD openstack flavor set --property resources:VCPU=0 --property resources:MEMORY_MB=0 --property resources:DISK_GB=0 computeHCI",
"parameter_defaults: OvercloudComputeHCIFlavor: computeHCI ComputeHCICount: 3 Controller: control ControllerCount: 3",
"(undercloud)USD openstack baremetal introspection data save 1a4e30da-b6dc-499d-ba87-0bd8a3819bc0 | jq \".inventory.disks\"",
"[ { \"size\": 299439751168, \"rotational\": true, \"vendor\": \"DELL\", \"name\": \"/dev/sda\", \"wwn_vendor_extension\": \"0x1ea4dcc412a9632b\", \"wwn_with_extension\": \"0x61866da04f3807001ea4dcc412a9632b\", \"model\": \"PERC H330 Mini\", \"wwn\": \"0x61866da04f380700\", \"serial\": \"61866da04f3807001ea4dcc412a9632b\" } { \"size\": 299439751168, \"rotational\": true, \"vendor\": \"DELL\", \"name\": \"/dev/sdb\", \"wwn_vendor_extension\": \"0x1ea4e13c12e36ad6\", \"wwn_with_extension\": \"0x61866da04f380d001ea4e13c12e36ad6\", \"model\": \"PERC H330 Mini\", \"wwn\": \"0x61866da04f380d00\", \"serial\": \"61866da04f380d001ea4e13c12e36ad6\" } { \"size\": 299439751168, \"rotational\": true, \"vendor\": \"DELL\", \"name\": \"/dev/sdc\", \"wwn_vendor_extension\": \"0x1ea4e31e121cfb45\", \"wwn_with_extension\": \"0x61866da04f37fc001ea4e31e121cfb45\", \"model\": \"PERC H330 Mini\", \"wwn\": \"0x61866da04f37fc00\", \"serial\": \"61866da04f37fc001ea4e31e121cfb45\" } ]",
"(undercloud)USD openstack baremetal node set --property root_device='{\"serial\":\"<serial_number>\"}' <node-uuid>",
"(undercloud)USD openstack baremetal node set --property root_device='{\"serial\": \"61866da04f380d001ea4e13c12e36ad6\"}' 1a4e30da-b6dc-499d-ba87-0bd8a3819bc0",
"parameter_defaults: ComputeHCIParameters: NovaReservedHostMemory: <ram> NovaCPUAllocationRatio: <ratio>",
"parameter_defaults: CephAnsibleExtraConfig: is_hci: true",
"parameter_defaults: CephAnsibleExtraConfig: ceph_osd_docker_cpu_limit: <cpu_limit>",
"parameter_defaults: CephConfigOverrides: osd_recovery_op_priority: <priority_value> osd_recovery_max_active: <no_active_recovery_requests> osd_max_backfills: <max_no_backfills>",
"parameter_defaults: ComputeHCIParameters: NovaReservedHostMemory: 181000",
"parameter_defaults: ComputeHCIParameters: NovaCPUAllocationRatio: 8.2",
"ansible-playbook -i /usr/bin/tripleo-ansible-inventory /usr/share/ansible/validation-playbooks/ceph-ansible-installed.yaml",
"ansible-playbook -i /usr/bin/tripleo-ansible-inventory /usr/share/ansible/validation-playbooks/ceph-dependencies-installed.yaml",
"(undercloud)USD openstack overcloud deploy --templates -e [your environment files] -r /home/stack/templates/roles_data_hci.yaml -e /usr/share/openstack-tripleo-heat-templates/environments/ceph-ansible/ceph-ansible.yaml -e /usr/share/openstack-tripleo-heat-templates/environments/network-isolation.yaml -e /usr/share/openstack-tripleo-heat-templates/environments/network-environment.yaml -e /usr/share/openstack-tripleo-heat-templates/environments/net-single-nic-with-vlans.yaml -e /home/stack/templates/storage-config.yaml -e /home/stack/templates/storage-container-config.yaml -n /home/stack/templates/network_data.yaml [-e /home/stack/templates/ceph-backfill-recovery.yaml \\ ] --ntp-server pool.ntp.org",
"-e /usr/share/openstack-tripleo-heat-templates/environments/services/neutron-sriov.yaml -e /home/stack/templates/network-environment.yaml",
"-e /usr/share/openstack-tripleo-heat-templates/environments/services/neutron-ovn-sriov.yaml -e /home/stack/templates/network-environment.yaml",
"source stackrc",
"openstack overcloud deploy --templates -r /home/stack/roles_data.yaml -n /usr/share/openstack-tripleo-heat-templates/network_data_dashboard.yaml -e /usr/share/openstack-tripleo-heat-templates/environments/ceph-ansible/ceph-ansible.yaml -e ~/my-ceph-settings.yaml -e <other-environment_files> --limit oc0-controller-0:oc0-controller-2:oc0-controller-1:oc0-cephstorage-0:undercloud",
"./ansible-playbook-command.sh --limit oc0-controller-0:oc0-controller-2:oc0-controller-1:oc0-cephstorage-0:undercloud",
"left_over_mem = mem - (GB_per_OSD * osds) number_of_guests = int(left_over_mem / (average_guest_size + GB_overhead_per_guest)) nova_reserved_mem_MB = MB_per_GB * ( (GB_per_OSD * osds) + (number_of_guests * GB_overhead_per_guest))",
"cores_per_OSD = 1.0 average_guest_util = 0.1 # 10% nonceph_cores = cores - (cores_per_OSD * osds) guest_vCPUs = nonceph_cores / average_guest_util cpu_allocation_ratio = guest_vCPUs / cores"
]
| https://docs.redhat.com/en/documentation/red_hat_openstack_platform/16.2/html/hyperconverged_infrastructure_guide/assembly_configuring-and-deploying-rhosp-hci_osp-hci |
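The two Python snippets in the command listing above are generic. The following is a minimal worked sketch, not part of the original guide, that plugs the example figures from section 1.6 into those same formulas: 256 GB of RAM, 10 OSDs, 3 GB of memory and one core per OSD, 2 GB of memory plus 0.5 GB of hypervisor overhead per instance, 56 cores, and an anticipated 10 per cent average guest CPU utilization. The variable names mirror the guide's own snippets; the input values are only the section's examples, so substitute your own hardware figures.

# Worked example for the section 1.6 memory and CPU calculations (illustrative values).
MB_per_GB = 1000

mem = 256                    # total node memory, in GB
osds = 10                    # number of Ceph OSDs on the node
GB_per_OSD = 3               # memory assumed per OSD, in GB
GB_overhead_per_guest = 0.5  # hypervisor overhead per instance, in GB
average_guest_size = 2       # average instance memory, in GB

cores = 56                   # total node cores
cores_per_OSD = 1.0          # cores assumed per OSD
average_guest_util = 0.1     # anticipated 10% CPU utilization per instance

# NovaReservedHostMemory
left_over_mem = mem - (GB_per_OSD * osds)                                             # 226 GB left for Compute
number_of_guests = int(left_over_mem / (average_guest_size + GB_overhead_per_guest))  # 90 instances
nova_reserved_mem_MB = MB_per_GB * ((GB_per_OSD * osds) + (number_of_guests * GB_overhead_per_guest))

# NovaCPUAllocationRatio
nonceph_cores = cores - (cores_per_OSD * osds)    # 46 cores left for Compute
guest_vCPUs = nonceph_cores / average_guest_util  # 460 guest vCPUs
cpu_allocation_ratio = guest_vCPUs / cores        # approximately 8.2

print("NovaReservedHostMemory:", int(nova_reserved_mem_MB))       # 75000
print("NovaCPUAllocationRatio:", round(cpu_allocation_ratio, 1))  # 8.2

Running the sketch reproduces the 75000 MB reservation and the approximately 8.2 allocation ratio derived in the text; these are the kinds of values that you set through the ComputeHCIParameters section of an environment file.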
Chapter 10. Adding more RHEL compute machines to an OpenShift Container Platform cluster | Chapter 10. Adding more RHEL compute machines to an OpenShift Container Platform cluster If your OpenShift Container Platform cluster already includes Red Hat Enterprise Linux (RHEL) compute machines, which are also known as worker machines, you can add more RHEL compute machines to it. 10.1. About adding RHEL compute nodes to a cluster In OpenShift Container Platform 4.16, you have the option of using Red Hat Enterprise Linux (RHEL) machines as compute machines in your cluster if you use a user-provisioned or installer-provisioned infrastructure installation on the x86_64 architecture. You must use Red Hat Enterprise Linux CoreOS (RHCOS) machines for the control plane machines in your cluster. If you choose to use RHEL compute machines in your cluster, you are responsible for all operating system life cycle management and maintenance. You must perform system updates, apply patches, and complete all other required tasks. For installer-provisioned infrastructure clusters, you must manually add RHEL compute machines because automatic scaling in installer-provisioned infrastructure clusters adds Red Hat Enterprise Linux CoreOS (RHCOS) compute machines by default. Important Because removing OpenShift Container Platform from a machine in the cluster requires destroying the operating system, you must use dedicated hardware for any RHEL machines that you add to the cluster. Swap memory is disabled on all RHEL machines that you add to your OpenShift Container Platform cluster. You cannot enable swap memory on these machines. 10.2. System requirements for RHEL compute nodes The Red Hat Enterprise Linux (RHEL) compute machine hosts in your OpenShift Container Platform environment must meet the following minimum hardware specifications and system-level requirements: You must have an active OpenShift Container Platform subscription on your Red Hat account. If you do not, contact your sales representative for more information. Production environments must provide compute machines to support your expected workloads. As a cluster administrator, you must calculate the expected workload and add about 10% for overhead. For production environments, allocate enough resources so that a node host failure does not affect your maximum capacity. Each system must meet the following hardware requirements: Physical or virtual system, or an instance running on a public or private IaaS. Base operating system: Use RHEL 8.8 or a later version with the minimal installation option. Important Adding RHEL 7 compute machines to an OpenShift Container Platform cluster is not supported. If you have RHEL 7 compute machines that were previously supported in a past OpenShift Container Platform version, you cannot upgrade them to RHEL 8. You must deploy new RHEL 8 hosts, and the old RHEL 7 hosts should be removed. See the "Deleting nodes" section for more information. For the most recent list of major functionality that has been deprecated or removed within OpenShift Container Platform, refer to the Deprecated and removed features section of the OpenShift Container Platform release notes. If you deployed OpenShift Container Platform in FIPS mode, you must enable FIPS on the RHEL machine before you boot it. See Installing a RHEL 8 system with FIPS mode enabled in the RHEL 8 documentation. 
Important To enable FIPS mode for your cluster, you must run the installation program from a Red Hat Enterprise Linux (RHEL) computer configured to operate in FIPS mode. For more information about configuring FIPS mode on RHEL, see Switching RHEL to FIPS mode . When running Red Hat Enterprise Linux (RHEL) or Red Hat Enterprise Linux CoreOS (RHCOS) booted in FIPS mode, OpenShift Container Platform core components use the RHEL cryptographic libraries that have been submitted to NIST for FIPS 140-2/140-3 Validation on only the x86_64, ppc64le, and s390x architectures. NetworkManager 1.0 or later. 1 vCPU. Minimum 8 GB RAM. Minimum 15 GB hard disk space for the file system containing /var/ . Minimum 1 GB hard disk space for the file system containing /usr/local/bin/ . Minimum 1 GB hard disk space for the file system containing its temporary directory. The temporary system directory is determined according to the rules defined in the tempfile module in the Python standard library. Each system must meet any additional requirements for your system provider. For example, if you installed your cluster on VMware vSphere, your disks must be configured according to its storage guidelines and the disk.enableUUID=TRUE attribute must be set. Each system must be able to access the cluster's API endpoints by using DNS-resolvable hostnames. Any network security access control that is in place must allow system access to the cluster's API service endpoints. For clusters installed on Microsoft Azure: Ensure the system includes the hardware requirement of a Standard_D8s_v3 virtual machine. Enable Accelerated Networking. Accelerated Networking uses single root I/O virtualization (SR-IOV) to provide Microsoft Azure VMs with a more direct path to the switch. Additional resources Deleting nodes Accelerated Networking for Microsoft Azure VMs 10.2.1. Certificate signing requests management Because your cluster has limited access to automatic machine management when you use infrastructure that you provision, you must provide a mechanism for approving cluster certificate signing requests (CSRs) after installation. The kube-controller-manager only approves the kubelet client CSRs. The machine-approver cannot guarantee the validity of a serving certificate that is requested by using kubelet credentials because it cannot confirm that the correct machine issued the request. You must determine and implement a method of verifying the validity of the kubelet serving certificate requests and approving them. 10.3. Preparing an image for your cloud Amazon Machine Images (AMI) are required since various image formats cannot be used directly by AWS. You may use the AMIs that Red Hat has provided, or you can manually import your own images. The AMI must exist before the EC2 instance can be provisioned. You must list the AMI IDs so that the correct RHEL version needed for the compute machines is selected. 10.3.1. Listing latest available RHEL images on AWS AMI IDs correspond to native boot images for AWS. Because an AMI must exist before the EC2 instance is provisioned, you will need to know the AMI ID before configuration. The AWS Command Line Interface (CLI) is used to list the available Red Hat Enterprise Linux (RHEL) image IDs. Prerequisites You have installed the AWS CLI. 
Procedure Use this command to list RHEL 8.8 Amazon Machine Images (AMI): USD aws ec2 describe-images --owners 309956199498 \ 1 --query 'sort_by(Images, &CreationDate)[*].[CreationDate,Name,ImageId]' \ 2 --filters "Name=name,Values=RHEL-8.8*" \ 3 --region us-east-1 \ 4 --output table 5 1 The --owners command option shows Red Hat images based on the account ID 309956199498. Important This account ID is required to display AMI IDs for images that are provided by Red Hat. 2 The --query command option sets how the images are sorted with the parameters 'sort_by(Images, &CreationDate)[*].[CreationDate,Name,ImageId]'. In this case, the images are sorted by the creation date, and the table is structured to show the creation date, the name of the image, and the AMI IDs. 3 The --filters command option sets which version of RHEL is shown. In this example, because the filter is set to "Name=name,Values=RHEL-8.8*", RHEL 8.8 AMIs are shown. 4 The --region command option sets the region where an AMI is stored. 5 The --output command option sets how the results are displayed. Note When creating a RHEL compute machine for AWS, ensure that the AMI is RHEL 8.8 or a later version of RHEL 8. Example output ------------------------------------------------------------------------------------------------------------ | DescribeImages | +---------------------------+-----------------------------------------------------+------------------------+ | 2021-03-18T14:23:11.000Z | RHEL-8.8.0_HVM_BETA-20210309-x86_64-1-Hourly2-GP2 | ami-07eeb4db5f7e5a8fb | | 2021-03-18T14:38:28.000Z | RHEL-8.8.0_HVM_BETA-20210309-arm64-1-Hourly2-GP2 | ami-069d22ec49577d4bf | | 2021-05-18T19:06:34.000Z | RHEL-8.8.0_HVM-20210504-arm64-2-Hourly2-GP2 | ami-01fc429821bf1f4b4 | | 2021-05-18T20:09:47.000Z | RHEL-8.8.0_HVM-20210504-x86_64-2-Hourly2-GP2 | ami-0b0af3577fe5e3532 | +---------------------------+-----------------------------------------------------+------------------------+ Additional resources You may also manually import RHEL images to AWS. If you prefer to script this AMI lookup, a boto3 sketch appears after the command listing for this chapter. 10.4. Preparing a RHEL compute node Before you add a Red Hat Enterprise Linux (RHEL) machine to your OpenShift Container Platform cluster, you must register each host with Red Hat Subscription Manager (RHSM), attach an active OpenShift Container Platform subscription, and enable the required repositories.
On each host, register with RHSM: # subscription-manager register --username=<user_name> --password=<password> Pull the latest subscription data from RHSM: # subscription-manager refresh List the available subscriptions: # subscription-manager list --available --matches '*OpenShift*' In the output for the command, find the pool ID for an OpenShift Container Platform subscription and attach it: # subscription-manager attach --pool=<pool_id> Disable all yum repositories: Disable all the enabled RHSM repositories: # subscription-manager repos --disable="*" List the remaining yum repositories and note their names under repo id, if any: # yum repolist Use yum-config-manager to disable the remaining yum repositories: # yum-config-manager --disable <repo_id> Alternatively, disable all repositories: # yum-config-manager --disable \* Note that this might take a few minutes if you have a large number of available repositories. Enable only the repositories required by OpenShift Container Platform 4.16: # subscription-manager repos \ --enable="rhel-8-for-x86_64-baseos-rpms" \ --enable="rhel-8-for-x86_64-appstream-rpms" \ --enable="rhocp-4.16-for-rhel-8-x86_64-rpms" \ --enable="fast-datapath-for-rhel-8-x86_64-rpms" Stop and disable firewalld on the host: # systemctl disable --now firewalld.service Note You must not enable firewalld later. If you do, you cannot access OpenShift Container Platform logs on the worker. 10.5. Attaching the role permissions to a RHEL instance in AWS Using the Amazon IAM console in your browser, you may select the needed roles and assign them to a worker node. Procedure From the AWS IAM console, create your desired IAM role. Attach the IAM role to the desired worker node. Additional resources See Required AWS permissions for IAM roles. 10.6. Tagging a RHEL worker node as owned or shared A cluster uses the value of the kubernetes.io/cluster/<clusterid>,Value=(owned|shared) tag to determine the lifetime of the resources related to the AWS cluster. The owned tag value should be added if the resource should be destroyed as part of destroying the cluster. The shared tag value should be added if the resource continues to exist after the cluster has been destroyed. This tagging denotes that the cluster uses this resource, but there is a separate owner for the resource. Procedure With RHEL compute machines, the RHEL worker instance must be tagged with kubernetes.io/cluster/<clusterid>=owned or kubernetes.io/cluster/<clusterid>=shared. Note Do not tag all existing security groups with the kubernetes.io/cluster/<name>,Value=<clusterid> tag, or Elastic Load Balancing (ELB) will not be able to create a load balancer. 10.7. Adding more RHEL compute machines to your cluster You can add more compute machines that use Red Hat Enterprise Linux (RHEL) as the operating system to an OpenShift Container Platform 4.16 cluster. Prerequisites Your OpenShift Container Platform cluster already contains RHEL compute nodes. The hosts file that you used to add the first RHEL compute machines to your cluster is on the machine that you use to run the playbook. The machine that you run the playbook on must be able to access all of the RHEL hosts. You can use any method that your company allows, including a bastion with an SSH proxy or a VPN. The kubeconfig file for the cluster and the installation program that you used to install the cluster are on the machine that you use to run the playbook. You must prepare the RHEL hosts for installation.
Configure a user on the machine that you run the playbook on that has SSH access to all of the RHEL hosts. If you use SSH key-based authentication, you must manage the key with an SSH agent. Install the OpenShift CLI ( oc ) on the machine that you run the playbook on. Procedure Open the Ansible inventory file at /<path>/inventory/hosts that defines your compute machine hosts and required variables. Rename the [new_workers] section of the file to [workers] . Add a [new_workers] section to the file and define the fully-qualified domain names for each new host. The file resembles the following example: In this example, the mycluster-rhel8-0.example.com and mycluster-rhel8-1.example.com machines are in the cluster and you add the mycluster-rhel8-2.example.com and mycluster-rhel8-3.example.com machines. Navigate to the Ansible playbook directory: USD cd /usr/share/ansible/openshift-ansible Run the scaleup playbook: USD ansible-playbook -i /<path>/inventory/hosts playbooks/scaleup.yml 1 1 For <path> , specify the path to the Ansible inventory file that you created. 10.8. Approving the certificate signing requests for your machines When you add machines to a cluster, two pending certificate signing requests (CSRs) are generated for each machine that you added. You must confirm that these CSRs are approved or, if necessary, approve them yourself. The client requests must be approved first, followed by the server requests. Prerequisites You added machines to your cluster. Procedure Confirm that the cluster recognizes the machines: USD oc get nodes Example output NAME STATUS ROLES AGE VERSION master-0 Ready master 63m v1.29.4 master-1 Ready master 63m v1.29.4 master-2 Ready master 64m v1.29.4 The output lists all of the machines that you created. Note The preceding output might not include the compute nodes, also known as worker nodes, until some CSRs are approved. Review the pending CSRs and ensure that you see the client requests with the Pending or Approved status for each machine that you added to the cluster: USD oc get csr Example output NAME AGE REQUESTOR CONDITION csr-8b2br 15m system:serviceaccount:openshift-machine-config-operator:node-bootstrapper Pending csr-8vnps 15m system:serviceaccount:openshift-machine-config-operator:node-bootstrapper Pending ... In this example, two machines are joining the cluster. You might see more approved CSRs in the list. If the CSRs were not approved, after all of the pending CSRs for the machines you added are in Pending status, approve the CSRs for your cluster machines: Note Because the CSRs rotate automatically, approve your CSRs within an hour of adding the machines to the cluster. If you do not approve them within an hour, the certificates will rotate, and more than two certificates will be present for each node. You must approve all of these certificates. After the client CSR is approved, the Kubelet creates a secondary CSR for the serving certificate, which requires manual approval. Then, subsequent serving certificate renewal requests are automatically approved by the machine-approver if the Kubelet requests a new certificate with identical parameters. Note For clusters running on platforms that are not machine API enabled, such as bare metal and other user-provisioned infrastructure, you must implement a method of automatically approving the kubelet serving certificate requests (CSRs). 
If a request is not approved, then the oc exec, oc rsh, and oc logs commands cannot succeed, because a serving certificate is required when the API server connects to the kubelet. Any operation that contacts the Kubelet endpoint requires this certificate approval to be in place. The method must watch for new CSRs, confirm that the CSR was submitted by the node-bootstrapper service account in the system:node or system:admin groups, and confirm the identity of the node. To approve them individually, run the following command for each valid CSR: USD oc adm certificate approve <csr_name> 1 1 <csr_name> is the name of a CSR from the list of current CSRs. To approve all pending CSRs, run the following command: USD oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{"\n"}}{{end}}{{end}}' | xargs --no-run-if-empty oc adm certificate approve Note Some Operators might not become available until some CSRs are approved. Now that your client requests are approved, you must review the server requests for each machine that you added to the cluster: USD oc get csr Example output NAME AGE REQUESTOR CONDITION csr-bfd72 5m26s system:node:ip-10-0-50-126.us-east-2.compute.internal Pending csr-c57lv 5m26s system:node:ip-10-0-95-157.us-east-2.compute.internal Pending ... If the remaining CSRs are not approved, and are in the Pending status, approve the CSRs for your cluster machines: To approve them individually, run the following command for each valid CSR: USD oc adm certificate approve <csr_name> 1 1 <csr_name> is the name of a CSR from the list of current CSRs. To approve all pending CSRs, run the following command: USD oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{"\n"}}{{end}}{{end}}' | xargs oc adm certificate approve After all client and server CSRs have been approved, the machines have the Ready status. Verify this by running the following command: USD oc get nodes Example output NAME STATUS ROLES AGE VERSION master-0 Ready master 73m v1.29.4 master-1 Ready master 73m v1.29.4 master-2 Ready master 74m v1.29.4 worker-0 Ready worker 11m v1.29.4 worker-1 Ready worker 11m v1.29.4 Note It can take a few minutes after approval of the server CSRs for the machines to transition to the Ready status. Additional information For more information on CSRs, see Certificate Signing Requests. 10.9. Required parameters for the Ansible hosts file You must define the following parameters in the Ansible hosts file before you add Red Hat Enterprise Linux (RHEL) compute machines to your cluster.
Parameter: ansible_user . Description: The SSH user that allows SSH-based authentication without requiring a password. If you use SSH key-based authentication, then you must manage the key with an SSH agent. Values: A user name on the system. The default value is root.
Parameter: ansible_become . Description: If the value of ansible_user is not root, you must set ansible_become to True, and the user that you specify as the ansible_user must be configured for passwordless sudo access. Values: True. If the value is not True, do not specify and define this parameter.
Parameter: openshift_kubeconfig_path . Description: Specifies a path and file name to a local directory that contains the kubeconfig file for your cluster. Values: The path and name of the configuration file. | [
"aws ec2 describe-images --owners 309956199498 \\ 1 --query 'sort_by(Images, &CreationDate)[*].[CreationDate,Name,ImageId]' \\ 2 --filters \"Name=name,Values=RHEL-8.8*\" \\ 3 --region us-east-1 \\ 4 --output table 5",
"------------------------------------------------------------------------------------------------------------ | DescribeImages | +---------------------------+-----------------------------------------------------+------------------------+ | 2021-03-18T14:23:11.000Z | RHEL-8.8.0_HVM_BETA-20210309-x86_64-1-Hourly2-GP2 | ami-07eeb4db5f7e5a8fb | | 2021-03-18T14:38:28.000Z | RHEL-8.8.0_HVM_BETA-20210309-arm64-1-Hourly2-GP2 | ami-069d22ec49577d4bf | | 2021-05-18T19:06:34.000Z | RHEL-8.8.0_HVM-20210504-arm64-2-Hourly2-GP2 | ami-01fc429821bf1f4b4 | | 2021-05-18T20:09:47.000Z | RHEL-8.8.0_HVM-20210504-x86_64-2-Hourly2-GP2 | ami-0b0af3577fe5e3532 | +---------------------------+-----------------------------------------------------+------------------------+",
"subscription-manager register --username=<user_name> --password=<password>",
"subscription-manager refresh",
"subscription-manager list --available --matches '*OpenShift*'",
"subscription-manager attach --pool=<pool_id>",
"subscription-manager repos --disable=\"*\"",
"yum repolist",
"yum-config-manager --disable <repo_id>",
"yum-config-manager --disable \\*",
"subscription-manager repos --enable=\"rhel-8-for-x86_64-baseos-rpms\" --enable=\"rhel-8-for-x86_64-appstream-rpms\" --enable=\"rhocp-4.16-for-rhel-8-x86_64-rpms\" --enable=\"fast-datapath-for-rhel-8-x86_64-rpms\"",
"systemctl disable --now firewalld.service",
"[all:vars] ansible_user=root #ansible_become=True openshift_kubeconfig_path=\"~/.kube/config\" [workers] mycluster-rhel8-0.example.com mycluster-rhel8-1.example.com [new_workers] mycluster-rhel8-2.example.com mycluster-rhel8-3.example.com",
"cd /usr/share/ansible/openshift-ansible",
"ansible-playbook -i /<path>/inventory/hosts playbooks/scaleup.yml 1",
"oc get nodes",
"NAME STATUS ROLES AGE VERSION master-0 Ready master 63m v1.29.4 master-1 Ready master 63m v1.29.4 master-2 Ready master 64m v1.29.4",
"oc get csr",
"NAME AGE REQUESTOR CONDITION csr-8b2br 15m system:serviceaccount:openshift-machine-config-operator:node-bootstrapper Pending csr-8vnps 15m system:serviceaccount:openshift-machine-config-operator:node-bootstrapper Pending",
"oc adm certificate approve <csr_name> 1",
"oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{\"\\n\"}}{{end}}{{end}}' | xargs --no-run-if-empty oc adm certificate approve",
"oc get csr",
"NAME AGE REQUESTOR CONDITION csr-bfd72 5m26s system:node:ip-10-0-50-126.us-east-2.compute.internal Pending csr-c57lv 5m26s system:node:ip-10-0-95-157.us-east-2.compute.internal Pending",
"oc adm certificate approve <csr_name> 1",
"oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{\"\\n\"}}{{end}}{{end}}' | xargs oc adm certificate approve",
"oc get nodes",
"NAME STATUS ROLES AGE VERSION master-0 Ready master 73m v1.29.4 master-1 Ready master 73m v1.29.4 master-2 Ready master 74m v1.29.4 worker-0 Ready worker 11m v1.29.4 worker-1 Ready worker 11m v1.29.4"
]
| https://docs.redhat.com/en/documentation/openshift_container_platform/4.16/html/machine_management/more-rhel-compute |
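If you prefer to script the AMI lookup from section 10.3.1 instead of calling the AWS CLI, the following is a minimal sketch, not part of the original chapter, that uses the boto3 library to mirror the owners, name filter, region, and sort order of the aws ec2 describe-images command shown above. The region and filter values are the chapter's examples; adjust them for your environment.

import boto3

# Minimal sketch: list RHEL 8.8 AMIs published by Red Hat (account ID 309956199498),
# mirroring the `aws ec2 describe-images` example in section 10.3.1.
ec2 = boto3.client("ec2", region_name="us-east-1")  # example region from the chapter

response = ec2.describe_images(
    Owners=["309956199498"],                              # Red Hat's account ID
    Filters=[{"Name": "name", "Values": ["RHEL-8.8*"]}],  # RHEL 8.8 images only
)

# Sort by creation date, as the CLI --query expression does, and print the
# creation date, image name, and AMI ID for each image.
for image in sorted(response["Images"], key=lambda i: i["CreationDate"]):
    print(image["CreationDate"], image["Name"], image["ImageId"])

As with the CLI example, choose an x86_64 image that is RHEL 8.8 or a later version of RHEL 8 for the compute machines.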
Building applications | Building applications OpenShift Container Platform 4.13 Creating and managing applications on OpenShift Container Platform Red Hat OpenShift Documentation Team | [
"oc new-project <project_name> --description=\"<description>\" --display-name=\"<display_name>\"",
"oc new-project hello-openshift --description=\"This is an example project\" --display-name=\"Hello OpenShift\"",
"oc get projects",
"oc project <project_name>",
"apiVersion: operator.openshift.io/v1 kind: Console metadata: name: cluster spec: customization: projectAccess: availableClusterRoles: - admin - edit - view",
"oc project <project_name> 1",
"oc status",
"oc delete project <project_name> 1",
"oc new-project <project> --as=<user> --as-group=system:authenticated --as-group=system:authenticated:oauth",
"oc adm create-bootstrap-project-template -o yaml > template.yaml",
"oc create -f template.yaml -n openshift-config",
"oc edit project.config.openshift.io/cluster",
"apiVersion: config.openshift.io/v1 kind: Project metadata: spec: projectRequestTemplate: name: <template_name>",
"oc describe clusterrolebinding.rbac self-provisioners",
"Name: self-provisioners Labels: <none> Annotations: rbac.authorization.kubernetes.io/autoupdate=true Role: Kind: ClusterRole Name: self-provisioner Subjects: Kind Name Namespace ---- ---- --------- Group system:authenticated:oauth",
"oc patch clusterrolebinding.rbac self-provisioners -p '{\"subjects\": null}'",
"oc adm policy remove-cluster-role-from-group self-provisioner system:authenticated:oauth",
"oc edit clusterrolebinding.rbac self-provisioners",
"apiVersion: authorization.openshift.io/v1 kind: ClusterRoleBinding metadata: annotations: rbac.authorization.kubernetes.io/autoupdate: \"false\"",
"oc patch clusterrolebinding.rbac self-provisioners -p '{ \"metadata\": { \"annotations\": { \"rbac.authorization.kubernetes.io/autoupdate\": \"false\" } } }'",
"oc new-project test",
"Error from server (Forbidden): You may not request a new project via this API.",
"You may not request a new project via this API.",
"oc edit project.config.openshift.io/cluster",
"apiVersion: config.openshift.io/v1 kind: Project metadata: spec: projectRequestMessage: <message_string>",
"apiVersion: config.openshift.io/v1 kind: Project metadata: spec: projectRequestMessage: To request a project, contact your system administrator at [email protected].",
"oc get csv",
"oc policy add-role-to-user edit <user> -n <target_project>",
"oc new-app /<path to source code>",
"oc new-app https://github.com/sclorg/cakephp-ex",
"oc new-app https://github.com/youruser/yourprivaterepo --source-secret=yoursecret",
"oc new-app https://github.com/sclorg/s2i-ruby-container.git --context-dir=2.0/test/puma-test-app",
"oc new-app https://github.com/openshift/ruby-hello-world.git#beta4",
"oc new-app /home/user/code/myapp --strategy=docker",
"oc new-app myproject/my-ruby~https://github.com/openshift/ruby-hello-world.git",
"oc new-app openshift/ruby-20-centos7:latest~/home/user/code/my-ruby-app",
"oc new-app mysql",
"oc new-app myregistry:5000/example/myimage",
"oc new-app my-stream:v1",
"oc create -f examples/sample-app/application-template-stibuild.json",
"oc new-app ruby-helloworld-sample",
"oc new-app -f examples/sample-app/application-template-stibuild.json",
"oc new-app ruby-helloworld-sample -p ADMIN_USERNAME=admin -p ADMIN_PASSWORD=mypassword",
"ADMIN_USERNAME=admin ADMIN_PASSWORD=mypassword",
"oc new-app ruby-helloworld-sample --param-file=helloworld.params",
"oc new-app openshift/postgresql-92-centos7 -e POSTGRESQL_USER=user -e POSTGRESQL_DATABASE=db -e POSTGRESQL_PASSWORD=password",
"POSTGRESQL_USER=user POSTGRESQL_DATABASE=db POSTGRESQL_PASSWORD=password",
"oc new-app openshift/postgresql-92-centos7 --env-file=postgresql.env",
"cat postgresql.env | oc new-app openshift/postgresql-92-centos7 --env-file=-",
"oc new-app openshift/ruby-23-centos7 --build-env HTTP_PROXY=http://myproxy.net:1337/ --build-env GEM_HOME=~/.gem",
"HTTP_PROXY=http://myproxy.net:1337/ GEM_HOME=~/.gem",
"oc new-app openshift/ruby-23-centos7 --build-env-file=ruby.env",
"cat ruby.env | oc new-app openshift/ruby-23-centos7 --build-env-file=-",
"oc new-app https://github.com/openshift/ruby-hello-world -l name=hello-world",
"oc new-app https://github.com/openshift/ruby-hello-world -o yaml > myapp.yaml",
"vi myapp.yaml",
"oc create -f myapp.yaml",
"oc new-app https://github.com/openshift/ruby-hello-world --name=myapp",
"oc new-app https://github.com/openshift/ruby-hello-world -n myproject",
"oc new-app https://github.com/openshift/ruby-hello-world mysql",
"oc new-app ruby+mysql",
"oc new-app ruby~https://github.com/openshift/ruby-hello-world mysql --group=ruby+mysql",
"oc new-app --search php",
"`postgresclusters.postgres-operator.crunchydata.com \"hippo\" is forbidden: User \"system:serviceaccount:my-petclinic:service-binding-operator\" cannot get resource \"postgresclusters\" in API group \"postgres-operator.crunchydata.com\" in the namespace \"my-petclinic\"`",
"kind: RoleBinding apiVersion: rbac.authorization.k8s.io/v1 metadata: name: service-binding-crunchy-postgres-viewer subjects: - kind: ServiceAccount name: service-binding-operator roleRef: apiGroup: rbac.authorization.k8s.io kind: ClusterRole name: service-binding-crunchy-postgres-viewer-role",
"`postgresclusters.postgres-operator.crunchydata.com \"hippo\" is forbidden: User \"system:serviceaccount:my-petclinic:service-binding-operator\" cannot get resource \"postgresclusters\" in API group \"postgres-operator.crunchydata.com\" in the namespace \"my-petclinic\"`",
"kind: RoleBinding apiVersion: rbac.authorization.k8s.io/v1 metadata: name: service-binding-crunchy-postgres-viewer subjects: - kind: ServiceAccount name: service-binding-operator roleRef: apiGroup: rbac.authorization.k8s.io kind: ClusterRole name: service-binding-crunchy-postgres-viewer-role",
"`postgresclusters.postgres-operator.crunchydata.com \"hippo\" is forbidden: User \"system:serviceaccount:my-petclinic:service-binding-operator\" cannot get resource \"postgresclusters\" in API group \"postgres-operator.crunchydata.com\" in the namespace \"my-petclinic\"`",
"kind: RoleBinding apiVersion: rbac.authorization.k8s.io/v1 metadata: name: service-binding-crunchy-postgres-viewer subjects: - kind: ServiceAccount name: service-binding-operator roleRef: apiGroup: rbac.authorization.k8s.io kind: ClusterRole name: service-binding-crunchy-postgres-viewer-role",
"`postgresclusters.postgres-operator.crunchydata.com \"hippo\" is forbidden: User \"system:serviceaccount:my-petclinic:service-binding-operator\" cannot get resource \"postgresclusters\" in API group \"postgres-operator.crunchydata.com\" in the namespace \"my-petclinic\"`",
"kind: RoleBinding apiVersion: rbac.authorization.k8s.io/v1 metadata: name: service-binding-crunchy-postgres-viewer subjects: - kind: ServiceAccount name: service-binding-operator roleRef: apiGroup: rbac.authorization.k8s.io kind: ClusterRole name: service-binding-crunchy-postgres-viewer-role",
"oc apply -n my-petclinic -f - << EOD --- apiVersion: postgres-operator.crunchydata.com/v1beta1 kind: PostgresCluster metadata: name: hippo spec: image: registry.developers.crunchydata.com/crunchydata/crunchy-postgres:ubi8-14.4-0 postgresVersion: 14 instances: - name: instance1 dataVolumeClaimSpec: accessModes: - \"ReadWriteOnce\" resources: requests: storage: 1Gi backups: pgbackrest: image: registry.developers.crunchydata.com/crunchydata/crunchy-pgbackrest:ubi8-2.38-0 repos: - name: repo1 volume: volumeClaimSpec: accessModes: - \"ReadWriteOnce\" resources: requests: storage: 1Gi EOD",
"postgrescluster.postgres-operator.crunchydata.com/hippo created",
"oc get pods -n my-petclinic",
"NAME READY STATUS RESTARTS AGE hippo-backup-9rxm-88rzq 0/1 Completed 0 2m2s hippo-instance1-6psd-0 4/4 Running 0 3m28s hippo-repo-host-0 2/2 Running 0 3m28s",
"oc apply -n my-petclinic -f - << EOD --- apiVersion: apps/v1 kind: Deployment metadata: name: spring-petclinic labels: app: spring-petclinic spec: replicas: 1 selector: matchLabels: app: spring-petclinic template: metadata: labels: app: spring-petclinic spec: containers: - name: app image: quay.io/service-binding/spring-petclinic:latest imagePullPolicy: Always env: - name: SPRING_PROFILES_ACTIVE value: postgres ports: - name: http containerPort: 8080 --- apiVersion: v1 kind: Service metadata: labels: app: spring-petclinic name: spring-petclinic spec: type: NodePort ports: - port: 80 protocol: TCP targetPort: 8080 selector: app: spring-petclinic EOD",
"deployment.apps/spring-petclinic created service/spring-petclinic created",
"oc get pods -n my-petclinic",
"NAME READY STATUS RESTARTS AGE spring-petclinic-5b4c7999d4-wzdtz 0/1 CrashLoopBackOff 4 (13s ago) 2m25s",
"oc expose service spring-petclinic -n my-petclinic",
"route.route.openshift.io/spring-petclinic exposed",
"oc apply -n my-petclinic -f - << EOD --- apiVersion: binding.operators.coreos.com/v1alpha1 kind: ServiceBinding metadata: name: spring-petclinic-pgcluster spec: services: 1 - group: postgres-operator.crunchydata.com version: v1beta1 kind: PostgresCluster 2 name: hippo application: 3 name: spring-petclinic group: apps version: v1 resource: deployments EOD",
"servicebinding.binding.operators.coreos.com/spring-petclinic created",
"oc get servicebindings -n my-petclinic",
"NAME READY REASON AGE spring-petclinic-pgcluster True ApplicationsBound 7s",
"for i in username password host port type; do oc exec -it deploy/spring-petclinic -n my-petclinic -- /bin/bash -c 'cd /tmp; find /bindings/*/'USDi' -exec echo -n {}:\" \" \\; -exec cat {} \\;'; echo; done",
"/bindings/spring-petclinic-pgcluster/username: <username> /bindings/spring-petclinic-pgcluster/password: <password> /bindings/spring-petclinic-pgcluster/host: hippo-primary.my-petclinic.svc /bindings/spring-petclinic-pgcluster/port: 5432 /bindings/spring-petclinic-pgcluster/type: postgresql",
"oc port-forward --address 0.0.0.0 svc/spring-petclinic 8080:80 -n my-petclinic",
"Forwarding from 0.0.0.0:8080 -> 8080 Handling connection for 8080",
"oc apply -f - << EOD --- apiVersion: v1 kind: Namespace metadata: name: my-petclinic --- apiVersion: operators.coreos.com/v1 kind: OperatorGroup metadata: name: postgres-operator-group namespace: my-petclinic --- apiVersion: operators.coreos.com/v1alpha1 kind: CatalogSource metadata: name: ibm-multiarch-catalog namespace: openshift-marketplace spec: sourceType: grpc image: quay.io/ibm/operator-registry-<architecture> 1 imagePullPolicy: IfNotPresent displayName: ibm-multiarch-catalog updateStrategy: registryPoll: interval: 30m --- apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: postgresql-operator-dev4devs-com namespace: openshift-operators spec: channel: alpha installPlanApproval: Automatic name: postgresql-operator-dev4devs-com source: ibm-multiarch-catalog sourceNamespace: openshift-marketplace --- apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRole metadata: name: database-view labels: servicebinding.io/controller: \"true\" rules: - apiGroups: - postgresql.dev4devs.com resources: - databases verbs: - get - list EOD",
"oc get subs -n openshift-operators",
"NAME PACKAGE SOURCE CHANNEL postgresql-operator-dev4devs-com postgresql-operator-dev4devs-com ibm-multiarch-catalog alpha rh-service-binding-operator rh-service-binding-operator redhat-operators stable",
"oc apply -f - << EOD apiVersion: postgresql.dev4devs.com/v1alpha1 kind: Database metadata: name: sampledatabase namespace: my-petclinic annotations: host: sampledatabase type: postgresql port: \"5432\" service.binding/database: 'path={.spec.databaseName}' service.binding/port: 'path={.metadata.annotations.port}' service.binding/password: 'path={.spec.databasePassword}' service.binding/username: 'path={.spec.databaseUser}' service.binding/type: 'path={.metadata.annotations.type}' service.binding/host: 'path={.metadata.annotations.host}' spec: databaseCpu: 30m databaseCpuLimit: 60m databaseMemoryLimit: 512Mi databaseMemoryRequest: 128Mi databaseName: \"sampledb\" databaseNameKeyEnvVar: POSTGRESQL_DATABASE databasePassword: \"samplepwd\" databasePasswordKeyEnvVar: POSTGRESQL_PASSWORD databaseStorageRequest: 1Gi databaseUser: \"sampleuser\" databaseUserKeyEnvVar: POSTGRESQL_USER image: registry.redhat.io/rhel8/postgresql-13:latest databaseStorageClassName: nfs-storage-provisioner size: 1 EOD",
"database.postgresql.dev4devs.com/sampledatabase created",
"oc get pods -n my-petclinic",
"NAME READY STATUS RESTARTS AGE sampledatabase-cbc655488-74kss 0/1 Running 0 32s",
"oc apply -n my-petclinic -f - << EOD --- apiVersion: apps/v1 kind: Deployment metadata: name: spring-petclinic labels: app: spring-petclinic spec: replicas: 1 selector: matchLabels: app: spring-petclinic template: metadata: labels: app: spring-petclinic spec: containers: - name: app image: quay.io/service-binding/spring-petclinic:latest imagePullPolicy: Always env: - name: SPRING_PROFILES_ACTIVE value: postgres - name: org.springframework.cloud.bindings.boot.enable value: \"true\" ports: - name: http containerPort: 8080 --- apiVersion: v1 kind: Service metadata: labels: app: spring-petclinic name: spring-petclinic spec: type: NodePort ports: - port: 80 protocol: TCP targetPort: 8080 selector: app: spring-petclinic EOD",
"deployment.apps/spring-petclinic created service/spring-petclinic created",
"oc get pods -n my-petclinic",
"NAME READY STATUS RESTARTS AGE spring-petclinic-5b4c7999d4-wzdtz 0/1 CrashLoopBackOff 4 (13s ago) 2m25s",
"oc apply -n my-petclinic -f - << EOD --- apiVersion: binding.operators.coreos.com/v1alpha1 kind: ServiceBinding metadata: name: spring-petclinic-pgcluster spec: services: 1 - group: postgresql.dev4devs.com kind: Database 2 name: sampledatabase version: v1alpha1 application: 3 name: spring-petclinic group: apps version: v1 resource: deployments EOD",
"servicebinding.binding.operators.coreos.com/spring-petclinic created",
"oc get servicebindings -n my-petclinic",
"NAME READY REASON AGE spring-petclinic-postgresql True ApplicationsBound 47m",
"oc port-forward --address 0.0.0.0 svc/spring-petclinic 8080:80 -n my-petclinic",
"Forwarding from 0.0.0.0:8080 -> 8080 Handling connection for 8080",
"apiVersion: example.com/v1alpha1 kind: AccountService name: prod-account-service spec: status: binding: name: hippo-pguser-hippo",
"apiVersion: v1 kind: Secret metadata: name: hippo-pguser-hippo data: password: \"<password>\" user: \"<username>\"",
"apiVersion: binding.operators.coreos.com/v1alpha1 kind: ServiceBinding metadata: name: account-service spec: services: - group: \"example.com\" version: v1alpha1 kind: AccountService name: prod-account-service application: name: spring-petclinic group: apps version: v1 resource: deployments",
"apiVersion: servicebinding.io/v1beta1 kind: ServiceBinding metadata: name: account-service spec: service: apiVersion: example.com/v1alpha1 kind: AccountService name: prod-account-service workload: apiVersion: apps/v1 kind: Deployment name: spring-petclinic",
"apiVersion: binding.operators.coreos.com/v1alpha1 kind: ServiceBinding metadata: name: account-service spec: services: - group: \"\" version: v1 kind: Secret name: hippo-pguser-hippo",
"apiVersion: servicebinding.io/v1beta1 kind: ServiceBinding metadata: name: account-service spec: service: apiVersion: v1 kind: Secret name: hippo-pguser-hippo",
"apiVersion: postgres-operator.crunchydata.com/v1beta1 kind: PostgresCluster metadata: name: hippo namespace: my-petclinic annotations: service.binding: 'path={.metadata.name}-pguser-{.metadata.name},objectType=Secret'",
"apiVersion: v1 kind: Secret metadata: name: hippo-pguser-hippo data: password: \"<password>\" user: \"<username>\"",
"apiVersion: postgres-operator.crunchydata.com/v1beta1 kind: PostgresCluster metadata: name: hippo namespace: my-petclinic annotations: service.binding: 'path={.metadata.name}-config,objectType=ConfigMap'",
"apiVersion: v1 kind: ConfigMap metadata: name: hippo-config data: db_timeout: \"10s\" user: \"hippo\"",
"apiVersion: binding.operators.coreos.com/v1alpha1 kind: ServiceBinding metadata: name: spring-petclinic-detect-all namespace: my-petclinic spec: detectBindingResources: true services: - group: postgres-operator.crunchydata.com version: v1beta1 kind: PostgresCluster name: hippo application: name: spring-petclinic group: apps version: v1 resource: deployments",
"service.binding(/<NAME>)?: \"<VALUE>|(path=<JSONPATH_TEMPLATE>(,objectType=<OBJECT_TYPE>)?(,elementType=<ELEMENT_TYPE>)?(,sourceKey=<SOURCE_KEY>)?(,sourceValue=<SOURCE_VALUE>)?)\"",
"apiVersion: apps.example.org/v1beta1 kind: Database metadata: name: my-db namespace: my-petclinic annotations: service.binding/username: path={.spec.name},optional=true",
"apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRole metadata: name: postgrescluster-reader labels: servicebinding.io/controller: \"true\" rules: - apiGroups: - postgres-operator.crunchydata.com resources: - postgresclusters verbs: - get - watch - list",
"apiVersion: postgres-operator.crunchydata.com/v1beta1 kind: PostgresCluster metadata: name: hippo namespace: my-petclinic annotations: service.binding/username: path={.metadata.name}",
"apiVersion: postgres-operator.crunchydata.com/v1beta1 kind: PostgresCluster metadata: name: hippo namespace: my-petclinic annotations: \"service.binding/type\": \"postgresql\" 1",
"apiVersion: postgres-operator.crunchydata.com/v1beta1 kind: PostgresCluster metadata: name: hippo namespace: my-petclinic annotations: service.binding: 'path={.metadata.name}-pguser-{.metadata.name},objectType=Secret'",
"apiVersion: v1 kind: Secret metadata: name: hippo-pguser-hippo data: password: \"<password>\" user: \"<username>\"",
"apiVersion: postgres-operator.crunchydata.com/v1beta1 kind: PostgresCluster metadata: name: hippo namespace: my-petclinic annotations: service.binding: 'path={.metadata.name}-config,objectType=ConfigMap,sourceKey=user'",
"apiVersion: v1 kind: ConfigMap metadata: name: hippo-config data: db_timeout: \"10s\" user: \"hippo\"",
"apiVersion: postgres-operator.crunchydata.com/v1beta1 kind: PostgresCluster metadata: name: hippo namespace: my-petclinic annotations: service.binding/username: path={.metadata.name}",
"apiVersion: postgres-operator.crunchydata.com/v1beta1 kind: PostgresCluster metadata: name: hippo namespace: my-petclinic annotations: \"service.binding/uri\": \"path={.status.connections},elementType=sliceOfMaps,sourceKey=type,sourceValue=url\" spec: status: connections: - type: primary url: primary.example.com - type: secondary url: secondary.example.com - type: '404' url: black-hole.example.com",
"/bindings/<binding-name>/uri_primary => primary.example.com /bindings/<binding-name>/uri_secondary => secondary.example.com /bindings/<binding-name>/uri_404 => black-hole.example.com",
"status: connections: - type: primary url: primary.example.com - type: secondary url: secondary.example.com - type: '404' url: black-hole.example.com",
"apiVersion: postgres-operator.crunchydata.com/v1beta1 kind: PostgresCluster metadata: name: hippo namespace: my-petclinic annotations: \"service.binding/tags\": \"path={.spec.tags},elementType=sliceOfStrings\" spec: tags: - knowledge - is - power",
"/bindings/<binding-name>/tags_0 => knowledge /bindings/<binding-name>/tags_1 => is /bindings/<binding-name>/tags_2 => power",
"spec: tags: - knowledge - is - power",
"apiVersion: postgres-operator.crunchydata.com/v1beta1 kind: PostgresCluster metadata: name: hippo namespace: my-petclinic annotations: \"service.binding/url\": \"path={.spec.connections},elementType=sliceOfStrings,sourceValue=url\" spec: connections: - type: primary url: primary.example.com - type: secondary url: secondary.example.com - type: '404' url: black-hole.example.com",
"/bindings/<binding-name>/url_0 => primary.example.com /bindings/<binding-name>/url_1 => secondary.example.com /bindings/<binding-name>/url_2 => black-hole.example.com",
"USDSERVICE_BINDING_ROOT 1 ├── account-database 2 │ ├── type 3 │ ├── provider 4 │ ├── uri │ ├── username │ └── password └── transaction-event-stream 5 ├── type ├── connection-count ├── uri ├── certificates └── private-key",
"import os username = os.getenv(\"USERNAME\") password = os.getenv(\"PASSWORD\")",
"from pyservicebinding import binding try: sb = binding.ServiceBinding() except binding.ServiceBindingRootMissingError as msg: # log the error message and retry/exit print(\"SERVICE_BINDING_ROOT env var not set\") sb = binding.ServiceBinding() bindings_list = sb.bindings(\"postgresql\")",
"apiVersion: binding.operators.coreos.com/v1alpha1 kind: ServiceBinding metadata: name: spring-petclinic-pgcluster namespace: my-petclinic spec: services: 1 - group: postgres-operator.crunchydata.com version: v1beta1 kind: PostgresCluster name: hippo application: 2 name: spring-petclinic group: apps version: v1 resource: deployments",
"host: hippo-pgbouncer port: 5432",
"DATABASE_HOST: hippo-pgbouncer DATABASE_PORT: 5432",
"application: name: spring-petclinic group: apps version: v1 resource: deployments",
"services: - group: postgres-operator.crunchydata.com version: v1beta1 kind: PostgresCluster name: hippo",
"DATABASE_HOST: hippo-pgbouncer",
"POSTGRESQL_DATABASE_HOST_ENV: hippo-pgbouncer POSTGRESQL_DATABASE_PORT_ENV: 5432",
"apiVersion: binding.operators.coreos.com/v1alpha1 kind: ServiceBinding metadata: name: spring-petclinic-pgcluster namespace: my-petclinic spec: services: - group: postgres-operator.crunchydata.com version: v1beta1 kind: PostgresCluster name: hippo 1 id: postgresDB 2 - group: \"\" version: v1 kind: Secret name: hippo-pguser-hippo id: postgresSecret application: name: spring-petclinic group: apps version: v1 resource: deployments mappings: ## From the database service - name: JDBC_URL value: 'jdbc:postgresql://{{ .postgresDB.metadata.annotations.proxy }}:{{ .postgresDB.spec.port }}/{{ .postgresDB.metadata.name }}' ## From both the services! - name: CREDENTIALS value: '{{ .postgresDB.metadata.name }}{{ translationService.postgresSecret.data.password }}' ## Generate JSON - name: DB_JSON 3 value: {{ json .postgresDB.status }} 4",
"apiVersion: binding.operators.coreos.com/v1alpha1 kind: ServiceBinding metadata: name: multi-application-binding namespace: service-binding-demo spec: application: labelSelector: 1 matchLabels: environment: production group: apps version: v1 resource: deployments services: group: \"\" version: v1 kind: Secret name: super-secret-data",
"apiVersion: servicebindings.io/v1beta1 kind: ServiceBinding metadata: name: multi-application-binding namespace: service-binding-demo spec: workload: selector: 1 matchLabels: environment: production apiVersion: app/v1 kind: Deployment service: apiVersion: v1 kind: Secret name: super-secret-data",
"apiVersion: \"operator.sbo.com/v1\" kind: SecondaryWorkload metadata: name: secondary-workload spec: containers: - name: hello-world image: quay.io/baijum/secondary-workload:latest ports: - containerPort: 8080",
"apiVersion: binding.operators.coreos.com/v1alpha1 kind: ServiceBinding metadata: name: spring-petclinic-pgcluster spec: services: - group: postgres-operator.crunchydata.com version: v1beta1 kind: PostgresCluster name: hippo id: postgresDB - group: \"\" version: v1 kind: Secret name: hippo-pguser-hippo id: postgresSecret application: 1 name: spring-petclinic group: apps version: v1 resource: deployments application: 2 name: secondary-workload group: operator.sbo.com version: v1 resource: secondaryworkloads bindingPath: containersPath: spec.containers 3",
"apiVersion: \"operator.sbo.com/v1\" kind: SecondaryWorkload metadata: name: secondary-workload spec: containers: - env: 1 - name: ServiceBindingOperatorChangeTriggerEnvVar value: \"31793\" envFrom: - secretRef: name: secret-resource-name 2 image: quay.io/baijum/secondary-workload:latest name: hello-world ports: - containerPort: 8080 resources: {}",
"apiVersion: \"operator.sbo.com/v1\" kind: SecondaryWorkload metadata: name: secondary-workload spec: secret: \"\"",
"apiVersion: binding.operators.coreos.com/v1alpha1 kind: ServiceBinding metadata: name: spring-petclinic-pgcluster spec: application: 1 name: secondary-workload group: operator.sbo.com version: v1 resource: secondaryworkloads bindingPath: secretPath: spec.secret 2",
"apiVersion: \"operator.sbo.com/v1\" kind: SecondaryWorkload metadata: name: secondary-workload spec: secret: binding-request-72ddc0c540ab3a290e138726940591debf14c581 1",
"apiVersion: servicebinding.io/v1beta1 kind: ClusterWorkloadResourceMapping metadata: name: cronjobs.batch 1 spec: versions: - version: \"v1\" 2 annotations: .spec.jobTemplate.spec.template.metadata.annotations 3 containers: - path: .spec.jobTemplate.spec.template.spec.containers[*] 4 - path: .spec.jobTemplate.spec.template.spec.initContainers[*] name: .name 5 env: .env 6 volumeMounts: .volumeMounts 7 volumes: .spec.jobTemplate.spec.template.spec.volumes 8",
"oc delete ServiceBinding <.metadata.name>",
"oc delete ServiceBinding spring-petclinic-pgcluster",
"apiVersion: binding.operators.coreos.com/v1alpha1 kind: ServiceBinding metadata: name: spring-petclinic-pgcluster namespace: my-petclinic spec: services: - group: postgres-operator.crunchydata.com version: v1beta1 kind: PostgresCluster name: hippo application: name: spring-petclinic group: apps version: v1 resource: deployments",
"curl -L https://mirror.openshift.com/pub/openshift-v4/clients/helm/latest/helm-linux-amd64 -o /usr/local/bin/helm",
"curl -L https://mirror.openshift.com/pub/openshift-v4/clients/helm/latest/helm-linux-s390x -o /usr/local/bin/helm",
"curl -L https://mirror.openshift.com/pub/openshift-v4/clients/helm/latest/helm-linux-ppc64le -o /usr/local/bin/helm",
"chmod +x /usr/local/bin/helm",
"helm version",
"version.BuildInfo{Version:\"v3.0\", GitCommit:\"b31719aab7963acf4887a1c1e6d5e53378e34d93\", GitTreeState:\"clean\", GoVersion:\"go1.13.4\"}",
"curl -L https://mirror.openshift.com/pub/openshift-v4/clients/helm/latest/helm-darwin-amd64 -o /usr/local/bin/helm",
"chmod +x /usr/local/bin/helm",
"helm version",
"version.BuildInfo{Version:\"v3.0\", GitCommit:\"b31719aab7963acf4887a1c1e6d5e53378e34d93\", GitTreeState:\"clean\", GoVersion:\"go1.13.4\"}",
"oc new-project vault",
"helm repo add openshift-helm-charts https://charts.openshift.io/",
"\"openshift-helm-charts\" has been added to your repositories",
"helm repo update",
"helm install example-vault openshift-helm-charts/hashicorp-vault",
"NAME: example-vault LAST DEPLOYED: Fri Mar 11 12:02:12 2022 NAMESPACE: vault STATUS: deployed REVISION: 1 NOTES: Thank you for installing HashiCorp Vault!",
"helm list",
"NAME NAMESPACE REVISION UPDATED STATUS CHART APP VERSION example-vault vault 1 2022-03-11 12:02:12.296226673 +0530 IST deployed vault-0.19.0 1.9.2",
"oc new-project nodejs-ex-k",
"git clone https://github.com/redhat-developer/redhat-helm-charts",
"cd redhat-helm-charts/alpha/nodejs-ex-k/",
"apiVersion: v2 1 name: nodejs-ex-k 2 description: A Helm chart for OpenShift 3 icon: https://static.redhat.com/libs/redhat/brand-assets/latest/corp/logo.svg 4 version: 0.2.1 5",
"helm lint",
"[INFO] Chart.yaml: icon is recommended 1 chart(s) linted, 0 chart(s) failed",
"cd ..",
"helm install nodejs-chart nodejs-ex-k",
"helm list",
"NAME NAMESPACE REVISION UPDATED STATUS CHART APP VERSION nodejs-chart nodejs-ex-k 1 2019-12-05 15:06:51.379134163 -0500 EST deployed nodejs-0.1.0 1.16.0",
"apiVersion: helm.openshift.io/v1beta1 kind: HelmChartRepository metadata: name: <name> spec: # optional name that might be used by console # name: <chart-display-name> connectionConfig: url: <helm-chart-repository-url>",
"cat <<EOF | oc apply -f - apiVersion: helm.openshift.io/v1beta1 kind: HelmChartRepository metadata: name: azure-sample-repo spec: name: azure-sample-repo connectionConfig: url: https://raw.githubusercontent.com/Azure-Samples/helm-charts/master/docs EOF",
"apiVersion: helm.openshift.io/v1beta1 kind: ProjectHelmChartRepository metadata: name: <name> spec: url: https://my.chart-repo.org/stable # optional name that might be used by console name: <chart-repo-display-name> # optional and only needed for UI purposes description: <My private chart repo> # required: chart repository URL connectionConfig: url: <helm-chart-repository-url>",
"cat <<EOF | oc apply --namespace my-namespace -f - apiVersion: helm.openshift.io/v1beta1 kind: ProjectHelmChartRepository metadata: name: azure-sample-repo spec: name: azure-sample-repo connectionConfig: url: https://raw.githubusercontent.com/Azure-Samples/helm-charts/master/docs EOF",
"projecthelmchartrepository.helm.openshift.io/azure-sample-repo created",
"oc get projecthelmchartrepositories --namespace my-namespace",
"NAME AGE azure-sample-repo 1m",
"oc create configmap helm-ca-cert --from-file=ca-bundle.crt=/path/to/certs/ca.crt -n openshift-config",
"oc create secret tls helm-tls-configs --cert=/path/to/certs/client.crt --key=/path/to/certs/client.key -n openshift-config",
"cat <<EOF | oc apply -f - apiVersion: helm.openshift.io/v1beta1 kind: HelmChartRepository metadata: name: <helm-repository> spec: name: <helm-repository> connectionConfig: url: <URL for the Helm repository> tlsConfig: name: helm-tls-configs ca: name: helm-ca-cert EOF",
"cat <<EOF | kubectl apply -f - apiVersion: rbac.authorization.k8s.io/v1 kind: Role metadata: namespace: openshift-config name: helm-chartrepos-tls-conf-viewer rules: - apiGroups: [\"\"] resources: [\"configmaps\"] resourceNames: [\"helm-ca-cert\"] verbs: [\"get\"] - apiGroups: [\"\"] resources: [\"secrets\"] resourceNames: [\"helm-tls-configs\"] verbs: [\"get\"] --- kind: RoleBinding apiVersion: rbac.authorization.k8s.io/v1 metadata: namespace: openshift-config name: helm-chartrepos-tls-conf-viewer subjects: - kind: Group apiGroup: rbac.authorization.k8s.io name: 'system:authenticated' roleRef: apiGroup: rbac.authorization.k8s.io kind: Role name: helm-chartrepos-tls-conf-viewer EOF",
"cat <<EOF | oc apply -f - apiVersion: helm.openshift.io/v1beta1 kind: HelmChartRepository metadata: name: azure-sample-repo spec: connectionConfig: url:https://raw.githubusercontent.com/Azure-Samples/helm-charts/master/docs disabled: true EOF",
"spec: connectionConfig: url: <url-of-the-repositoru-to-be-disabled> disabled: true",
"apiVersion: apps/v1 kind: ReplicaSet metadata: name: frontend-1 labels: tier: frontend spec: replicas: 3 selector: 1 matchLabels: 2 tier: frontend matchExpressions: 3 - {key: tier, operator: In, values: [frontend]} template: metadata: labels: tier: frontend spec: containers: - image: openshift/hello-openshift name: helloworld ports: - containerPort: 8080 protocol: TCP restartPolicy: Always",
"apiVersion: v1 kind: ReplicationController metadata: name: frontend-1 spec: replicas: 1 1 selector: 2 name: frontend template: 3 metadata: labels: 4 name: frontend 5 spec: containers: - image: openshift/hello-openshift name: helloworld ports: - containerPort: 8080 protocol: TCP restartPolicy: Always",
"apiVersion: apps/v1 kind: Deployment metadata: name: hello-openshift spec: replicas: 1 selector: matchLabels: app: hello-openshift template: metadata: labels: app: hello-openshift spec: containers: - name: hello-openshift image: openshift/hello-openshift:latest ports: - containerPort: 80",
"apiVersion: apps.openshift.io/v1 kind: DeploymentConfig metadata: name: frontend spec: replicas: 5 selector: name: frontend template: { ... } triggers: - type: ConfigChange 1 - imageChangeParams: automatic: true containerNames: - helloworld from: kind: ImageStreamTag name: hello-openshift:latest type: ImageChange 2 strategy: type: Rolling 3",
"oc rollout pause deployments/<name>",
"oc rollout latest dc/<name>",
"oc rollout history dc/<name>",
"oc rollout history dc/<name> --revision=1",
"oc describe dc <name>",
"oc rollout retry dc/<name>",
"oc rollout undo dc/<name>",
"oc set triggers dc/<name> --auto",
"kind: DeploymentConfig apiVersion: apps.openshift.io/v1 metadata: name: example-dc spec: template: spec: containers: - name: <container_name> image: 'image' command: - '<command>' args: - '<argument_1>' - '<argument_2>' - '<argument_3>'",
"kind: DeploymentConfig apiVersion: apps.openshift.io/v1 metadata: name: example-dc spec: template: spec: containers: - name: example-spring-boot image: 'image' command: - java args: - '-jar' - /opt/app-root/springboots2idemo.jar",
"oc logs -f dc/<name>",
"oc logs --version=1 dc/<name>",
"kind: DeploymentConfig apiVersion: apps.openshift.io/v1 metadata: name: example-dc spec: triggers: - type: \"ConfigChange\"",
"kind: DeploymentConfig apiVersion: apps.openshift.io/v1 metadata: name: example-dc spec: triggers: - type: \"ImageChange\" imageChangeParams: automatic: true 1 from: kind: \"ImageStreamTag\" name: \"origin-ruby-sample:latest\" namespace: \"myproject\" containerNames: - \"helloworld\"",
"oc set triggers dc/<dc_name> --from-image=<project>/<image>:<tag> -c <container_name>",
"kind: Deployment apiVersion: apps/v1 metadata: name: hello-openshift spec: type: \"Recreate\" resources: limits: cpu: \"100m\" 1 memory: \"256Mi\" 2 ephemeral-storage: \"1Gi\" 3",
"kind: Deployment apiVersion: apps/v1 metadata: name: hello-openshift spec: type: \"Recreate\" resources: requests: 1 cpu: \"100m\" memory: \"256Mi\" ephemeral-storage: \"1Gi\"",
"oc scale dc frontend --replicas=3",
"apiVersion: v1 kind: Pod metadata: name: my-pod spec: nodeSelector: disktype: ssd",
"oc edit dc/<deployment_config>",
"apiVersion: apps.openshift.io/v1 kind: DeploymentConfig metadata: name: example-dc spec: securityContext: {} serviceAccount: <service_account> serviceAccountName: <service_account>",
"kind: DeploymentConfig apiVersion: apps.openshift.io/v1 metadata: name: example-dc spec: strategy: type: Rolling rollingParams: updatePeriodSeconds: 1 1 intervalSeconds: 1 2 timeoutSeconds: 120 3 maxSurge: \"20%\" 4 maxUnavailable: \"10%\" 5 pre: {} 6 post: {}",
"oc new-app quay.io/openshifttest/deployment-example:latest",
"oc expose svc/deployment-example",
"oc scale dc/deployment-example --replicas=3",
"oc tag deployment-example:v2 deployment-example:latest",
"oc describe dc deployment-example",
"kind: Deployment apiVersion: apps/v1 metadata: name: hello-openshift spec: strategy: type: Recreate recreateParams: 1 pre: {} 2 mid: {} post: {}",
"kind: DeploymentConfig apiVersion: apps.openshift.io/v1 metadata: name: example-dc spec: strategy: type: Custom customParams: image: organization/strategy command: [ \"command\", \"arg1\" ] environment: - name: ENV_1 value: VALUE_1",
"kind: DeploymentConfig apiVersion: apps.openshift.io/v1 metadata: name: example-dc spec: strategy: type: Rolling customParams: command: - /bin/sh - -c - | set -e openshift-deploy --until=50% echo Halfway there openshift-deploy echo Complete",
"Started deployment #2 --> Scaling up custom-deployment-2 from 0 to 2, scaling down custom-deployment-1 from 2 to 0 (keep 2 pods available, don't exceed 3 pods) Scaling custom-deployment-2 up to 1 --> Reached 50% (currently 50%) Halfway there --> Scaling up custom-deployment-2 from 1 to 2, scaling down custom-deployment-1 from 2 to 0 (keep 2 pods available, don't exceed 3 pods) Scaling custom-deployment-1 down to 1 Scaling custom-deployment-2 up to 2 Scaling custom-deployment-1 down to 0 --> Success Complete",
"pre: failurePolicy: Abort execNewPod: {} 1",
"kind: DeploymentConfig apiVersion: apps.openshift.io/v1 metadata: name: frontend spec: template: metadata: labels: name: frontend spec: containers: - name: helloworld image: openshift/origin-ruby-sample replicas: 5 selector: name: frontend strategy: type: Rolling rollingParams: pre: failurePolicy: Abort execNewPod: containerName: helloworld 1 command: [ \"/usr/bin/command\", \"arg1\", \"arg2\" ] 2 env: 3 - name: CUSTOM_VAR1 value: custom_value1 volumes: - data 4",
"oc set deployment-hook dc/frontend --pre -c helloworld -e CUSTOM_VAR1=custom_value1 --volumes data --failure-policy=abort -- /usr/bin/command arg1 arg2",
"oc new-app openshift/deployment-example:v1 --name=example-blue",
"oc new-app openshift/deployment-example:v2 --name=example-green",
"oc expose svc/example-blue --name=bluegreen-example",
"oc patch route/bluegreen-example -p '{\"spec\":{\"to\":{\"name\":\"example-green\"}}}'",
"oc new-app openshift/deployment-example --name=ab-example-a",
"oc new-app openshift/deployment-example:v2 --name=ab-example-b",
"oc expose svc/ab-example-a",
"oc edit route <route_name>",
"apiVersion: route.openshift.io/v1 kind: Route metadata: name: route-alternate-service annotations: haproxy.router.openshift.io/balance: roundrobin spec: host: ab-example.my-project.my-domain to: kind: Service name: ab-example-a weight: 10 alternateBackends: - kind: Service name: ab-example-b weight: 15",
"oc set route-backends ROUTENAME [--zero|--equal] [--adjust] SERVICE=WEIGHT[%] [...] [options]",
"oc set route-backends ab-example ab-example-a=198 ab-example-b=2",
"oc set route-backends ab-example",
"NAME KIND TO WEIGHT routes/ab-example Service ab-example-a 198 (99%) routes/ab-example Service ab-example-b 2 (1%)",
"oc annotate routes/<route-name> haproxy.router.openshift.io/balance=roundrobin",
"oc set route-backends ab-example --adjust ab-example-a=200 ab-example-b=10",
"oc set route-backends ab-example --adjust ab-example-b=5%",
"oc set route-backends ab-example --adjust ab-example-b=+15%",
"oc set route-backends ab-example --equal",
"oc new-app openshift/deployment-example --name=ab-example-a --as-deployment-config=true --labels=ab-example=true --env=SUBTITLE\\=shardA",
"oc delete svc/ab-example-a",
"oc expose deployment ab-example-a --name=ab-example --selector=ab-example\\=true",
"oc expose service ab-example",
"oc new-app openshift/deployment-example:v2 --name=ab-example-b --labels=ab-example=true SUBTITLE=\"shard B\" COLOR=\"red\" --as-deployment-config=true",
"oc delete svc/ab-example-b",
"oc scale dc/ab-example-a --replicas=0",
"oc scale dc/ab-example-a --replicas=1; oc scale dc/ab-example-b --replicas=0",
"oc edit dc/ab-example-a",
"oc edit dc/ab-example-b",
"apiVersion: v1 kind: ResourceQuota metadata: name: core-object-counts spec: hard: configmaps: \"10\" 1 persistentvolumeclaims: \"4\" 2 replicationcontrollers: \"20\" 3 secrets: \"10\" 4 services: \"10\" 5 services.loadbalancers: \"2\" 6",
"apiVersion: v1 kind: ResourceQuota metadata: name: openshift-object-counts spec: hard: openshift.io/imagestreams: \"10\" 1",
"apiVersion: v1 kind: ResourceQuota metadata: name: compute-resources spec: hard: pods: \"4\" 1 requests.cpu: \"1\" 2 requests.memory: 1Gi 3 limits.cpu: \"2\" 4 limits.memory: 2Gi 5",
"apiVersion: v1 kind: ResourceQuota metadata: name: besteffort spec: hard: pods: \"1\" 1 scopes: - BestEffort 2",
"apiVersion: v1 kind: ResourceQuota metadata: name: compute-resources-long-running spec: hard: pods: \"4\" 1 limits.cpu: \"4\" 2 limits.memory: \"2Gi\" 3 scopes: - NotTerminating 4",
"apiVersion: v1 kind: ResourceQuota metadata: name: compute-resources-time-bound spec: hard: pods: \"2\" 1 limits.cpu: \"1\" 2 limits.memory: \"1Gi\" 3 scopes: - Terminating 4",
"apiVersion: v1 kind: ResourceQuota metadata: name: storage-consumption spec: hard: persistentvolumeclaims: \"10\" 1 requests.storage: \"50Gi\" 2 gold.storageclass.storage.k8s.io/requests.storage: \"10Gi\" 3 silver.storageclass.storage.k8s.io/requests.storage: \"20Gi\" 4 silver.storageclass.storage.k8s.io/persistentvolumeclaims: \"5\" 5 bronze.storageclass.storage.k8s.io/requests.storage: \"0\" 6 bronze.storageclass.storage.k8s.io/persistentvolumeclaims: \"0\" 7 requests.ephemeral-storage: 2Gi 8 limits.ephemeral-storage: 4Gi 9",
"oc create -f <file> [-n <project_name>]",
"oc create -f core-object-counts.yaml -n demoproject",
"oc create quota <name> --hard=count/<resource>.<group>=<quota>,count/<resource>.<group>=<quota> 1",
"oc create quota test --hard=count/deployments.extensions=2,count/replicasets.extensions=4,count/pods=3,count/secrets=4",
"resourcequota \"test\" created",
"oc describe quota test",
"Name: test Namespace: quota Resource Used Hard -------- ---- ---- count/deployments.extensions 0 2 count/pods 0 3 count/replicasets.extensions 0 4 count/secrets 0 4",
"oc describe node ip-172-31-27-209.us-west-2.compute.internal | egrep 'Capacity|Allocatable|gpu'",
"openshift.com/gpu-accelerator=true Capacity: nvidia.com/gpu: 2 Allocatable: nvidia.com/gpu: 2 nvidia.com/gpu 0 0",
"apiVersion: v1 kind: ResourceQuota metadata: name: gpu-quota namespace: nvidia spec: hard: requests.nvidia.com/gpu: 1",
"oc create -f gpu-quota.yaml",
"resourcequota/gpu-quota created",
"oc describe quota gpu-quota -n nvidia",
"Name: gpu-quota Namespace: nvidia Resource Used Hard -------- ---- ---- requests.nvidia.com/gpu 0 1",
"apiVersion: v1 kind: Pod metadata: generateName: gpu-pod- namespace: nvidia spec: restartPolicy: OnFailure containers: - name: rhel7-gpu-pod image: rhel7 env: - name: NVIDIA_VISIBLE_DEVICES value: all - name: NVIDIA_DRIVER_CAPABILITIES value: \"compute,utility\" - name: NVIDIA_REQUIRE_CUDA value: \"cuda>=5.0\" command: [\"sleep\"] args: [\"infinity\"] resources: limits: nvidia.com/gpu: 1",
"oc create -f gpu-pod.yaml",
"oc get pods",
"NAME READY STATUS RESTARTS AGE gpu-pod-s46h7 1/1 Running 0 1m",
"oc describe quota gpu-quota -n nvidia",
"Name: gpu-quota Namespace: nvidia Resource Used Hard -------- ---- ---- requests.nvidia.com/gpu 1 1",
"oc create -f gpu-pod.yaml",
"Error from server (Forbidden): error when creating \"gpu-pod.yaml\": pods \"gpu-pod-f7z2w\" is forbidden: exceeded quota: gpu-quota, requested: requests.nvidia.com/gpu=1, used: requests.nvidia.com/gpu=1, limited: requests.nvidia.com/gpu=1",
"oc get quota -n demoproject",
"NAME AGE REQUEST LIMIT besteffort 4s pods: 1/2 compute-resources-time-bound 10m pods: 0/2 limits.cpu: 0/1, limits.memory: 0/1Gi core-object-counts 109s configmaps: 2/10, persistentvolumeclaims: 1/4, replicationcontrollers: 1/20, secrets: 9/10, services: 2/10",
"oc describe quota core-object-counts -n demoproject",
"Name: core-object-counts Namespace: demoproject Resource Used Hard -------- ---- ---- configmaps 3 10 persistentvolumeclaims 0 4 replicationcontrollers 3 20 secrets 9 10 services 2 10",
"oc adm create-bootstrap-project-template -o yaml > template.yaml",
"- apiVersion: v1 kind: ResourceQuota metadata: name: storage-consumption namespace: USD{PROJECT_NAME} spec: hard: persistentvolumeclaims: \"10\" 1 requests.storage: \"50Gi\" 2 gold.storageclass.storage.k8s.io/requests.storage: \"10Gi\" 3 silver.storageclass.storage.k8s.io/requests.storage: \"20Gi\" 4 silver.storageclass.storage.k8s.io/persistentvolumeclaims: \"5\" 5 bronze.storageclass.storage.k8s.io/requests.storage: \"0\" 6 bronze.storageclass.storage.k8s.io/persistentvolumeclaims: \"0\" 7",
"oc create -f template.yaml -n openshift-config",
"oc get templates -n openshift-config",
"oc edit template <project_request_template> -n openshift-config",
"oc edit project.config.openshift.io/cluster",
"apiVersion: config.openshift.io/v1 kind: Project metadata: spec: projectRequestTemplate: name: project-request",
"oc new-project <project_name>",
"oc get resourcequotas",
"oc describe resourcequotas <resource_quota_name>",
"oc create clusterquota for-user --project-annotation-selector openshift.io/requester=<user_name> --hard pods=10 --hard secrets=20",
"apiVersion: quota.openshift.io/v1 kind: ClusterResourceQuota metadata: name: for-user spec: quota: 1 hard: pods: \"10\" secrets: \"20\" selector: annotations: 2 openshift.io/requester: <user_name> labels: null 3 status: namespaces: 4 - namespace: ns-one status: hard: pods: \"10\" secrets: \"20\" used: pods: \"1\" secrets: \"9\" total: 5 hard: pods: \"10\" secrets: \"20\" used: pods: \"1\" secrets: \"9\"",
"oc create clusterresourcequota for-name \\ 1 --project-label-selector=name=frontend \\ 2 --hard=pods=10 --hard=secrets=20",
"apiVersion: quota.openshift.io/v1 kind: ClusterResourceQuota metadata: creationTimestamp: null name: for-name spec: quota: hard: pods: \"10\" secrets: \"20\" selector: annotations: null labels: matchLabels: name: frontend",
"oc describe AppliedClusterResourceQuota",
"Name: for-user Namespace: <none> Created: 19 hours ago Labels: <none> Annotations: <none> Label Selector: <null> AnnotationSelector: map[openshift.io/requester:<user-name>] Resource Used Hard -------- ---- ---- pods 1 10 secrets 9 20",
"kind: ConfigMap apiVersion: v1 metadata: creationTimestamp: 2016-02-18T19:14:38Z name: example-config namespace: my-namespace data: 1 example.property.1: hello example.property.2: world example.property.file: |- property.1=value-1 property.2=value-2 property.3=value-3 binaryData: bar: L3Jvb3QvMTAw 2",
"apiVersion: v1 kind: ConfigMap metadata: name: special-config 1 namespace: default 2 data: special.how: very 3 special.type: charm 4",
"apiVersion: v1 kind: ConfigMap metadata: name: env-config 1 namespace: default data: log_level: INFO 2",
"apiVersion: v1 kind: Pod metadata: name: dapi-test-pod spec: containers: - name: test-container image: gcr.io/google_containers/busybox command: [ \"/bin/sh\", \"-c\", \"env\" ] env: 1 - name: SPECIAL_LEVEL_KEY 2 valueFrom: configMapKeyRef: name: special-config 3 key: special.how 4 - name: SPECIAL_TYPE_KEY valueFrom: configMapKeyRef: name: special-config 5 key: special.type 6 optional: true 7 envFrom: 8 - configMapRef: name: env-config 9 restartPolicy: Never",
"SPECIAL_LEVEL_KEY=very log_level=INFO",
"apiVersion: v1 kind: ConfigMap metadata: name: special-config namespace: default data: special.how: very special.type: charm",
"apiVersion: v1 kind: Pod metadata: name: dapi-test-pod spec: containers: - name: test-container image: gcr.io/google_containers/busybox command: [ \"/bin/sh\", \"-c\", \"echo USD(SPECIAL_LEVEL_KEY) USD(SPECIAL_TYPE_KEY)\" ] 1 env: - name: SPECIAL_LEVEL_KEY valueFrom: configMapKeyRef: name: special-config key: special.how - name: SPECIAL_TYPE_KEY valueFrom: configMapKeyRef: name: special-config key: special.type restartPolicy: Never",
"very charm",
"apiVersion: v1 kind: ConfigMap metadata: name: special-config namespace: default data: special.how: very special.type: charm",
"apiVersion: v1 kind: Pod metadata: name: dapi-test-pod spec: containers: - name: test-container image: gcr.io/google_containers/busybox command: [ \"/bin/sh\", \"-c\", \"cat\", \"/etc/config/special.how\" ] volumeMounts: - name: config-volume mountPath: /etc/config volumes: - name: config-volume configMap: name: special-config 1 restartPolicy: Never",
"very",
"apiVersion: v1 kind: Pod metadata: name: dapi-test-pod spec: containers: - name: test-container image: gcr.io/google_containers/busybox command: [ \"/bin/sh\", \"-c\", \"cat\", \"/etc/config/path/to/special-key\" ] volumeMounts: - name: config-volume mountPath: /etc/config volumes: - name: config-volume configMap: name: special-config items: - key: special.how path: path/to/special-key 1 restartPolicy: Never",
"very",
"apiVersion: v1 kind: Pod metadata: labels: test: health-check name: my-application spec: containers: - name: goproxy-app 1 args: image: registry.k8s.io/goproxy:0.1 2 readinessProbe: 3 exec: 4 command: 5 - cat - /tmp/healthy",
"apiVersion: v1 kind: Pod metadata: labels: test: health-check name: my-application spec: containers: - name: goproxy-app 1 args: image: registry.k8s.io/goproxy:0.1 2 livenessProbe: 3 httpGet: 4 scheme: HTTPS 5 path: /healthz port: 8080 6 httpHeaders: - name: X-Custom-Header value: Awesome startupProbe: 7 httpGet: 8 path: /healthz port: 8080 9 failureThreshold: 30 10 periodSeconds: 10 11",
"apiVersion: v1 kind: Pod metadata: labels: test: health-check name: my-application spec: containers: - name: goproxy-app 1 args: image: registry.k8s.io/goproxy:0.1 2 livenessProbe: 3 exec: 4 command: 5 - /bin/bash - '-c' - timeout 60 /opt/eap/bin/livenessProbe.sh periodSeconds: 10 6 successThreshold: 1 7 failureThreshold: 3 8",
"kind: Deployment apiVersion: apps/v1 metadata: labels: test: health-check name: my-application spec: template: spec: containers: - resources: {} readinessProbe: 1 tcpSocket: port: 8080 timeoutSeconds: 1 periodSeconds: 10 successThreshold: 1 failureThreshold: 3 terminationMessagePath: /dev/termination-log name: ruby-ex livenessProbe: 2 tcpSocket: port: 8080 initialDelaySeconds: 15 timeoutSeconds: 1 periodSeconds: 10 successThreshold: 1 failureThreshold: 3",
"apiVersion: v1 kind: Pod metadata: labels: test: health-check name: my-application spec: containers: - name: my-container 1 args: image: registry.k8s.io/goproxy:0.1 2 livenessProbe: 3 tcpSocket: 4 port: 8080 5 initialDelaySeconds: 15 6 periodSeconds: 20 7 timeoutSeconds: 10 8 readinessProbe: 9 httpGet: 10 host: my-host 11 scheme: HTTPS 12 path: /healthz port: 8080 13 startupProbe: 14 exec: 15 command: 16 - cat - /tmp/healthy failureThreshold: 30 17 periodSeconds: 20 18 timeoutSeconds: 10 19",
"oc create -f <file-name>.yaml",
"oc describe pod my-application",
"Events: Type Reason Age From Message ---- ------ ---- ---- ------- Normal Scheduled 9s default-scheduler Successfully assigned openshift-logging/liveness-exec to ip-10-0-143-40.ec2.internal Normal Pulling 2s kubelet, ip-10-0-143-40.ec2.internal pulling image \"registry.k8s.io/liveness\" Normal Pulled 1s kubelet, ip-10-0-143-40.ec2.internal Successfully pulled image \"registry.k8s.io/liveness\" Normal Created 1s kubelet, ip-10-0-143-40.ec2.internal Created container Normal Started 1s kubelet, ip-10-0-143-40.ec2.internal Started container",
"oc describe pod pod1",
". Events: Type Reason Age From Message ---- ------ ---- ---- ------- Normal Scheduled <unknown> Successfully assigned aaa/liveness-http to ci-ln-37hz77b-f76d1-wdpjv-worker-b-snzrj Normal AddedInterface 47s multus Add eth0 [10.129.2.11/23] Normal Pulled 46s kubelet, ci-ln-37hz77b-f76d1-wdpjv-worker-b-snzrj Successfully pulled image \"registry.k8s.io/liveness\" in 773.406244ms Normal Pulled 28s kubelet, ci-ln-37hz77b-f76d1-wdpjv-worker-b-snzrj Successfully pulled image \"registry.k8s.io/liveness\" in 233.328564ms Normal Created 10s (x3 over 46s) kubelet, ci-ln-37hz77b-f76d1-wdpjv-worker-b-snzrj Created container liveness Normal Started 10s (x3 over 46s) kubelet, ci-ln-37hz77b-f76d1-wdpjv-worker-b-snzrj Started container liveness Warning Unhealthy 10s (x6 over 34s) kubelet, ci-ln-37hz77b-f76d1-wdpjv-worker-b-snzrj Liveness probe failed: HTTP probe failed with statuscode: 500 Normal Killing 10s (x2 over 28s) kubelet, ci-ln-37hz77b-f76d1-wdpjv-worker-b-snzrj Container liveness failed liveness probe, will be restarted Normal Pulling 10s (x3 over 47s) kubelet, ci-ln-37hz77b-f76d1-wdpjv-worker-b-snzrj Pulling image \"registry.k8s.io/liveness\" Normal Pulled 10s kubelet, ci-ln-37hz77b-f76d1-wdpjv-worker-b-snzrj Successfully pulled image \"registry.k8s.io/liveness\" in 244.116568ms",
"oc adm prune <object_type> <options>",
"oc adm prune groups --sync-config=path/to/sync/config [<options>]",
"oc adm prune groups --sync-config=ldap-sync-config.yaml",
"oc adm prune groups --sync-config=ldap-sync-config.yaml --confirm",
"oc adm prune deployments [<options>]",
"oc adm prune deployments --orphans --keep-complete=5 --keep-failed=1 --keep-younger-than=60m",
"oc adm prune deployments --orphans --keep-complete=5 --keep-failed=1 --keep-younger-than=60m --confirm",
"oc adm prune builds [<options>]",
"oc adm prune builds --orphans --keep-complete=5 --keep-failed=1 --keep-younger-than=60m",
"oc adm prune builds --orphans --keep-complete=5 --keep-failed=1 --keep-younger-than=60m --confirm",
"spec: schedule: 0 0 * * * 1 suspend: false 2 keepTagRevisions: 3 3 keepYoungerThanDuration: 60m 4 keepYoungerThan: 3600000000000 5 resources: {} 6 affinity: {} 7 nodeSelector: {} 8 tolerations: [] 9 successfulJobsHistoryLimit: 3 10 failedJobsHistoryLimit: 3 11 status: observedGeneration: 2 12 conditions: 13 - type: Available status: \"True\" lastTransitionTime: 2019-10-09T03:13:45 reason: Ready message: \"Periodic image pruner has been created.\" - type: Scheduled status: \"True\" lastTransitionTime: 2019-10-09T03:13:45 reason: Scheduled message: \"Image pruner job has been scheduled.\" - type: Failed staus: \"False\" lastTransitionTime: 2019-10-09T03:13:45 reason: Succeeded message: \"Most recent image pruning job succeeded.\"",
"oc create -f <filename>.yaml",
"kind: List apiVersion: v1 items: - apiVersion: v1 kind: ServiceAccount metadata: name: pruner namespace: openshift-image-registry - apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRoleBinding metadata: name: openshift-image-registry-pruner roleRef: apiGroup: rbac.authorization.k8s.io kind: ClusterRole name: system:image-pruner subjects: - kind: ServiceAccount name: pruner namespace: openshift-image-registry - apiVersion: batch/v1 kind: CronJob metadata: name: image-pruner namespace: openshift-image-registry spec: schedule: \"0 0 * * *\" concurrencyPolicy: Forbid successfulJobsHistoryLimit: 1 failedJobsHistoryLimit: 3 jobTemplate: spec: template: spec: restartPolicy: OnFailure containers: - image: \"quay.io/openshift/origin-cli:4.1\" resources: requests: cpu: 1 memory: 1Gi terminationMessagePolicy: FallbackToLogsOnError command: - oc args: - adm - prune - images - --certificate-authority=/var/run/secrets/kubernetes.io/serviceaccount/service-ca.crt - --keep-tag-revisions=5 - --keep-younger-than=96h - --confirm=true name: image-pruner serviceAccountName: pruner",
"oc adm prune images [<options>]",
"oc rollout restart deployment/image-registry -n openshift-image-registry",
"oc adm prune images --keep-tag-revisions=3 --keep-younger-than=60m",
"oc adm prune images --prune-over-size-limit",
"oc adm prune images --keep-tag-revisions=3 --keep-younger-than=60m --confirm",
"oc adm prune images --prune-over-size-limit --confirm",
"oc get is -n <namespace> -o go-template='{{range USDisi, USDis := .items}}{{range USDti, USDtag := USDis.status.tags}}' '{{range USDii, USDitem := USDtag.items}}{{if eq USDitem.image \"sha256:<hash>\"}}{{USDis.metadata.name}}:{{USDtag.tag}} at position {{USDii}} out of {{len USDtag.items}}\\n' '{{end}}{{end}}{{end}}{{end}}'",
"myapp:v2 at position 4 out of 5 myapp:v2.1 at position 2 out of 2 myapp:v2.1-may-2016 at position 0 out of 1",
"error: error communicating with registry: Get https://172.30.30.30:5000/healthz: http: server gave HTTP response to HTTPS client",
"error: error communicating with registry: Get http://172.30.30.30:5000/healthz: malformed HTTP response \"\\x15\\x03\\x01\\x00\\x02\\x02\" error: error communicating with registry: [Get https://172.30.30.30:5000/healthz: x509: certificate signed by unknown authority, Get http://172.30.30.30:5000/healthz: malformed HTTP response \"\\x15\\x03\\x01\\x00\\x02\\x02\"]",
"error: error communicating with registry: Get https://172.30.30.30:5000/: x509: certificate signed by unknown authority",
"oc patch configs.imageregistry.operator.openshift.io/cluster -p '{\"spec\":{\"readOnly\":true}}' --type=merge",
"service_account=USD(oc get -n openshift-image-registry -o jsonpath='{.spec.template.spec.serviceAccountName}' deploy/image-registry)",
"oc adm policy add-cluster-role-to-user system:image-pruner -z USD{service_account} -n openshift-image-registry",
"oc -n openshift-image-registry exec pod/image-registry-3-vhndw -- /bin/sh -c '/usr/bin/dockerregistry -prune=check'",
"oc -n openshift-image-registry exec pod/image-registry-3-vhndw -- /bin/sh -c 'REGISTRY_LOG_LEVEL=info /usr/bin/dockerregistry -prune=check'",
"time=\"2017-06-22T11:50:25.066156047Z\" level=info msg=\"start prune (dry-run mode)\" distribution_version=\"v2.4.1+unknown\" kubernetes_version=v1.6.1+USDFormat:%hUSD openshift_version=unknown time=\"2017-06-22T11:50:25.092257421Z\" level=info msg=\"Would delete blob: sha256:00043a2a5e384f6b59ab17e2c3d3a3d0a7de01b2cabeb606243e468acc663fa5\" go.version=go1.7.5 instance.id=b097121c-a864-4e0c-ad6c-cc25f8fdf5a6 time=\"2017-06-22T11:50:25.092395621Z\" level=info msg=\"Would delete blob: sha256:0022d49612807cb348cabc562c072ef34d756adfe0100a61952cbcb87ee6578a\" go.version=go1.7.5 instance.id=b097121c-a864-4e0c-ad6c-cc25f8fdf5a6 time=\"2017-06-22T11:50:25.092492183Z\" level=info msg=\"Would delete blob: sha256:0029dd4228961086707e53b881e25eba0564fa80033fbbb2e27847a28d16a37c\" go.version=go1.7.5 instance.id=b097121c-a864-4e0c-ad6c-cc25f8fdf5a6 time=\"2017-06-22T11:50:26.673946639Z\" level=info msg=\"Would delete blob: sha256:ff7664dfc213d6cc60fd5c5f5bb00a7bf4a687e18e1df12d349a1d07b2cf7663\" go.version=go1.7.5 instance.id=b097121c-a864-4e0c-ad6c-cc25f8fdf5a6 time=\"2017-06-22T11:50:26.674024531Z\" level=info msg=\"Would delete blob: sha256:ff7a933178ccd931f4b5f40f9f19a65be5eeeec207e4fad2a5bafd28afbef57e\" go.version=go1.7.5 instance.id=b097121c-a864-4e0c-ad6c-cc25f8fdf5a6 time=\"2017-06-22T11:50:26.674675469Z\" level=info msg=\"Would delete blob: sha256:ff9b8956794b426cc80bb49a604a0b24a1553aae96b930c6919a6675db3d5e06\" go.version=go1.7.5 instance.id=b097121c-a864-4e0c-ad6c-cc25f8fdf5a6 Would delete 13374 blobs Would free up 2.835 GiB of disk space Use -prune=delete to actually delete the data",
"oc -n openshift-image-registry exec pod/image-registry-3-vhndw -- /bin/sh -c '/usr/bin/dockerregistry -prune=delete'",
"Deleted 13374 blobs Freed up 2.835 GiB of disk space",
"oc patch configs.imageregistry.operator.openshift.io/cluster -p '{\"spec\":{\"readOnly\":false}}' --type=merge",
"oc idle <service>",
"oc idle --resource-names-file <filename>",
"oc scale --replicas=1 dc <dc_name>"
]
| https://docs.redhat.com/en/documentation/openshift_container_platform/4.13/html-single/building_applications/index |
Chapter 3. Frequently asked questions | Chapter 3. Frequently asked questions Do you have questions about Trusted Profile Analyzer? Here is a collection of common questions and their answers to help you understand more about Red Hat's Trusted Profile Analyzer service. Q: What is Red Hat's Trusted Profile Analyzer service? Q: How can I use Red Hat's Trusted Profile Analyzer service? Q: What kind of content will be available with the Trusted Profile Analyzer service? Q: What initial content will be available with the Trusted Profile Analyzer Service? Q: How does a Trusted Profile Analyzer SBOM help me? Q: Who uses Red Hat's Trusted Profile Analyzer service? Q: To use Red Hat's Trusted Profile Analyzer service, do I need to learn anything new, or change my development workflows and processes? Q: I am not a Quarkus Java developer, can I still gain any value from Red Hat's Trusted Profile Analyzer service? Q: What is Red Hat's Trusted Profile Analyzer service? A: Red Hat's Trusted Profile Analyzer service is a proactive service that helps you evaluate the security and vulnerability risks of using Open Source Software (OSS) packages and dependencies in your application stack. Q: How can I use Red Hat's Trusted Profile Analyzer service? A: There are two ways you can use Red Hat's Trusted Profile Analyzer service. First, by using the Dependency Analytics extension for integrated development environment (IDE) platforms, such as Microsoft's Visual Studio Code, or JetBrains' IntelliJ IDEA. Using Dependency Analytics gives you in-line guidance on vulnerabilities as you write your application. Second, by searching for Software Bill of Materials (SBOM) and Vulnerability Exploitability eXchange (VEX) information for Red Hat products on Red Hat's Hybrid Cloud Console . Q: What kind of content will be available with the Trusted Profile Analyzer service? A: You have access to application libraries for Java, NodeJS, Python, Go, and Red Hat Enterprise Linux packages. Vulnerability information about open source packages comes directly from internal Red Hat resources, Red Hat's partner ecosystem, such as Snyk, and open source community data sources. Q: What initial content will be available with the Trusted Profile Analyzer Service? A: The following content will be available: Quarkus Java Framework for Java Archive (JAR) files with associated SBOM files. Red Hat Enterprise Linux Universal Base Image (UBI) version 8 and 9 with associated SBOM files. Vulnerability information about open source Java packages. Q: How does a Trusted Profile Analyzer SBOM help me? A: A Trusted Profile Analyzer Software Bill of Materials (SBOM) can help you understand the software components within an application stack, and any related vulnerabilities those software components can have. An SBOM can improve visibility and transparency of open source code within the software supply chain through a component's provenance, license information, and an attestation of how it was built. Q: Who uses Red Hat's Trusted Profile Analyzer service? A: The primary audience for Red Hat's Trusted Profile Analyzer service is Quarkus Java developers, and cloud-native container image builders that use the Red Hat Enterprise Linux UBI. Q: To use Red Hat's Trusted Profile Analyzer service, do I need to learn anything new, or change my development workflows and processes? A: No. Q: I am not a Quarkus Java developer, can I still gain any value from Red Hat's Trusted Profile Analyzer service? A: Yes.
The Trusted Profile Analyzer service still provides security risk information about open source packages that are not currently included in the Trusted Profile Analyzer repository. | null | https://docs.redhat.com/en/documentation/red_hat_trusted_profile_analyzer/1/html/reference_guide/frequently-asked-questions_ref |
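A quick way to try the IDE route mentioned in the answers above is to install the Dependency Analytics extension from a shell; this is only an illustrative sketch, and the extension identifier used here is an assumption rather than something stated in the FAQ:
code --install-extension redhat.fabric8-analytics
Once the editor restarts, opening a manifest such as pom.xml or package.json should surface the in-line vulnerability guidance described above.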
8.239. xorg-x11-drv-mga | 8.239. xorg-x11-drv-mga 8.239.1. RHBA-2013:1610 - xorg-x11-drv-mga bug fix update Updated xorg-x11-drv-mga packages that fix two bugs are now available for Red Hat Enterprise Linux 6. The xorg-x11-drv-mga packages provide a video driver for Matrox G-series chipsets for the X.Org implementation of the X Window System. Bug Fixes BZ# 894959 Prior to this update, the graphical user interface could appear distorted on 19-inch monitors with the 16:9 ratio. The xorg-x11-drv-mga packages have been fixed, and so the distortion no longer occurs in this scenario. BZ# 918017 Previously, resolutions higher than 1440x900 were not available with Red Hat Enterprise Linux 6.4 using the MGA G200e chips. Consequently, the Matrox driver did not allow native resolutions to be reached for many monitors. With this update, the X Server no longer discards larger resolution modes, and resolutions higher than 1440x900 are now available. Users of xorg-x11-drv-mga are advised to upgrade to these updated packages, which fix these bugs. | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/6.5_technical_notes/xorg-x11-drv-mga |
Chapter 5. Contexts and Dependency Injection (CDI) in Camel Quarkus | Chapter 5. Contexts and Dependency Injection (CDI) in Camel Quarkus CDI plays a central role in Quarkus and Camel Quarkus offers a first class support for it too. You may use @Inject , @ConfigProperty and similar annotations e.g. to inject beans and configuration values to your Camel RouteBuilder , for example: import jakarta.enterprise.context.ApplicationScoped; import jakarta.inject.Inject; import org.apache.camel.builder.RouteBuilder; import org.eclipse.microprofile.config.inject.ConfigProperty; @ApplicationScoped 1 public class TimerRoute extends RouteBuilder { @ConfigProperty(name = "timer.period", defaultValue = "1000") 2 String period; @Inject Counter counter; @Override public void configure() throws Exception { fromF("timer:foo?period=%s", period) .setBody(exchange -> "Incremented the counter: " + counter.increment()) .to("log:cdi-example?showExchangePattern=false&showBodyType=false"); } } 1 The @ApplicationScoped annotation is required for @Inject and @ConfigProperty to work in a RouteBuilder . Note that the @ApplicationScoped beans are managed by the CDI container and their life cycle is thus a bit more complex than the one of the plain RouteBuilder . In other words, using @ApplicationScoped in RouteBuilder comes with some boot time penalty and you should therefore only annotate your RouteBuilder with @ApplicationScoped when you really need it. 2 The value for the timer.period property is defined in src/main/resources/application.properties of the example project. Tip Refer to the Quarkus Dependency Injection guide for more details. 5.1. Accessing CamelContext To access CamelContext just inject it into your bean: import jakarta.inject.Inject; import jakarta.enterprise.context.ApplicationScoped; import java.util.stream.Collectors; import java.util.List; import org.apache.camel.CamelContext; @ApplicationScoped public class MyBean { @Inject CamelContext context; public List<String> listRouteIds() { return context.getRoutes().stream().map(Route::getId).sorted().collect(Collectors.toList()); } } 5.2. @EndpointInject and @Produce If you are used to @org.apache.camel.EndpointInject and @org.apache.camel.Produce from plain Camel or from Camel on SpringBoot, you can continue using them on Quarkus too. The following use cases are supported by org.apache.camel.quarkus:camel-quarkus-core : import jakarta.enterprise.context.ApplicationScoped; import org.apache.camel.EndpointInject; import org.apache.camel.FluentProducerTemplate; import org.apache.camel.Produce; import org.apache.camel.ProducerTemplate; @ApplicationScoped class MyBean { @EndpointInject("direct:myDirect1") ProducerTemplate producerTemplate; @EndpointInject("direct:myDirect2") FluentProducerTemplate fluentProducerTemplate; @EndpointInject("direct:myDirect3") DirectEndpoint directEndpoint; @Produce("direct:myDirect4") ProducerTemplate produceProducer; @Produce("direct:myDirect5") FluentProducerTemplate produceProducerFluent; } You can use any other Camel producer endpoint URI instead of direct:myDirect* . 
Warning @EndpointInject and @Produce are not supported on setter methods - see #2579 The following use case is supported by org.apache.camel.quarkus:camel-quarkus-bean : import jakarta.enterprise.context.ApplicationScoped; import org.apache.camel.Produce; @ApplicationScoped class MyProduceBean { public interface ProduceInterface { String sayHello(String name); } @Produce("direct:myDirect6") ProduceInterface produceInterface; void doSomething() { produceInterface.sayHello("Kermit") } } 5.3. CDI and the Camel Bean component 5.3.1. Refer to a bean by name To refer to a bean in a route definition by name, just annotate the bean with @Named("myNamedBean") and @ApplicationScoped (or some other supported scope). The @RegisterForReflection annotation is important for the native mode. import jakarta.enterprise.context.ApplicationScoped; import jakarta.inject.Named; import io.quarkus.runtime.annotations.RegisterForReflection; @ApplicationScoped @Named("myNamedBean") @RegisterForReflection public class NamedBean { public String hello(String name) { return "Hello " + name + " from the NamedBean"; } } Then you can use the myNamedBean name in a route definition: import org.apache.camel.builder.RouteBuilder; public class CamelRoute extends RouteBuilder { @Override public void configure() { from("direct:named") .bean("myNamedBean", "hello"); /* ... which is an equivalent of the following: */ from("direct:named") .to("bean:myNamedBean?method=hello"); } } As an alternative to @Named , you may also use io.smallrye.common.annotation.Identifier to name and identify a bean. import jakarta.enterprise.context.ApplicationScoped; import io.quarkus.runtime.annotations.RegisterForReflection; import io.smallrye.common.annotation.Identifier; @ApplicationScoped @Identifier("myBeanIdentifier") @RegisterForReflection public class MyBean { public String hello(String name) { return "Hello " + name + " from MyBean"; } } Then refer to the identifier value within the Camel route: import org.apache.camel.builder.RouteBuilder; public class CamelRoute extends RouteBuilder { @Override public void configure() { from("direct:start") .bean("myBeanIdentifier", "Camel"); } } Note We aim at supporting all use cases listed in Bean binding section of Camel documentation. Do not hesitate to file an issue if some bean binding scenario does not work for you. 5.3.2. @Consume Since Camel Quarkus 2.0.0, the camel-quarkus-bean artifact brings support for @org.apache.camel.Consume - see the Pojo consuming section of Camel documentation. Declaring a class like the following import org.apache.camel.Consume; public class Foo { @Consume("activemq:cheese") public void onCheese(String name) { ... } } will automatically create the following Camel route from("activemq:cheese").bean("foo1234", "onCheese") for you. Note that Camel Quarkus will implicitly add @jakarta.inject.Singleton and jakarta.inject.Named("foo1234") to the bean class, where 1234 is a hash code obtained from the fully qualified class name. If your bean has some CDI scope (such as @ApplicationScoped ) or @Named("someName") set already, those will be honored in the auto-created route. | [
"import jakarta.enterprise.context.ApplicationScoped; import jakarta.inject.Inject; import org.apache.camel.builder.RouteBuilder; import org.eclipse.microprofile.config.inject.ConfigProperty; @ApplicationScoped 1 public class TimerRoute extends RouteBuilder { @ConfigProperty(name = \"timer.period\", defaultValue = \"1000\") 2 String period; @Inject Counter counter; @Override public void configure() throws Exception { fromF(\"timer:foo?period=%s\", period) .setBody(exchange -> \"Incremented the counter: \" + counter.increment()) .to(\"log:cdi-example?showExchangePattern=false&showBodyType=false\"); } }",
"import jakarta.inject.Inject; import jakarta.enterprise.context.ApplicationScoped; import java.util.stream.Collectors; import java.util.List; import org.apache.camel.CamelContext; @ApplicationScoped public class MyBean { @Inject CamelContext context; public List<String> listRouteIds() { return context.getRoutes().stream().map(Route::getId).sorted().collect(Collectors.toList()); } }",
"import jakarta.enterprise.context.ApplicationScoped; import org.apache.camel.EndpointInject; import org.apache.camel.FluentProducerTemplate; import org.apache.camel.Produce; import org.apache.camel.ProducerTemplate; @ApplicationScoped class MyBean { @EndpointInject(\"direct:myDirect1\") ProducerTemplate producerTemplate; @EndpointInject(\"direct:myDirect2\") FluentProducerTemplate fluentProducerTemplate; @EndpointInject(\"direct:myDirect3\") DirectEndpoint directEndpoint; @Produce(\"direct:myDirect4\") ProducerTemplate produceProducer; @Produce(\"direct:myDirect5\") FluentProducerTemplate produceProducerFluent; }",
"import jakarta.enterprise.context.ApplicationScoped; import org.apache.camel.Produce; @ApplicationScoped class MyProduceBean { public interface ProduceInterface { String sayHello(String name); } @Produce(\"direct:myDirect6\") ProduceInterface produceInterface; void doSomething() { produceInterface.sayHello(\"Kermit\") } }",
"import jakarta.enterprise.context.ApplicationScoped; import jakarta.inject.Named; import io.quarkus.runtime.annotations.RegisterForReflection; @ApplicationScoped @Named(\"myNamedBean\") @RegisterForReflection public class NamedBean { public String hello(String name) { return \"Hello \" + name + \" from the NamedBean\"; } }",
"import org.apache.camel.builder.RouteBuilder; public class CamelRoute extends RouteBuilder { @Override public void configure() { from(\"direct:named\") .bean(\"myNamedBean\", \"hello\"); /* ... which is an equivalent of the following: */ from(\"direct:named\") .to(\"bean:myNamedBean?method=hello\"); } }",
"import jakarta.enterprise.context.ApplicationScoped; import io.quarkus.runtime.annotations.RegisterForReflection; import io.smallrye.common.annotation.Identifier; @ApplicationScoped @Identifier(\"myBeanIdentifier\") @RegisterForReflection public class MyBean { public String hello(String name) { return \"Hello \" + name + \" from MyBean\"; } }",
"import org.apache.camel.builder.RouteBuilder; public class CamelRoute extends RouteBuilder { @Override public void configure() { from(\"direct:start\") .bean(\"myBeanIdentifier\", \"Camel\"); } }",
"import org.apache.camel.Consume; public class Foo { @Consume(\"activemq:cheese\") public void onCheese(String name) { } }",
"from(\"activemq:cheese\").bean(\"foo1234\", \"onCheese\")"
]
| https://docs.redhat.com/en/documentation/red_hat_build_of_apache_camel/4.8/html/developing_applications_with_red_hat_build_of_apache_camel_for_quarkus/camel-quarkus-extensions-cdi |
function::task_tid | function::task_tid Name function::task_tid - The thread identifier of the task Synopsis Arguments task task_struct pointer Description This function returns the thread id of the given task. | [
"task_tid:long(task:long)"
]
| https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/systemtap_tapset_reference/api-task-tid |
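A minimal way to exercise this function is a SystemTap one-liner run from a shell; this is only a sketch, and the timer probe and the task_current() helper come from the standard tapsets rather than from the entry above:
stap -e 'probe timer.s(5) { printf("current tid: %d\n", task_tid(task_current())) }'
Run as root, this prints the thread id of whatever task is current each time the timer fires.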
A.2. Investigating kinit Authentication Failures | A.2. Investigating kinit Authentication Failures General Troubleshooting On the IdM client, display the debug messages from the kinit process: Verify that: The client forward record is correct both on the server and on the affected client: The server forward record is correct both on the server and on the affected client: The host server_IP_address command must return a fully qualified host name with a trailing dot at the end, such as: Review the /etc/hosts file on the client, and make sure that: All server entries in the file are correct In all server entries, the first name is a fully qualified domain name See also the section called "The /etc/hosts File" . Make sure you meet the other conditions in Section 2.1.5, "Host Name and DNS Configuration" . On the IdM server, make sure that the krb5kdc and dirsrv services are running: Review the Kerberos key distribution center (KDC) log: /var/log/krb5kdc.log . If the KDCs are hard-coded in the /etc/krb5.conf file (the file explicitly sets KDC directives and uses the dns_lookup_kdc = false setting), use the ipactl status command on each master server. Check the status of the IdM services on each server listed as KDC by the command: Troubleshooting Errors Cannot find KDC for realm If kinit authentication fails with an error that says Cannot find KDC for realm "EXAMPLE.COM" while getting initial credentials , it indicates that KDC is not running on the server or that the client has misconfigured DNS. In this situation, try these steps: If the DNS discovery is enabled in the /etc/krb5.conf file (the dns_lookup_kdc = true setting), use the dig utility to check whether the following records are resolvable: In the following example, one of the dig commands above failed with this output: The output indicated that the named service was not running on the master server. If DNS lookup fails, continue with the steps in Section A.6, "Troubleshooting DNS" . Related Information See Section C.2, "Identity Management Log Files and Directories" for descriptions of various Identity Management log files. | [
"KRB5_TRACE=/dev/stdout kinit admin",
"host client_fully_qualified_domain_name",
"host server_fully_qualified_domain_name",
"host server_IP_address",
"server.example.com.",
"systemctl status krb5kdc # systemctl status dirsrv.target",
"ipactl status Directory Service: RUNNING krb5kdc Service: RUNNING kadmin Service: RUNNING named Service: RUNNING httpd Service: RUNNING ipa-custodia Service: RUNNING ntpd Service: RUNNING pki-tomcatd Service: RUNNING ipa-otpd Service: RUNNING ipa-dnskeysyncd Service: RUNNING ipa: INFO: The ipactl command was successful",
"dig -t TXT _kerberos. ipa.example.com USD dig -t SRV _kerberos._udp. ipa.example.com USD dig -t SRV _kerberos._tcp. ipa.example.com",
"; <<>> DiG 9.11.0-P2-RedHat-9.11.0-6.P2.fc25 <<>> -t SRV _kerberos._tcp.ipa.server.example ;; global options: +cmd ;; connection timed out; no servers could be reached"
]
| https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/linux_domain_identity_authentication_and_policy_guide/trouble-gen-kinit-auth |
34.2. Configuring the Subscription Service | 34.2. Configuring the Subscription Service The products installed on a system (including the operating system itself) are covered by subscriptions . A subscription service is used to track registered systems, the products installed on those systems, and the subscriptions attached to the system to cover those products. The Subscription Management Registration screens identify which subscription service to use and, by default, attach the best-matched subscriptions to the system. More information about subscription management is available in the Red Hat Subscription Management guide. 34.2.1. Set Up Software Updates The first step is to select whether to register the system immediately with a subscription service. To register the system, select Yes, I'd like to register now , and click Forward . Figure 34.3. Set Up Software Updates Note Even if a system is not registered at firstboot, it can be registered with any of those three subscription services later, using the Red Hat Subscription Manager tools [13] . More information about the Red Hat Subscription Manager tools can be found in the Red Hat Subscription Management Guide . [13] Systems can also be registered with Satellite or RHN Classic. For Satellite information, see the Satellite documentation. For information on using RHN Classic, see the appendix in the Red Hat Subscription Management Guide . | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/installation_guide/sn-firstboot-updates |
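The note above states that a system can be registered after firstboot with the Red Hat Subscription Manager tools. A minimal command-line sketch follows; the username is a placeholder and the commands assume the system has network access to the subscription service:
subscription-manager register --username <username>
subscription-manager attach --auto
The register command prompts for the account password, and attach --auto attaches the best-matched subscriptions, mirroring what the graphical firstboot screens do.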
Chapter 3. Installing OpenShift Virtualization | Chapter 3. Installing OpenShift Virtualization 3.1. Preparing your cluster for OpenShift Virtualization Review this section before you install OpenShift Virtualization to ensure that your cluster meets the requirements. Important You can use any installation method, including user-provisioned, installer-provisioned, or assisted installer, to deploy OpenShift Container Platform. However, the installation method and the cluster topology might affect OpenShift Virtualization functionality, such as snapshots or live migration. FIPS mode If you install your cluster in FIPS mode , no additional setup is required for OpenShift Virtualization. 3.1.1. Hardware and operating system requirements Review the following hardware and operating system requirements for OpenShift Virtualization. Supported platforms On-premise bare metal servers Amazon Web Services bare metal instances Important Installing OpenShift Virtualization on an AWS bare metal instance is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. For more information about the support scope of Red Hat Technology Preview features, see https://access.redhat.com/support/offerings/techpreview/ . Bare metal instances or servers offered by other cloud providers are not supported. CPU requirements Supported by Red Hat Enterprise Linux (RHEL) 8 Support for Intel 64 or AMD64 CPU extensions Intel VT or AMD-V hardware virtualization extensions enabled NX (no execute) flag enabled Storage requirements Supported by OpenShift Container Platform Operating system requirements Red Hat Enterprise Linux CoreOS (RHCOS) installed on worker nodes Note RHEL worker nodes are not supported. Additional resources About RHCOS Red Hat Ecosystem Catalog for supported CPUs Supported storage 3.1.2. Physical resource overhead requirements OpenShift Virtualization is an add-on to OpenShift Container Platform and imposes additional overhead that you must account for when planning a cluster. Each cluster machine must accommodate the following overhead requirements in addition to the OpenShift Container Platform requirements. Oversubscribing the physical resources in a cluster can affect performance. Important The numbers noted in this documentation are based on Red Hat's test methodology and setup. These numbers can vary based on your own individual setup and environments. 3.1.2.1. Memory overhead Calculate the memory overhead values for OpenShift Virtualization by using the equations below. Cluster memory overhead Additionally, OpenShift Virtualization environment resources require a total of 2179 MiB of RAM that is spread across all infrastructure nodes. Virtual machine memory overhead 1 Number of virtual CPUs requested by the virtual machine 2 Number of virtual graphics cards requested by the virtual machine If your environment includes a Single Root I/O Virtualization (SR-IOV) network device or a Graphics Processing Unit (GPU), allocate 1 GiB additional memory overhead for each device. 3.1.2.2. CPU overhead Calculate the cluster processor overhead requirements for OpenShift Virtualization by using the equation below. The CPU overhead per virtual machine depends on your individual setup. 
Cluster CPU overhead OpenShift Virtualization increases the overall utilization of cluster level services such as logging, routing, and monitoring. To account for this workload, ensure that nodes that host infrastructure components have capacity allocated for 4 additional cores (4000 millicores) distributed across those nodes. Each worker node that hosts virtual machines must have capacity for 2 additional cores (2000 millicores) for OpenShift Virtualization management workloads in addition to the CPUs required for virtual machine workloads. Virtual machine CPU overhead If dedicated CPUs are requested, there is a 1:1 impact on the cluster CPU overhead requirement. Otherwise, there are no specific rules about how many CPUs a virtual machine requires. 3.1.2.3. Storage overhead Use the guidelines below to estimate storage overhead requirements for your OpenShift Virtualization environment. Cluster storage overhead 10 GiB is the estimated on-disk storage impact for each node in the cluster when you install OpenShift Virtualization. Virtual machine storage overhead Storage overhead per virtual machine depends on specific requests for resource allocation within the virtual machine. The request could be for ephemeral storage on the node or storage resources hosted elsewhere in the cluster. OpenShift Virtualization does not currently allocate any additional ephemeral storage for the running container itself. 3.1.2.4. Example As a cluster administrator, if you plan to host 10 virtual machines in the cluster, each with 1 GiB of RAM and 2 vCPUs, the memory impact across the cluster is 11.68 GiB. The estimated on-disk storage impact for each node in the cluster is 10 GiB and the CPU impact for worker nodes that host virtual machine workloads is a minimum of 2 cores. 3.1.3. Object maximums You must consider the following tested object maximums when planning your cluster: OpenShift Container Platform object maximums OpenShift Virtualization object maximums 3.1.4. Restricted network environments If you install OpenShift Virtualization in a restricted environment with no internet connectivity, you must configure Operator Lifecycle Manager for restricted networks . If you have limited internet connectivity, you can configure proxy support in Operator Lifecycle Manager to access the Red Hat-provided OperatorHub. 3.1.5. Live migration Live migration has the following requirements: Shared storage with ReadWriteMany (RWX) access mode Sufficient RAM and network bandwidth Appropriate CPUs with sufficient capacity on the worker nodes. If the CPUs have different capacities, live migration might be very slow or fail. 3.1.6. Snapshots and cloning See OpenShift Virtualization storage features for snapshot and cloning requirements. 3.1.7. Cluster high-availability Automatic high availability for installer-provisioned infrastructure (IPI) is available by deploying machine health checks . Note In OpenShift Container Platform clusters installed using installer-provisioned infrastructure and with MachineHealthCheck properly configured, if a node fails the MachineHealthCheck and becomes unavailable to the cluster, it is recycled. What happens with VMs that ran on the failed node depends on a series of conditions. See About RunStrategies for virtual machines for more detailed information about the potential outcomes and how RunStrategies affect those outcomes. 3.2. 
Specifying nodes for OpenShift Virtualization components Specify the nodes where you want to deploy OpenShift Virtualization Operators, workloads, and controllers by configuring node placement rules. Note You can configure node placement for some components after installing OpenShift Virtualization, but there must not be virtual machines present if you want to configure node placement for workloads. 3.2.1. About node placement for virtualization components You might want to customize where OpenShift Virtualization deploys its components to ensure that: Virtual machines only deploy on nodes that are intended for virtualization workloads. Operators only deploy on infrastructure nodes. Certain nodes are unaffected by OpenShift Virtualization. For example, you have workloads unrelated to virtualization running on your cluster, and you want those workloads to be isolated from OpenShift Virtualization. 3.2.1.1. How to apply node placement rules to virtualization components You can specify node placement rules for a component by editing the corresponding object directly or by using the web console. For the OpenShift Virtualization Operators that Operator Lifecycle Manager (OLM) deploys, edit the OLM Subscription object directly. Currently, you cannot configure node placement rules for the Subscription object by using the web console. For components that the OpenShift Virtualization Operators deploy, edit the HyperConverged object directly or configure it by using the web console during OpenShift Virtualization installation. For the hostpath provisioner, edit the HostPathProvisioner object directly or configure it by using the web console. Warning You must schedule the hostpath provisioner and the virtualization components on the same nodes. Otherwise, virtualization pods that use the hostpath provisioner cannot run. Depending on the object, you can use one or more of the following rule types: nodeSelector Allows pods to be scheduled on nodes that are labeled with the key-value pair or pairs that you specify in this field. The node must have labels that exactly match all listed pairs. affinity Enables you to use more expressive syntax to set rules that match nodes with pods. Affinity also allows for more nuance in how the rules are applied. For example, you can specify that a rule is a preference, rather than a hard requirement, so that pods are still scheduled if the rule is not satisfied. tolerations Allows pods to be scheduled on nodes that have matching taints. If a taint is applied to a node, that node only accepts pods that tolerate the taint. 3.2.1.2. Node placement in the OLM Subscription object To specify the nodes where OLM deploys the OpenShift Virtualization Operators, edit the Subscription object during OpenShift Virtualization installation. You can include node placement rules in the spec.config field, as shown in the following example: apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: hco-operatorhub namespace: openshift-cnv spec: source: redhat-operators sourceNamespace: openshift-marketplace name: kubevirt-hyperconverged startingCSV: kubevirt-hyperconverged-operator.v2.6.10 channel: "stable" config: 1 1 The config field supports nodeSelector and tolerations , but it does not support affinity . 3.2.1.3. 
Node placement in the HyperConverged object To specify the nodes where OpenShift Virtualization deploys its components, you can include the nodePlacement object in the HyperConverged Cluster custom resource (CR) file that you create during OpenShift Virtualization installation. You can include nodePlacement under the spec.infra and spec.workloads fields, as shown in the following example: apiVersion: hco.kubevirt.io/v1beta1 kind: HyperConverged metadata: name: kubevirt-hyperconverged namespace: openshift-cnv spec: infra: nodePlacement: 1 ... workloads: nodePlacement: ... 1 The nodePlacement fields support nodeSelector , affinity , and tolerations fields. 3.2.1.4. Node placement in the HostPathProvisioner object You can configure node placement rules in the spec.workload field of the HostPathProvisioner object that you create when you install the hostpath provisioner. apiVersion: hostpathprovisioner.kubevirt.io/v1beta1 kind: HostPathProvisioner metadata: name: hostpath-provisioner spec: imagePullPolicy: IfNotPresent pathConfig: path: "</path/to/backing/directory>" useNamingPrefix: false workload: 1 1 The workload field supports nodeSelector , affinity , and tolerations fields. 3.2.1.5. Additional resources Placing pods on specific nodes using node selectors Controlling pod placement on nodes using node affinity rules Controlling pod placement using node taints Installing OpenShift Virtualization using the CLI Installing OpenShift Virtualization using the web console Configuring local storage for virtual machines 3.2.2. Example manifests The following example YAML files use nodePlacement , affinity , and tolerations objects to customize node placement for OpenShift Virtualization components. 3.2.2.1. Operator Lifecycle Manager Subscription object 3.2.2.1.1. Example: Node placement with nodeSelector in the OLM Subscription object In this example, nodeSelector is configured so that OLM places the OpenShift Virtualization Operators on nodes that are labeled with example.io/example-infra-key = example-infra-value . apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: hco-operatorhub namespace: openshift-cnv spec: source: redhat-operators sourceNamespace: openshift-marketplace name: kubevirt-hyperconverged startingCSV: kubevirt-hyperconverged-operator.v2.6.10 channel: "stable" config: nodeSelector: example.io/example-infra-key: example-infra-value 3.2.2.1.2. Example: Node placement with tolerations in the OLM Subscription object In this example, nodes that are reserved for OLM to deploy OpenShift Virtualization Operators are labeled with the key=virtualization:NoSchedule taint. Only pods with the matching tolerations are scheduled to these nodes. apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: hco-operatorhub namespace: openshift-cnv spec: source: redhat-operators sourceNamespace: openshift-marketplace name: kubevirt-hyperconverged startingCSV: kubevirt-hyperconverged-operator.v2.6.10 channel: "stable" config: tolerations: - key: "key" operator: "Equal" value: "virtualization" effect: "NoSchedule" 3.2.2.2. HyperConverged object 3.2.2.2.1. Example: Node placement with nodeSelector in the HyperConverged Cluster CR In this example, nodeSelector is configured so that infrastructure resources are placed on nodes that are labeled with example.io/example-infra-key = example-infra-value and workloads are placed on nodes labeled with example.io/example-workloads-key = example-workloads-value . 
apiVersion: hco.kubevirt.io/v1beta1 kind: HyperConverged metadata: name: kubevirt-hyperconverged namespace: openshift-cnv spec: infra: nodePlacement: nodeSelector: example.io/example-infra-key: example-infra-value workloads: nodePlacement: nodeSelector: example.io/example-workloads-key: example-workloads-value 3.2.2.2.2. Example: Node placement with affinity in the HyperConverged Cluster CR In this example, affinity is configured so that infrastructure resources are placed on nodes that are labeled with example.io/example-infra-key = example-value and workloads are placed on nodes labeled with example.io/example-workloads-key = example-workloads-value . Nodes that have more than eight CPUs are preferred for workloads, but if they are not available, pods are still scheduled. apiVersion: hco.kubevirt.io/v1beta1 kind: HyperConverged metadata: name: kubevirt-hyperconverged namespace: openshift-cnv spec: infra: nodePlacement: affinity: nodeAffinity: requiredDuringSchedulingIgnoredDuringExecution: nodeSelectorTerms: - matchExpressions: - key: example.io/example-infra-key operator: In values: - example-infra-value workloads: nodePlacement: affinity: nodeAffinity: requiredDuringSchedulingIgnoredDuringExecution: nodeSelectorTerms: - matchExpressions: - key: example.io/example-workloads-key operator: In values: - example-workloads-value preferredDuringSchedulingIgnoredDuringExecution: - weight: 1 preference: matchExpressions: - key: example.io/num-cpus operator: gt values: - 8 3.2.2.2.3. Example: Node placement with tolerations in the HyperConverged Cluster CR In this example, nodes that are reserved for OpenShift Virtualization components are labeled with the key=virtualization:NoSchedule taint. Only pods with the matching tolerations are scheduled to these nodes. apiVersion: hco.kubevirt.io/v1beta1 kind: HyperConverged metadata: name: kubevirt-hyperconverged namespace: openshift-cnv spec: workloads: nodePlacement: tolerations: - key: "key" operator: "Equal" value: "virtualization" effect: "NoSchedule" 3.2.2.3. HostPathProvisioner object 3.2.2.3.1. Example: Node placement with nodeSelector in the HostPathProvisioner object In this example, nodeSelector is configured so that workloads are placed on nodes labeled with example.io/example-workloads-key = example-workloads-value . apiVersion: hostpathprovisioner.kubevirt.io/v1beta1 kind: HostPathProvisioner metadata: name: hostpath-provisioner spec: imagePullPolicy: IfNotPresent pathConfig: path: "</path/to/backing/directory>" useNamingPrefix: false workload: nodeSelector: example.io/example-workloads-key: example-workloads-value 3.3. Installing OpenShift Virtualization using the web console Install OpenShift Virtualization to add virtualization functionality to your OpenShift Container Platform cluster. You can use the OpenShift Container Platform 4.7 web console to subscribe to and deploy the OpenShift Virtualization Operators. 3.3.1. Installing the OpenShift Virtualization Operator You can install the OpenShift Virtualization Operator from the OpenShift Container Platform web console. Prerequisites Install OpenShift Container Platform 4.7 on your cluster. Log in to the OpenShift Container Platform web console as a user with cluster-admin permissions. Procedure Open a browser window and log in to the OpenShift Container Platform web console. From the Administrator perspective, click Operators OperatorHub . In the Filter by keyword field, type OpenShift Virtualization . Select the OpenShift Virtualization tile. 
Read the information about the Operator and click Install . On the Install Operator page: Select stable from the list of available Update Channel options. This ensures that: You install the version of OpenShift Virtualization that is compatible with your OpenShift Container Platform version. When you update OpenShift Container Platform, OpenShift Virtualization automatically updates to the next minor version. For Installed Namespace , ensure that the Operator recommended namespace option is selected. This installs the Operator in the mandatory openshift-cnv namespace, which is automatically created if it does not exist. Warning Attempting to install the OpenShift Virtualization Operator in a namespace other than openshift-cnv causes the installation to fail. For Approval Strategy , ensure that Automatic , which is the default value, is selected. OpenShift Virtualization automatically updates when a new z-stream release is available. Click Install to make the Operator available to the openshift-cnv namespace. When the Operator installs successfully, click Create HyperConverged . Optional: Configure Infra and Workloads node placement options for OpenShift Virtualization components. Click Create to launch OpenShift Virtualization. Verification Navigate to the Workloads Pods page and monitor the OpenShift Virtualization pods until they are all Running . After all the pods display the Running state, you can use OpenShift Virtualization. 3.3.2. Next steps You might want to additionally configure the following components: The KubeMacPool component provides a MAC address pool service for virtual machine NICs in designated namespaces. Enable a MAC address pool in a namespace by applying the KubeMacPool label to that namespace. The hostpath provisioner is a local storage provisioner designed for OpenShift Virtualization. If you want to configure local storage for virtual machines, you must enable the hostpath provisioner first. Install OpenShift Virtualization to add virtualization functionality to your OpenShift Container Platform cluster. You can subscribe to and deploy the OpenShift Virtualization Operators by using the command line to apply manifests to your cluster. Note To specify the nodes where you want OpenShift Virtualization to install its components, configure node placement rules . 3.3.3. Prerequisites Install OpenShift Container Platform 4.7 on your cluster. Install the OpenShift CLI ( oc ). Log in as a user with cluster-admin privileges. 3.3.4. Subscribing to the OpenShift Virtualization catalog by using the CLI Before you install OpenShift Virtualization, you must subscribe to the OpenShift Virtualization catalog. Subscribing gives the openshift-cnv namespace access to the OpenShift Virtualization Operators. To subscribe, configure Namespace , OperatorGroup , and Subscription objects by applying a single manifest to your cluster.
Procedure Create a YAML file that contains the following manifest: apiVersion: v1 kind: Namespace metadata: name: openshift-cnv --- apiVersion: operators.coreos.com/v1 kind: OperatorGroup metadata: name: kubevirt-hyperconverged-group namespace: openshift-cnv spec: targetNamespaces: - openshift-cnv --- apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: hco-operatorhub namespace: openshift-cnv spec: source: redhat-operators sourceNamespace: openshift-marketplace name: kubevirt-hyperconverged startingCSV: kubevirt-hyperconverged-operator.v2.6.10 channel: "stable" 1 1 Using the stable channel ensures that you install the version of OpenShift Virtualization that is compatible with your OpenShift Container Platform version. Create the required Namespace , OperatorGroup , and Subscription objects for OpenShift Virtualization by running the following command: $ oc apply -f <file name>.yaml 3.3.5. Deploying the OpenShift Virtualization Operator by using the CLI You can deploy the OpenShift Virtualization Operator by using the oc CLI. Prerequisites An active subscription to the OpenShift Virtualization catalog in the openshift-cnv namespace. Procedure Create a YAML file that contains the following manifest: apiVersion: hco.kubevirt.io/v1beta1 kind: HyperConverged metadata: name: kubevirt-hyperconverged namespace: openshift-cnv spec: Deploy the OpenShift Virtualization Operator by running the following command: $ oc apply -f <file_name>.yaml Verification Ensure that OpenShift Virtualization deployed successfully by watching the PHASE of the cluster service version (CSV) in the openshift-cnv namespace. Run the following command: $ watch oc get csv -n openshift-cnv The following output displays if deployment was successful: Example output NAME DISPLAY VERSION REPLACES PHASE kubevirt-hyperconverged-operator.v2.6.10 OpenShift Virtualization 2.6.10 Succeeded 3.3.6. Next steps You might want to additionally configure the following components: The KubeMacPool component provides a MAC address pool service for virtual machine NICs in designated namespaces. Enable a MAC address pool in a namespace by applying the KubeMacPool label to that namespace. The hostpath provisioner is a local storage provisioner designed for OpenShift Virtualization. If you want to configure local storage for virtual machines, you must enable the hostpath provisioner first. 3.4. Installing the virtctl client The virtctl client is a command-line utility for managing OpenShift Virtualization resources. It is available for Linux, macOS, and Windows distributions. You can install the virtctl client from the OpenShift Virtualization web console or by enabling the OpenShift Virtualization repository and installing the kubevirt-virtctl package. 3.4.1. Installing the virtctl client from the web console You can download the virtctl client from the Red Hat Customer Portal, which is linked from the Command Line Tools page in the OpenShift Container Platform web console. Prerequisites You must have an activated OpenShift Container Platform subscription to access the download page on the Customer Portal. Procedure Access the Customer Portal by clicking the icon, which is in the upper-right corner of the web console, and selecting Command Line Tools . Ensure you have the appropriate version for your cluster selected from the Version: list. Download the virtctl client for your distribution. All downloads are in tar.gz format. Extract the tarball.
The following CLI command extracts it into the same directory as the tarball and is applicable for all distributions: $ tar -xvf <virtctl-version-distribution.arch>.tar.gz For Linux and macOS: Navigate the extracted folder hierarchy and make the virtctl binary executable: $ chmod +x <virtctl-file-name> Move the virtctl binary to a directory on your PATH. To check your path, run: $ echo $PATH For Windows users: Navigate the extracted folder hierarchy and double-click the virtctl executable file to install the client. 3.4.2. Enabling OpenShift Virtualization repositories Red Hat offers OpenShift Virtualization repositories for both Red Hat Enterprise Linux 8 and Red Hat Enterprise Linux 7: Red Hat Enterprise Linux 8 repository: cnv-2.6-for-rhel-8-x86_64-rpms Red Hat Enterprise Linux 7 repository: rhel-7-server-cnv-2.6-rpms The process for enabling the repository in subscription-manager is the same on both platforms. Procedure Enable the appropriate OpenShift Virtualization repository for your system by running the following command: # subscription-manager repos --enable <repository> 3.4.3. Installing the virtctl client Install the virtctl client from the kubevirt-virtctl package. Procedure Install the kubevirt-virtctl package: # yum install kubevirt-virtctl 3.4.4. Additional resources Using the CLI tools for OpenShift Virtualization. 3.5. Uninstalling OpenShift Virtualization using the web console You can uninstall OpenShift Virtualization by using the OpenShift Container Platform web console . 3.5.1. Prerequisites You must have OpenShift Virtualization 2.6 installed. You must delete all virtual machines , virtual machine instances , and data volumes . Important Attempting to uninstall OpenShift Virtualization without deleting these objects results in failure. 3.5.2. Deleting the OpenShift Virtualization Operator Deployment custom resource To uninstall OpenShift Virtualization, you must first delete the OpenShift Virtualization Operator Deployment custom resource. Prerequisites Create the OpenShift Virtualization Operator Deployment custom resource. Procedure From the OpenShift Container Platform web console, select openshift-cnv from the Projects list. Navigate to the Operators Installed Operators page. Click OpenShift Virtualization . Click the OpenShift Virtualization Operator Deployment tab. Click the Options menu in the row containing the kubevirt-hyperconverged custom resource. In the expanded menu, click Delete HyperConverged Cluster . Click Delete in the confirmation window. Navigate to the Workloads Pods page to verify that only the Operator pods are running. Open a terminal window and clean up the remaining resources by running the following command: $ oc delete apiservices v1alpha3.subresources.kubevirt.io -n openshift-cnv 3.5.3. Deleting the OpenShift Virtualization catalog subscription To finish uninstalling OpenShift Virtualization, delete the OpenShift Virtualization catalog subscription. Prerequisites An active subscription to the OpenShift Virtualization catalog Procedure Navigate to the Operators OperatorHub page. Search for OpenShift Virtualization and then select it. Click Uninstall . Note You can now delete the openshift-cnv namespace. 3.5.4. Deleting a namespace using the web console You can delete a namespace by using the OpenShift Container Platform web console. Note If you do not have permissions to delete the namespace, the Delete Namespace option is not available. Procedure Navigate to Administration Namespaces .
Locate the namespace that you want to delete in the list of namespaces. On the far right side of the namespace listing, select Delete Namespace from the Options menu . When the Delete Namespace pane opens, enter the name of the namespace that you want to delete in the field. Click Delete . 3.6. Uninstalling OpenShift Virtualization using the CLI You can uninstall OpenShift Virtualization by using the OpenShift Container Platform CLI . 3.6.1. Prerequisites You must have OpenShift Virtualization 2.6 installed. You must delete all virtual machines , virtual machine instances , and data volumes . Important Attempting to uninstall OpenShift Virtualization without deleting these objects results in failure. 3.6.2. Deleting OpenShift Virtualization You can delete OpenShift Virtualization by using the CLI. Prerequisites Install the OpenShift CLI ( oc ). Access to an OpenShift Virtualization cluster using an account with cluster-admin permissions. Note When you delete the subscription of the OpenShift Virtualization operator in the OLM by using the CLI, the ClusterServiceVersion (CSV) object is not deleted from the cluster. To completely uninstall OpenShift Virtualization, you must explicitly delete the CSV. Procedure Delete the HyperConverged custom resource: $ oc delete HyperConverged kubevirt-hyperconverged -n openshift-cnv Delete the subscription of the OpenShift Virtualization operator in the Operator Lifecycle Manager (OLM): $ oc delete subscription kubevirt-hyperconverged -n openshift-cnv Set the cluster service version (CSV) name for OpenShift Virtualization as an environment variable: $ CSV_NAME=$(oc get csv -n openshift-cnv -o=custom-columns=:metadata.name) Delete the CSV from the OpenShift Virtualization cluster by specifying the CSV name from the previous step: $ oc delete csv ${CSV_NAME} -n openshift-cnv OpenShift Virtualization is uninstalled when a confirmation message indicates that the CSV was deleted successfully: Example output clusterserviceversion.operators.coreos.com "kubevirt-hyperconverged-operator.v2.6.10" deleted | [
"Memory overhead per infrastructure node ~ 150 MiB",
"Memory overhead per worker node ~ 360 MiB",
"Memory overhead per virtual machine ~ (1.002 * requested memory) + 146 MiB + 8 MiB * (number of vCPUs) \\ 1 + 16 MiB * (number of graphics devices) 2",
"CPU overhead for infrastructure nodes ~ 4 cores",
"CPU overhead for worker nodes ~ 2 cores + CPU overhead per virtual machine",
"Aggregated storage overhead per node ~ 10 GiB",
"apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: hco-operatorhub namespace: openshift-cnv spec: source: redhat-operators sourceNamespace: openshift-marketplace name: kubevirt-hyperconverged startingCSV: kubevirt-hyperconverged-operator.v2.6.10 channel: \"stable\" config: 1",
"apiVersion: hco.kubevirt.io/v1beta1 kind: HyperConverged metadata: name: kubevirt-hyperconverged namespace: openshift-cnv spec: infra: nodePlacement: 1 workloads: nodePlacement:",
"apiVersion: hostpathprovisioner.kubevirt.io/v1beta1 kind: HostPathProvisioner metadata: name: hostpath-provisioner spec: imagePullPolicy: IfNotPresent pathConfig: path: \"</path/to/backing/directory>\" useNamingPrefix: false workload: 1",
"apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: hco-operatorhub namespace: openshift-cnv spec: source: redhat-operators sourceNamespace: openshift-marketplace name: kubevirt-hyperconverged startingCSV: kubevirt-hyperconverged-operator.v2.6.10 channel: \"stable\" config: nodeSelector: example.io/example-infra-key: example-infra-value",
"apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: hco-operatorhub namespace: openshift-cnv spec: source: redhat-operators sourceNamespace: openshift-marketplace name: kubevirt-hyperconverged startingCSV: kubevirt-hyperconverged-operator.v2.6.10 channel: \"stable\" config: tolerations: - key: \"key\" operator: \"Equal\" value: \"virtualization\" effect: \"NoSchedule\"",
"apiVersion: hco.kubevirt.io/v1beta1 kind: HyperConverged metadata: name: kubevirt-hyperconverged namespace: openshift-cnv spec: infra: nodePlacement: nodeSelector: example.io/example-infra-key: example-infra-value workloads: nodePlacement: nodeSelector: example.io/example-workloads-key: example-workloads-value",
"apiVersion: hco.kubevirt.io/v1beta1 kind: HyperConverged metadata: name: kubevirt-hyperconverged namespace: openshift-cnv spec: infra: nodePlacement: affinity: nodeAffinity: requiredDuringSchedulingIgnoredDuringExecution: nodeSelectorTerms: - matchExpressions: - key: example.io/example-infra-key operator: In values: - example-infra-value workloads: nodePlacement: affinity: nodeAffinity: requiredDuringSchedulingIgnoredDuringExecution: nodeSelectorTerms: - matchExpressions: - key: example.io/example-workloads-key operator: In values: - example-workloads-value preferredDuringSchedulingIgnoredDuringExecution: - weight: 1 preference: matchExpressions: - key: example.io/num-cpus operator: gt values: - 8",
"apiVersion: hco.kubevirt.io/v1beta1 kind: HyperConverged metadata: name: kubevirt-hyperconverged namespace: openshift-cnv spec: workloads: nodePlacement: tolerations: - key: \"key\" operator: \"Equal\" value: \"virtualization\" effect: \"NoSchedule\"",
"apiVersion: hostpathprovisioner.kubevirt.io/v1beta1 kind: HostPathProvisioner metadata: name: hostpath-provisioner spec: imagePullPolicy: IfNotPresent pathConfig: path: \"</path/to/backing/directory>\" useNamingPrefix: false workload: nodeSelector: example.io/example-workloads-key: example-workloads-value",
"apiVersion: v1 kind: Namespace metadata: name: openshift-cnv --- apiVersion: operators.coreos.com/v1 kind: OperatorGroup metadata: name: kubevirt-hyperconverged-group namespace: openshift-cnv spec: targetNamespaces: - openshift-cnv --- apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: hco-operatorhub namespace: openshift-cnv spec: source: redhat-operators sourceNamespace: openshift-marketplace name: kubevirt-hyperconverged startingCSV: kubevirt-hyperconverged-operator.v2.6.10 channel: \"stable\" 1",
"oc apply -f <file name>.yaml",
"apiVersion: hco.kubevirt.io/v1beta1 kind: HyperConverged metadata: name: kubevirt-hyperconverged namespace: openshift-cnv spec:",
"oc apply -f <file_name>.yaml",
"watch oc get csv -n openshift-cnv",
"NAME DISPLAY VERSION REPLACES PHASE kubevirt-hyperconverged-operator.v2.6.10 OpenShift Virtualization 2.6.10 Succeeded",
"tar -xvf <virtctl-version-distribution.arch>.tar.gz",
"chmod +x <virtctl-file-name>",
"echo USDPATH",
"subscription-manager repos --enable <repository>",
"yum install kubevirt-virtctl",
"oc delete apiservices v1alpha3.subresources.kubevirt.io -n openshift-cnv",
"oc delete HyperConverged kubevirt-hyperconverged -n openshift-cnv",
"oc delete subscription kubevirt-hyperconverged -n openshift-cnv",
"CSV_NAME=USD(oc get csv -n openshift-cnv -o=custom-columns=:metadata.name)",
"oc delete csv USD{CSV_NAME} -n openshift-cnv",
"clusterserviceversion.operators.coreos.com \"kubevirt-hyperconverged-operator.v2.6.10\" deleted"
]
| https://docs.redhat.com/en/documentation/openshift_container_platform/4.7/html/openshift_virtualization/installing-openshift-virtualization |
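The node placement examples in this chapter assume that the target nodes already carry the example labels and the example taint. The following is a minimal sketch of how such nodes might be prepared with the oc client; the node name host01 is a placeholder, and the label keys, values, and taint are only the illustrative ones reused from the examples above, not names that OpenShift Virtualization requires:
# Label a node so that it matches the example nodeSelector and nodeAffinity rules
oc label node host01 example.io/example-infra-key=example-infra-value
oc label node host01 example.io/example-workloads-key=example-workloads-value
# Taint a node so that only pods with the matching toleration are scheduled on it
oc adm taint nodes host01 key=virtualization:NoSchedule
# Verify the labels and taints that are now set on the node
oc describe node host01 | grep -A 5 -E 'Labels|Taints'
After the labels and taint are in place, the nodeSelector, affinity, and tolerations examples shown earlier select exactly these nodes.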
Chapter 40. Using Ansible to manage the replication topology in IdM | Chapter 40. Using Ansible to manage the replication topology in IdM You can maintain multiple Identity Management (IdM) servers and let them replicate each other for redundancy purposes to mitigate or prevent server loss. For example, if one server fails, the other servers keep providing services to the domain. You can also recover the lost server by creating a new replica based on one of the remaining servers. Data stored on an IdM server is replicated based on replication agreements: when two servers have a replication agreement configured, they share their data. The data that is replicated is stored in the topology suffixes . When two replicas have a replication agreement between their suffixes, the suffixes form a topology segment . This chapter describes how to use Ansible to manage IdM replication agreements, topology segments, and topology suffixes. 40.1. Using Ansible to ensure a replication agreement exists in IdM Data stored on an Identity Management (IdM) server is replicated based on replication agreements: when two servers have a replication agreement configured, they share their data. Replication agreements are always bilateral: the data is replicated from the first replica to the other one as well as from the other replica to the first one. Follow this procedure to use an Ansible playbook to ensure that a replication agreement of the domain type exists between server.idm.example.com and replica.idm.example.com . Prerequisites Ensure that you understand the recommendations for designing your IdM topology listed in Guidelines for connecting IdM replicas in a topology . You have configured your Ansible control node to meet the following requirements: You are using Ansible version 2.13 or later. You have installed the ansible-freeipa package. The example assumes that in the ~/ MyPlaybooks / directory, you have created an Ansible inventory file with the fully-qualified domain name (FQDN) of the IdM server. The example assumes that the secret.yml Ansible vault stores your ipaadmin_password and that you have access to a file that stores the password protecting the secret.yml file. The target node, that is the node on which the ansible-freeipa module is executed, is part of the IdM domain as an IdM client, server or replica. Procedure Navigate to your ~/ MyPlaybooks / directory: Copy the add-topologysegment.yml Ansible playbook file provided by the ansible-freeipa package: Open the add-topologysegment-copy.yml file for editing. Adapt the file by setting the following variables in the ipatopologysegment task section: Indicate that the value of the ipaadmin_password variable is defined in the secret.yml Ansible vault file. Set the suffix variable to either domain or ca , depending on what type of segment you want to add. Set the left variable to the name of the IdM server that you want to be the left node of the replication agreement. Set the right variable to the name of the IdM server that you want to be the right node of the replication agreement. Ensure that the state variable is set to present . This is the modified Ansible playbook file for the current example: Save the file. Run the Ansible playbook. 
Specify the playbook file, the file storing the password protecting the secret.yml file, and the inventory file: Additional resources Explaining Replication Agreements, Topology Suffixes, and Topology Segments /usr/share/doc/ansible-freeipa/README-topology.md Sample playbooks in /usr/share/doc/ansible-freeipa/playbooks/topology 40.2. Using Ansible to ensure replication agreements exist between multiple IdM replicas Data stored on an Identity Management (IdM) server is replicated based on replication agreements: when two servers have a replication agreement configured, they share their data. Replication agreements are always bilateral: the data is replicated from the first replica to the other one as well as from the other replica to the first one. Follow this procedure to ensure replication agreements exist between multiple pairs of replicas in IdM. Prerequisites Ensure that you understand the recommendations for designing your IdM topology listed in Connecting the replicas in a topology . You have configured your Ansible control node to meet the following requirements: You are using Ansible version 2.13 or later. You have installed the ansible-freeipa package. The example assumes that in the ~/ MyPlaybooks / directory, you have created an Ansible inventory file with the fully-qualified domain name (FQDN) of the IdM server. The example assumes that the secret.yml Ansible vault stores your ipaadmin_password and that you have access to a file that stores the password protecting the secret.yml file. The target node, that is the node on which the ansible-freeipa module is executed, is part of the IdM domain as an IdM client, server or replica. Procedure Navigate to your ~/ MyPlaybooks / directory: Copy the add-topologysegments.yml Ansible playbook file provided by the ansible-freeipa package: Open the add-topologysegments-copy.yml file for editing. Adapt the file by setting the following variables in the vars section: Indicate that the value of the ipaadmin_password variable is defined in the secret.yml Ansible vault file. For every topology segment, add a line in the ipatopology_segments section and set the following variables: Set the suffix variable to either domain or ca , depending on what type of segment you want to add. Set the left variable to the name of the IdM server that you want to be the left node of the replication agreement. Set the right variable to the name of the IdM server that you want to be the right node of the replication agreement. In the tasks section of the add-topologysegments-copy.yml file, ensure that the state variable is set to present . This is the modified Ansible playbook file for the current example: Save the file. Run the Ansible playbook. Specify the playbook file, the file storing the password protecting the secret.yml file, and the inventory file: Additional resources Explaining Replication Agreements, Topology Suffixes, and Topology Segments /usr/share/doc/ansible-freeipa/README-topology.md Sample playbooks in /usr/share/doc/ansible-freeipa/playbooks/topology 40.3. Using Ansible to check if a replication agreement exists between two replicas Data stored on an Identity Management (IdM) server is replicated based on replication agreements: when two servers have a replication agreement configured, they share their data. Replication agreements are always bilateral: the data is replicated from the first replica to the other one as well as from the other replica to the first one. 
Follow this procedure to verify that replication agreements exist between multiple pairs of replicas in IdM. In contrast to Using Ansible to ensure a replication agreement exists in IdM , this procedure does not modify the existing configuration. Prerequisites Ensure that you understand the recommendations for designing your Identity Management (IdM) topology listed in Connecting the replicas in a topology . You have configured your Ansible control node to meet the following requirements: You are using Ansible version 2.13 or later. You have installed the ansible-freeipa package. The example assumes that in the ~/ MyPlaybooks / directory, you have created an Ansible inventory file with the fully-qualified domain name (FQDN) of the IdM server. The example assumes that the secret.yml Ansible vault stores your ipaadmin_password and that you have access to a file that stores the password protecting the secret.yml file. The target node, that is the node on which the ansible-freeipa module is executed, is part of the IdM domain as an IdM client, server or replica. Procedure Navigate to your ~/ MyPlaybooks / directory: Copy the check-topologysegments.yml Ansible playbook file provided by the ansible-freeipa package: Open the check-topologysegments-copy.yml file for editing. Adapt the file by setting the following variables in the vars section: Indicate that the value of the ipaadmin_password variable is defined in the secret.yml Ansible vault file. For every topology segment, add a line in the ipatopology_segments section and set the following variables: Set the suffix variable to either domain or ca , depending on the type of segment you are checking. Set the left variable to the name of the IdM server that you want to be the left node of the replication agreement. Set the right variable to the name of the IdM server that you want to be the right node of the replication agreement. In the tasks section of the check-topologysegments-copy.yml file, ensure that the state variable is set to checked . This is the modified Ansible playbook file for the current example: Save the file. Run the Ansible playbook. Specify the playbook file, the file storing the password protecting the secret.yml file, and the inventory file: Additional resources Explaining Replication Agreements, Topology Suffixes, and Topology Segments /usr/share/doc/ansible-freeipa/README-topology.md Sample playbooks in /usr/share/doc/ansible-freeipa/playbooks/topology 40.4. Using Ansible to verify that a topology suffix exists in IdM In the context of replication agreements in Identity Management (IdM), topology suffixes store the data that is replicated. IdM supports two types of topology suffixes: domain and ca . Each suffix represents a separate back end, a separate replication topology. When a replication agreement is configured, it joins two topology suffixes of the same type on two different servers. The domain suffix contains all domain-related data, such as data about users, groups, and policies. The ca suffix contains data for the Certificate System component. It is only present on servers with a certificate authority (CA) installed. Follow this procedure to use an Ansible playbook to ensure that a topology suffix exists in IdM. The example describes how to ensure that the domain suffix exists in IdM. Prerequisites You have configured your Ansible control node to meet the following requirements: You are using Ansible version 2.13 or later. You have installed the ansible-freeipa package.
The example assumes that in the ~/ MyPlaybooks / directory, you have created an Ansible inventory file with the fully-qualified domain name (FQDN) of the IdM server. The example assumes that the secret.yml Ansible vault stores your ipaadmin_password and that you have access to a file that stores the password protecting the secret.yml file. The target node, that is the node on which the ansible-freeipa module is executed, is part of the IdM domain as an IdM client, server or replica. Procedure Navigate to your ~/ MyPlaybooks / directory: Copy the verify-topologysuffix.yml Ansible playbook file provided by the ansible-freeipa package: Open the verify-topologysuffix-copy.yml Ansible playbook file for editing. Adapt the file by setting the following variables in the ipatopologysuffix section: Indicate that the value of the ipaadmin_password variable is defined in the secret.yml Ansible vault file. Set the suffix variable to domain . If you are verifying the presence of the ca suffix, set the variable to ca . Ensure that the state variable is set to verified . No other option is possible. This is the modified Ansible playbook file for the current example: Save the file. Run the Ansible playbook. Specify the playbook file, the file storing the password protecting the secret.yml file, and the inventory file: Additional resources Explaining Replication Agreements, Topology Suffixes, and Topology Segments /usr/share/doc/ansible-freeipa/README-topology.md Sample playbooks in /usr/share/doc/ansible-freeipa/playbooks/topology 40.5. Using Ansible to reinitialize an IdM replica If a replica has been offline for a long period of time or its database has been corrupted, you can reinitialize it. Reinitialization refreshes the replica with an updated set of data. Reinitialization can, for example, be used if an authoritative restore from backup is required. Note In contrast to replication updates, during which replicas only send changed entries to each other, reinitialization refreshes the whole database. The local host on which you run the command is the reinitialized replica. To specify the replica from which the data is obtained, use the direction option. Follow this procedure to use an Ansible playbook to reinitialize the domain data on replica.idm.example.com from server.idm.example.com . Prerequisites You have configured your Ansible control node to meet the following requirements: You are using Ansible version 2.13 or later. You have installed the ansible-freeipa package. The example assumes that in the ~/ MyPlaybooks / directory, you have created an Ansible inventory file with the fully-qualified domain name (FQDN) of the IdM server. The example assumes that the secret.yml Ansible vault stores your ipaadmin_password and that you have access to a file that stores the password protecting the secret.yml file. The target node, that is the node on which the ansible-freeipa module is executed, is part of the IdM domain as an IdM client, server or replica. Procedure Navigate to your ~/ MyPlaybooks / directory: Copy the reinitialize-topologysegment.yml Ansible playbook file provided by the ansible-freeipa package: Open the reinitialize-topologysegment-copy.yml file for editing. Adapt the file by setting the following variables in the ipatopologysegment section: Indicate that the value of the ipaadmin_password variable is defined in the secret.yml Ansible vault file. Set the suffix variable to domain . If you are reinitializing the ca data, set the variable to ca . 
Set the left variable to the left node of the replication agreement. Set the right variable to the right node of the replication agreement. Set the direction variable to the direction in which the data flows during reinitialization. The left-to-right direction means that data flows from the left node to the right node. Ensure that the state variable is set to reinitialized . This is the modified Ansible playbook file for the current example: Save the file. Run the Ansible playbook. Specify the playbook file, the file storing the password protecting the secret.yml file, and the inventory file: Additional resources Explaining Replication Agreements, Topology Suffixes, and Topology Segments /usr/share/doc/ansible-freeipa/README-topology.md Sample playbooks in /usr/share/doc/ansible-freeipa/playbooks/topology 40.6. Using Ansible to ensure a replication agreement is absent in IdM Data stored on an Identity Management (IdM) server is replicated based on replication agreements: when two servers have a replication agreement configured, they share their data. Replication agreements are always bilateral: the data is replicated from the first replica to the other one as well as from the other replica to the first one. Follow this procedure to ensure a replication agreement between two replicas does not exist in IdM. The example describes how to ensure a replication agreement of the domain type does not exist between the replica01.idm.example.com and replica02.idm.example.com IdM servers. Prerequisites You understand the recommendations for designing your IdM topology listed in Connecting the replicas in a topology . You have configured your Ansible control node to meet the following requirements: You are using Ansible version 2.13 or later. You have installed the ansible-freeipa package. The example assumes that in the ~/ MyPlaybooks / directory, you have created an Ansible inventory file with the fully-qualified domain name (FQDN) of the IdM server. The example assumes that the secret.yml Ansible vault stores your ipaadmin_password and that you have access to a file that stores the password protecting the secret.yml file. The target node, that is the node on which the ansible-freeipa module is executed, is part of the IdM domain as an IdM client, server or replica. Procedure Navigate to your ~/ MyPlaybooks / directory: Copy the delete-topologysegment.yml Ansible playbook file provided by the ansible-freeipa package: Open the delete-topologysegment-copy.yml file for editing. Adapt the file by setting the following variables in the ipatopologysegment task section: Indicate that the value of the ipaadmin_password variable is defined in the secret.yml Ansible vault file. Set the suffix variable to domain . Alternatively, if you are ensuring that the ca data are not replicated between the left and right nodes, set the variable to ca . Set the left variable to the name of the IdM server that is the left node of the replication agreement. Set the right variable to the name of the IdM server that is the right node of the replication agreement. Ensure that the state variable is set to absent . This is the modified Ansible playbook file for the current example: Save the file. Run the Ansible playbook.
Specify the playbook file, the file storing the password protecting the secret.yml file, and the inventory file: Additional resources Explaining Replication Agreements, Topology Suffixes, and Topology Segments /usr/share/doc/ansible-freeipa/README-topology.md Sample playbooks in /usr/share/doc/ansible-freeipa/playbooks/topology 40.7. Additional resources Planning the replica topology . Installing an IdM replica . | [
"cd ~/ MyPlaybooks /",
"cp /usr/share/doc/ansible-freeipa/playbooks/topology/add-topologysegment.yml add-topologysegment-copy.yml",
"--- - name: Playbook to handle topologysegment hosts: ipaserver vars_files: - /home/user_name/MyPlaybooks/secret.yml tasks: - name: Add topology segment ipatopologysegment: ipaadmin_password: \"{{ ipaadmin_password }}\" suffix: domain left: server.idm.example.com right: replica.idm.example.com state: present",
"ansible-playbook --vault-password-file=password_file -v -i inventory add-topologysegment-copy.yml",
"cd ~/ MyPlaybooks /",
"cp /usr/share/doc/ansible-freeipa/playbooks/topology/add-topologysegments.yml add-topologysegments-copy.yml",
"--- - name: Add topology segments hosts: ipaserver gather_facts: false vars: ipaadmin_password: \"{{ ipaadmin_password }}\" ipatopology_segments: - {suffix: domain, left: replica1.idm.example.com , right: replica2.idm.example.com } - {suffix: domain, left: replica2.idm.example.com , right: replica3.idm.example.com } - {suffix: domain, left: replica3.idm.example.com , right: replica4.idm.example.com } - {suffix: domain+ca, left: replica4.idm.example.com , right: replica1.idm.example.com } vars_files: - /home/user_name/MyPlaybooks/secret.yml tasks: - name: Add topology segment ipatopologysegment: ipaadmin_password: \"{{ ipaadmin_password }}\" suffix: \"{{ item.suffix }}\" name: \"{{ item.name | default(omit) }}\" left: \"{{ item.left }}\" right: \"{{ item.right }}\" state: present loop: \"{{ ipatopology_segments | default([]) }}\"",
"ansible-playbook --vault-password-file=password_file -v -i inventory add-topologysegments-copy.yml",
"cd ~/ MyPlaybooks /",
"cp /usr/share/doc/ansible-freeipa/playbooks/topology/check-topologysegments.yml check-topologysegments-copy.yml",
"--- - name: Add topology segments hosts: ipaserver gather_facts: false vars: ipaadmin_password: \"{{ ipaadmin_password }}\" ipatopology_segments: - {suffix: domain, left: replica1.idm.example.com, right: replica2.idm.example.com } - {suffix: domain, left: replica2.idm.example.com , right: replica3.idm.example.com } - {suffix: domain, left: replica3.idm.example.com , right: replica4.idm.example.com } - {suffix: domain+ca, left: replica4.idm.example.com , right: replica1.idm.example.com } vars_files: - /home/user_name/MyPlaybooks/secret.yml tasks: - name: Check topology segment ipatopologysegment: ipaadmin_password: \"{{ ipaadmin_password }}\" suffix: \"{{ item.suffix }}\" name: \"{{ item.name | default(omit) }}\" left: \"{{ item.left }}\" right: \"{{ item.right }}\" state: checked loop: \"{{ ipatopology_segments | default([]) }}\"",
"ansible-playbook --vault-password-file=password_file -v -i inventory check-topologysegments-copy.yml",
"cd ~/ MyPlaybooks /",
"cp /usr/share/doc/ansible-freeipa/playbooks/topology/ verify-topologysuffix.yml verify-topologysuffix-copy.yml",
"--- - name: Playbook to handle topologysuffix hosts: ipaserver vars_files: - /home/user_name/MyPlaybooks/secret.yml tasks: - name: Verify topology suffix ipatopologysuffix: ipaadmin_password: \"{{ ipaadmin_password }}\" suffix: domain state: verified",
"ansible-playbook --vault-password-file=password_file -v -i inventory verify-topologysuffix-copy.yml",
"cd ~/ MyPlaybooks /",
"cp /usr/share/doc/ansible-freeipa/playbooks/topology/reinitialize-topologysegment.yml reinitialize-topologysegment-copy.yml",
"--- - name: Playbook to handle topologysegment hosts: ipaserver vars_files: - /home/user_name/MyPlaybooks/secret.yml tasks: - name: Reinitialize topology segment ipatopologysegment: ipaadmin_password: \"{{ ipaadmin_password }}\" suffix: domain left: server.idm.example.com right: replica.idm.example.com direction: left-to-right state: reinitialized",
"ansible-playbook --vault-password-file=password_file -v -i inventory reinitialize-topologysegment-copy.yml",
"cd ~/ MyPlaybooks /",
"cp /usr/share/doc/ansible-freeipa/playbooks/topology/delete-topologysegment.yml delete-topologysegment-copy.yml",
"--- - name: Playbook to handle topologysegment hosts: ipaserver vars_files: - /home/user_name/MyPlaybooks/secret.yml tasks: - name: Delete topology segment ipatopologysegment: ipaadmin_password: \"{{ ipaadmin_password }}\" suffix: domain left: replica01.idm.example.com right: replica02.idm.example.com: state: absent",
"ansible-playbook --vault-password-file=password_file -v -i inventory delete-topologysegment-copy.yml"
]
| https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/8/html/configuring_and_managing_identity_management/using-ansible-to-manage-the-replication-topology-in-idm_configuring-and-managing-idm |
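The procedures in this chapter all assume that an inventory file and a password-protected secret.yml Ansible vault already exist in the ~/MyPlaybooks/ directory. The following is a minimal sketch of creating both; the ipaserver group name matches the hosts: value used in the playbooks, while the host name, the password_file name, and the Secret123 password are purely illustrative:
# Create the working directory and an inventory file listing the IdM server
mkdir -p ~/MyPlaybooks
cd ~/MyPlaybooks
cat > inventory << EOF
[ipaserver]
server.idm.example.com
EOF
# Store the vault password in a file, then create the encrypted vault
echo "vault_password" > password_file
ansible-vault create --vault-password-file=password_file secret.yml
# In the editor that ansible-vault opens, add the single line:
# ipaadmin_password: Secret123
The same password_file is then passed to ansible-playbook with the --vault-password-file option, as shown in the procedures above.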
14.6. Customizing Web Services | 14.6. Customizing Web Services All of the subsystems (with the exception of the TKS) have some kind of web-based services page for agents and some for other roles, like administrators or end entities. These web-based services pages use basic HTML and JavaScript, which can be customized to use different colors, logos, and other design elements to fit in with an existing site or intranet. 14.6.1. Customizing Subsystem Web Applications Each PKI subsystem has a corresponding web application, which contains: HTML pages containing text, JavaScript code, page layout, CSS formatting, and so on A web.xml file, which defines servlets, paths, security constraints, and other settings Links to PKI libraries. The subsystem web applications are deployed using context files located in the /var/lib/pki/pki-tomcat/conf/Catalina/localhost/ directory, for example, the ca.xml file: The docBase points to the location of the default web application directory, /usr/share/pki/ . To customize the web application, copy the web application directory into the instance's webapps directory: Then change the docBase to point to the custom web application directory relative to the webapps directory: The change will be effective immediately without the need to restart the server. To remove the custom web application, simply revert the docBase and delete the custom web application directory: 14.6.2. Customizing the Web UI Theme The subsystem web applications in the same instance share the same theme, which contains: CSS files, which determine the global appearance Image files, including logos, icons, and other graphics Branding properties, which determine the page title, logo link, title color, and other branding elements. The Web UI theme is deployed using the pki.xml context file in the /var/lib/pki/pki-tomcat/conf/Catalina/localhost/ directory: The docBase points to the location of the default theme directory, /usr/share/pki/ . To customize the theme, copy the default theme directory into the pki directory in the instance's webapps directory: Then change the docBase to point to the custom theme directory relative to the webapps directory: The change will be effective immediately without the need to restart the server. To remove the custom theme, simply revert the docBase and delete the custom theme directory: 14.6.3. Customizing TPS Token State Labels The default token state labels are stored in the /usr/share/pki/tps/conf/token-states.properties file and described in Section 2.5.2.4.1.4, "Token State and Transition Labels" . To customize the labels, copy the file into the instance directory: The change will be effective immediately without the need to restart the server. To remove the customized labels, simply delete the customized file: | [
"<Context docBase=\"/usr/share/pki/ca/webapps/ca\" crossContext=\"true\" allowLinking=\"true\"> </Context>",
"cp -r /usr/share/pki/ca/webapps/ca /var/lib/pki/pki-tomcat/webapps",
"<Context docBase=\"ca\" crossContext=\"true\" allowLinking=\"true\"> </Context>",
"rm -rf /var/lib/pki/pki-tomcat/webapps/ca",
"<Context docBase=\"/usr/share/pki/common-ui\" crossContext=\"true\" allowLinking=\"true\"> </Context>",
"cp -r /usr/share/pki/common-ui /var/lib/pki/pki-tomcat/webapps/pki",
"<Context docBase=\"pki\" crossContext=\"true\" allowLinking=\"true\"> </Context>",
"rm -rf /var/lib/pki/pki-tomcat/webapps/pki",
"cp /usr/share/pki/tps/conf/token-states.properties /var/lib/pki/pki-tomcat/tps/conf",
"rm /var/lib/pki/pki-tomcat/tps/conf/token-states.properties"
]
| https://docs.redhat.com/en/documentation/red_hat_certificate_system/10/html/planning_installation_and_deployment_guide/customizing_web_services |
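A quick way to confirm which copy of a web application or theme an instance is actually serving is to inspect the docBase value in the corresponding context file and to list the copied directories. This is only a small verification sketch using standard shell tools, with the paths taken from the examples above:
# Check whether the CA web application context still points at the default copy
grep docBase /var/lib/pki/pki-tomcat/conf/Catalina/localhost/ca.xml
# Check which theme directory the instance serves
grep docBase /var/lib/pki/pki-tomcat/conf/Catalina/localhost/pki.xml
# After copying, confirm that the customized directories exist under webapps
ls /var/lib/pki/pki-tomcat/webapps
As the sections above note, the changes take effect without restarting the server, so the output of these commands reflects the configuration that is currently in effect.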
Chapter 10. Management of Ceph object gateway using the Ceph Orchestrator | Chapter 10. Management of Ceph object gateway using the Ceph Orchestrator As a storage administrator, you can deploy Ceph object gateway using the command line interface or by using the service specification. You can also configure multi-site object gateways, and remove the Ceph object gateway using the Ceph Orchestrator. Cephadm deploys Ceph object gateway as a collection of daemons that manages a single-cluster deployment or a particular realm and zone in a multisite deployment. Note With Cephadm, the object gateway daemons are configured using the monitor configuration database instead of a ceph.conf or the command line. If that configuration is not already in the client.rgw section, then the object gateway daemons will start up with default settings and bind to the port 80 . Note The .default.rgw.buckets.index pool is created only after the bucket is created in Ceph Object Gateway, while the .default.rgw.buckets.data pool is created after the data is uploaded to the bucket. This section covers the following administrative tasks: Deploying the Ceph object gateway using the command line interface . Deploying the Ceph object gateway using the service specification . Deploying a multi-site Ceph object gateway using the Ceph Orchestrator . Removing the Ceph object gateway using the Ceph Orchestrator . Prerequisites A running Red Hat Ceph Storage cluster. Root-level access to all the nodes. Hosts are added to the cluster. All the managers, monitors, and OSDs are deployed in the storage cluster. 10.1. Deploying the Ceph Object Gateway using the command line interface Using the Ceph Orchestrator, you can deploy the Ceph Object Gateway with the ceph orch command in the command line interface. Prerequisites A running Red Hat Ceph Storage cluster. Root-level access to all the nodes. Hosts are added to the cluster. All manager, monitor and OSD daemons are deployed. Procedure Log into the Cephadm shell: Example You can deploy the Ceph object gateway daemons in three different ways: Method 1 Create realm, zone group, zone, and then use the placement specification with the host name: Create a realm: Syntax Example Create a zone group: Syntax Example Create a zone: Syntax Example Commit the changes: Syntax Example Run the ceph orch apply command: Syntax Example Method 2 Use an arbitrary service name to deploy two Ceph Object Gateway daemons for a single cluster deployment: Syntax Example Method 3 Use an arbitrary service name on a labeled set of hosts: Syntax Note NUMBER_OF_DAEMONS controls the number of Ceph object gateways deployed on each host. To achieve the highest performance without incurring an additional cost, set this value to 2. Example Verification List the service: Example List the hosts, daemons, and processes: Syntax Example 10.2. Deploying the Ceph Object Gateway using the service specification You can deploy the Ceph Object Gateway using the service specification with either the default or the custom realms, zones, and zone groups. Prerequisites A running Red Hat Ceph Storage cluster. Root-level access to the bootstrapped host. Hosts are added to the cluster. All manager, monitor, and OSD daemons are deployed. Procedure As a root user, create a specification file: Example Edit the radosgw.yml file to include the following details for the default realm, zone, and zone group: Syntax Note NUMBER_OF_DAEMONS controls the number of Ceph Object Gateways deployed on each host. 
To achieve the highest performance without incurring an additional cost, set this value to 2. Example Optional: For custom realm, zone, and zone group, create the resources and then create the radosgw.yml file: Create the custom realm, zone, and zone group: Example Create the radosgw.yml file with the following details: Example Mount the radosgw.yml file under a directory in the container: Example Note Every time you exit the shell, you have to mount the file in the container before deploying the daemon. Deploy the Ceph Object Gateway using the service specification: Syntax Example Verification List the service: Example List the hosts, daemons, and processes: Syntax Example 10.3. Deploying a multi-site Ceph Object Gateway using the Ceph Orchestrator Ceph Orchestrator supports multi-site configuration options for the Ceph Object Gateway. You can configure each object gateway to work in an active-active zone configuration allowing writes to a non-primary zone. The multi-site configuration is stored within a container called a realm. The realm stores zone groups, zones, and a time period. The rgw daemons handle the synchronization eliminating the need for a separate synchronization agent, thereby operating with an active-active configuration. You can also deploy multi-site zones using the command line interface (CLI). Note The following configuration assumes at least two Red Hat Ceph Storage clusters are in geographically separate locations. However, the configuration also works on the same site. Prerequisites At least two running Red Hat Ceph Storage clusters. At least two Ceph Object Gateway instances, one for each Red Hat Ceph Storage cluster. Root-level access to all the nodes. Nodes or containers are added to the storage cluster. All Ceph Manager, Monitor and OSD daemons are deployed. Procedure In the cephadm shell, configure the primary zone: Create a realm: Syntax Example If the storage cluster has a single realm, then specify the --default flag. Create a primary zone group: Syntax Example Create a primary zone: Syntax Example Optional: Delete the default zone, zone group, and the associated pools. Important Do not delete the default zone and its pools if you are using the default zone and zone group to store data. Also, removing the default zone group deletes the system user. To access old data in the default zone and zonegroup, use --rgw-zone default and --rgw-zonegroup default in radosgw-admin commands. Example Create a system user: Syntax Example Make a note of the access_key and secret_key . Add the access key and system key to the primary zone: Syntax Example Commit the changes: Syntax Example Outside the cephadm shell, fetch the FSID of the storage cluster and the processes: Example Start the Ceph Object Gateway daemon: Syntax Example In the Cephadm shell, configure the secondary zone. Pull the primary realm configuration from the host: Syntax Example Pull the primary period configuration from the host: Syntax Example Configure a secondary zone: Syntax Example Optional: Delete the default zone. Important Do not delete the default zone and its pools if you are using the default zone and zone group to store data. To access old data in the default zone and zonegroup, use --rgw-zone default and --rgw-zonegroup default in radosgw-admin commands. 
Example Update the Ceph configuration database: Syntax Example Commit the changes: Syntax Example Outside the Cephadm shell, fetch the FSID of the storage cluster and the processes: Example Start the Ceph Object Gateway daemon: Syntax Example Optional: Deploy multi-site Ceph Object Gateways using the placement specification: Syntax Example Verification Check the synchronization status to verify the deployment: Example 10.4. Removing the Ceph Object Gateway using the Ceph Orchestrator You can remove the Ceph object gateway daemons using the ceph orch rm command. Prerequisites A running Red Hat Ceph Storage cluster. Root-level access to all the nodes. Hosts are added to the cluster. At least one Ceph object gateway daemon deployed on the hosts. Procedure Log into the Cephadm shell: Example List the service: Example Remove the service: Syntax Example Verification List the hosts, daemons, and processes: Syntax Example Additional Resources See Deploying the Ceph object gateway using the command line interface section in the Red Hat Ceph Storage Operations Guide for more information. See Deploying the Ceph object gateway using the service specification section in the Red Hat Ceph Storage Operations Guide for more information. | [
"cephadm shell",
"radosgw-admin realm create --rgw-realm= REALM_NAME --default",
"radosgw-admin realm create --rgw-realm=test_realm --default",
"radosgw-admin zonegroup create --rgw-zonegroup= ZONE_GROUP_NAME --master --default",
"radosgw-admin zonegroup create --rgw-zonegroup=default --master --default",
"radosgw-admin zone create --rgw-zonegroup= ZONE_GROUP_NAME --rgw-zone= ZONE_NAME --master --default",
"radosgw-admin zone create --rgw-zonegroup=default --rgw-zone=test_zone --master --default",
"radosgw-admin period update --rgw-realm= REALM_NAME --commit",
"radosgw-admin period update --rgw-realm=test_realm --commit",
"ceph orch apply rgw NAME [--realm= REALM_NAME ] [--zone= ZONE_NAME ] --placement=\" NUMBER_OF_DAEMONS [ HOST_NAME_1 HOST_NAME_2 ]\"",
"ceph orch apply rgw test --realm=test_realm --zone=test_zone --placement=\"2 host01 host02\"",
"ceph orch apply rgw SERVICE_NAME",
"ceph orch apply rgw foo",
"ceph orch host label add HOST_NAME_1 LABEL_NAME ceph orch host label add HOSTNAME_2 LABEL_NAME ceph orch apply rgw SERVICE_NAME --placement=\"label: LABEL_NAME count-per-host: NUMBER_OF_DAEMONS \" --port=8000",
"ceph orch host label add host01 rgw # the 'rgw' label can be anything ceph orch host label add host02 rgw ceph orch apply rgw foo --placement=\"2 label:rgw\" --port=8000",
"ceph orch ls",
"ceph orch ps --daemon_type= DAEMON_NAME",
"ceph orch ps --daemon_type=rgw",
"touch radosgw.yml",
"service_type: rgw service_id: REALM_NAME . ZONE_NAME placement: hosts: - HOST_NAME_1 - HOST_NAME_2 count_per_host: NUMBER_OF_DAEMONS spec: rgw_realm: REALM_NAME rgw_zone: ZONE_NAME rgw_frontend_port: FRONT_END_PORT networks: - NETWORK_CIDR # Ceph Object Gateway service binds to a specific network",
"service_type: rgw service_id: default placement: hosts: - host01 - host02 - host03 count_per_host: 2 spec: rgw_realm: default rgw_zone: default rgw_frontend_port: 1234 networks: - 192.169.142.0/24",
"radosgw-admin realm create --rgw-realm=test_realm radosgw-admin zonegroup create --rgw-zonegroup=test_zonegroup radosgw-admin zone create --rgw-zonegroup=test_zonegroup --rgw-zone=test_zone radosgw-admin period update --rgw-realm=test_realm --commit",
"service_type: rgw service_id: test_realm.test_zone placement: hosts: - host01 - host02 - host03 count_per_host: 2 spec: rgw_realm: test_realm rgw_zone: test_zone rgw_frontend_port: 1234 networks: - 192.169.142.0/24",
"cephadm shell --mount radosgw.yml:/var/lib/ceph/radosgw/radosgw.yml",
"ceph orch apply -i FILE_NAME .yml",
"ceph orch apply -i radosgw.yml",
"ceph orch ls",
"ceph orch ps --daemon_type= DAEMON_NAME",
"ceph orch ps --daemon_type=rgw",
"radosgw-admin realm create --rgw-realm= REALM_NAME --default",
"radosgw-admin realm create --rgw-realm=test_realm --default",
"radosgw-admin zonegroup create --rgw-zonegroup= ZONE_GROUP_NAME --endpoints=http:// RGW_PRIMARY_HOSTNAME : RGW_PRIMARY_PORT_NUMBER_1 --master --default",
"radosgw-admin zonegroup create --rgw-zonegroup=us --endpoints=http://rgw1:80 --master --default",
"radosgw-admin zone create --rgw-zonegroup= PRIMARY_ZONE_GROUP_NAME --rgw-zone= PRIMARY_ZONE_NAME --endpoints=http:// RGW_PRIMARY_HOSTNAME : RGW_PRIMARY_PORT_NUMBER_1 --access-key= SYSTEM_ACCESS_KEY --secret= SYSTEM_SECRET_KEY",
"radosgw-admin zone create --rgw-zonegroup=us --rgw-zone=us-east-1 --endpoints=http://rgw1:80 --access-key=LIPEYZJLTWXRKXS9LPJC --secret-key=IsAje0AVDNXNw48LjMAimpCpI7VaxJYSnfD0FFKQ",
"radosgw-admin zonegroup delete --rgw-zonegroup=default ceph osd pool rm default.rgw.log default.rgw.log --yes-i-really-really-mean-it ceph osd pool rm default.rgw.meta default.rgw.meta --yes-i-really-really-mean-it ceph osd pool rm default.rgw.control default.rgw.control --yes-i-really-really-mean-it ceph osd pool rm default.rgw.data.root default.rgw.data.root --yes-i-really-really-mean-it ceph osd pool rm default.rgw.gc default.rgw.gc --yes-i-really-really-mean-it",
"radosgw-admin user create --uid= USER_NAME --display-name=\" USER_NAME \" --access-key= SYSTEM_ACCESS_KEY --secret= SYSTEM_SECRET_KEY --system",
"radosgw-admin user create --uid=zone.user --display-name=\"Zone user\" --system",
"radosgw-admin zone modify --rgw-zone= PRIMARY_ZONE_NAME --access-key= ACCESS_KEY --secret= SECRET_KEY",
"radosgw-admin zone modify --rgw-zone=us-east-1 --access-key=NE48APYCAODEPLKBCZVQ--secret=u24GHQWRE3yxxNBnFBzjM4jn14mFIckQ4EKL6LoW",
"radosgw-admin period update --commit",
"radosgw-admin period update --commit",
"systemctl list-units | grep ceph",
"systemctl start ceph- FSID @ DAEMON_NAME systemctl enable ceph- FSID @ DAEMON_NAME",
"systemctl start [email protected]_realm.us-east-1.host01.ahdtsw.service systemctl enable [email protected]_realm.us-east-1.host01.ahdtsw.service",
"radosgw-admin realm pull --url= URL_TO_PRIMARY_ZONE_GATEWAY --access-key= ACCESS_KEY --secret-key= SECRET_KEY",
"radosgw-admin realm pull --url=http://10.74.249.26:80 --access-key=LIPEYZJLTWXRKXS9LPJC --secret-key=IsAje0AVDNXNw48LjMAimpCpI7VaxJYSnfD0FFKQ",
"radosgw-admin period pull --url= URL_TO_PRIMARY_ZONE_GATEWAY --access-key= ACCESS_KEY --secret-key= SECRET_KEY",
"radosgw-admin period pull --url=http://10.74.249.26:80 --access-key=LIPEYZJLTWXRKXS9LPJC --secret-key=IsAje0AVDNXNw48LjMAimpCpI7VaxJYSnfD0FFKQ",
"radosgw-admin zone create --rgw-zonegroup= ZONE_GROUP_NAME --rgw-zone= SECONDARY_ZONE_NAME --endpoints=http:// RGW_SECONDARY_HOSTNAME : RGW_PRIMARY_PORT_NUMBER_1 --access-key= SYSTEM_ACCESS_KEY --secret= SYSTEM_SECRET_KEY --endpoints=http:// FQDN :80 [--read-only]",
"radosgw-admin zone create --rgw-zonegroup=us --rgw-zone=us-east-2 --endpoints=http://rgw2:80 --access-key=LIPEYZJLTWXRKXS9LPJC --secret-key=IsAje0AVDNXNw48LjMAimpCpI7VaxJYSnfD0FFKQ --endpoints=http://rgw.example.com:80",
"radosgw-admin zone rm --rgw-zone=default ceph osd pool rm default.rgw.log default.rgw.log --yes-i-really-really-mean-it ceph osd pool rm default.rgw.meta default.rgw.meta --yes-i-really-really-mean-it ceph osd pool rm default.rgw.control default.rgw.control --yes-i-really-really-mean-it ceph osd pool rm default.rgw.data.root default.rgw.data.root --yes-i-really-really-mean-it ceph osd pool rm default.rgw.gc default.rgw.gc --yes-i-really-really-mean-it",
"ceph config set SERVICE_NAME rgw_zone SECONDARY_ZONE_NAME",
"ceph config set rgw rgw_zone us-east-2",
"radosgw-admin period update --commit",
"radosgw-admin period update --commit",
"systemctl list-units | grep ceph",
"systemctl start ceph- FSID @ DAEMON_NAME systemctl enable ceph- FSID @ DAEMON_NAME",
"systemctl start [email protected]_realm.us-east-2.host04.ahdtsw.service systemctl enable [email protected]_realm.us-east-2.host04.ahdtsw.service",
"ceph orch apply rgw NAME --realm= REALM_NAME --zone= PRIMARY_ZONE_NAME --placement=\" NUMBER_OF_DAEMONS HOST_NAME_1 HOST_NAME_2 \"",
"ceph orch apply rgw east --realm=test_realm --zone=us-east-1 --placement=\"2 host01 host02\"",
"radosgw-admin sync status",
"cephadm shell",
"ceph orch ls",
"ceph orch rm SERVICE_NAME",
"ceph orch rm rgw.test_realm.test_zone_bb",
"ceph orch ps",
"ceph orch ps"
]
| https://docs.redhat.com/en/documentation/red_hat_ceph_storage/8/html/operations_guide/management-of-ceph-object-gateway-services-using-the-ceph-orchestrator |
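For reference, a minimal service specification for placing the Ceph Object Gateway daemons of the secondary zone is sketched below. This is an illustration only: the realm and zone names reuse test_realm and us-east-2 from the procedure above, while the host names and the frontend port are placeholders that you would adapt to your environment.

service_type: rgw
service_id: test_realm.us-east-2
placement:
  hosts:
    - host04
    - host05
spec:
  rgw_realm: test_realm
  rgw_zone: us-east-2
  rgw_frontend_port: 80

Apply such a file from the Cephadm shell with ceph orch apply -i FILE_NAME.yml, in the same way as the single-site specification shown earlier, and then verify the result with ceph orch ls and radosgw-admin sync status.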
9.2. Deleting or Adding a Node | 9.2. Deleting or Adding a Node This section describes how to delete a node from a cluster and add a node to a cluster. You can delete a node from a cluster according to Section 9.2.1, "Deleting a Node from a Cluster" ; you can add a node to a cluster according to Section 9.2.2, "Adding a Node to a Cluster" . 9.2.1. Deleting a Node from a Cluster Deleting a node from a cluster consists of shutting down the cluster software on the node to be deleted and updating the cluster configuration to reflect the change. Important If deleting a node from the cluster causes a transition from greater than two nodes to two nodes, you must restart the cluster software at each node after updating the cluster configuration file. To delete a node from a cluster, perform the following steps: At any node, use the clusvcadm utility to relocate, migrate, or stop each HA service running on the node that is being deleted from the cluster. For information about using clusvcadm , see Section 9.3, "Managing High-Availability Services" . At the node to be deleted from the cluster, stop the cluster software according to Section 9.1.2, "Stopping Cluster Software" . For example: At any node in the cluster, edit the /etc/cluster/cluster.conf to remove the clusternode section of the node that is to be deleted. For example, in Example 9.1, "Three-node Cluster Configuration" , if node-03.example.com is supposed to be removed, then delete the clusternode section for that node. If removing a node (or nodes) causes the cluster to be a two-node cluster, you can add the following line to the configuration file to allow a single node to maintain quorum (for example, if one node fails): <cman two_node="1" expected_votes="1"/> Refer to Section 9.2.3, "Examples of Three-Node and Two-Node Configurations" for comparison between a three-node and a two-node configuration. Update the config_version attribute by incrementing its value (for example, changing from config_version="2" to config_version="3"> ). Save /etc/cluster/cluster.conf . (Optional) Validate the updated file against the cluster schema ( cluster.rng ) by running the ccs_config_validate command. For example: Run the cman_tool version -r command to propagate the configuration to the rest of the cluster nodes. Verify that the updated configuration file has been propagated. If the node count of the cluster has transitioned from greater than two nodes to two nodes, you must restart the cluster software as follows: At each node, stop the cluster software according to Section 9.1.2, "Stopping Cluster Software" . For example: At each node, start the cluster software according to Section 9.1.1, "Starting Cluster Software" . For example: At any cluster node, run cman_tool nodes to verify that the nodes are functioning as members in the cluster (signified as "M" in the status column, "Sts"). For example: At any node, using the clustat utility, verify that the HA services are running as expected. In addition, clustat displays status of the cluster nodes. For example: | [
"service rgmanager stop Stopping Cluster Service Manager: [ OK ] service gfs2 stop Unmounting GFS2 filesystem (/mnt/gfsA): [ OK ] Unmounting GFS2 filesystem (/mnt/gfsB): [ OK ] service clvmd stop Signaling clvmd to exit [ OK ] clvmd terminated [ OK ] service cman stop Stopping cluster: Leaving fence domain... [ OK ] Stopping gfs_controld... [ OK ] Stopping dlm_controld... [ OK ] Stopping fenced... [ OK ] Stopping cman... [ OK ] Waiting for corosync to shutdown: [ OK ] Unloading kernel modules... [ OK ] Unmounting configfs... [ OK ]",
"ccs_config_validate Configuration validates",
"service rgmanager stop Stopping Cluster Service Manager: [ OK ] service gfs2 stop Unmounting GFS2 filesystem (/mnt/gfsA): [ OK ] Unmounting GFS2 filesystem (/mnt/gfsB): [ OK ] service clvmd stop Signaling clvmd to exit [ OK ] clvmd terminated [ OK ] service cman stop Stopping cluster: Leaving fence domain... [ OK ] Stopping gfs_controld... [ OK ] Stopping dlm_controld... [ OK ] Stopping fenced... [ OK ] Stopping cman... [ OK ] Waiting for corosync to shutdown: [ OK ] Unloading kernel modules... [ OK ] Unmounting configfs... [ OK ]",
"service cman start Starting cluster: Checking Network Manager... [ OK ] Global setup... [ OK ] Loading kernel modules... [ OK ] Mounting configfs... [ OK ] Starting cman... [ OK ] Waiting for quorum... [ OK ] Starting fenced... [ OK ] Starting dlm_controld... [ OK ] Starting gfs_controld... [ OK ] Unfencing self... [ OK ] Joining fence domain... [ OK ] service clvmd start Starting clvmd: [ OK ] Activating VG(s): 2 logical volume(s) in volume group \"vg_example\" now active [ OK ] service gfs2 start Mounting GFS2 filesystem (/mnt/gfsA): [ OK ] Mounting GFS2 filesystem (/mnt/gfsB): [ OK ] service rgmanager start Starting Cluster Service Manager: [ OK ]",
"cman_tool nodes Node Sts Inc Joined Name 1 M 548 2010-09-28 10:52:21 node-01.example.com 2 M 548 2010-09-28 10:52:21 node-02.example.com",
"clustat Cluster Status for mycluster @ Wed Nov 17 05:40:00 2010 Member Status: Quorate Member Name ID Status ------ ---- ---- ------ node-02.example.com 2 Online, rgmanager node-01.example.com 1 Online, Local, rgmanager Service Name Owner (Last) State ------- ---- ----- ------ ----- service:example_apache node-01.example.com started service:example_apache2 (none) disabled"
]
| https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/cluster_administration/s1-admin-manage-nodes-delete-add-cli-ca |
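To make the two-node special case more concrete, the following cluster.conf fragment sketches how the relevant parts of the file might look after the third node has been removed and the version has been incremented. It is illustrative only: it reuses the cluster and node names from the examples above and omits the fence, fencedevices, and rm sections that a real configuration contains.

<cluster name="mycluster" config_version="3">
  <cman two_node="1" expected_votes="1"/>
  <clusternodes>
    <clusternode name="node-01.example.com" nodeid="1"/>
    <clusternode name="node-02.example.com" nodeid="2"/>
  </clusternodes>
  ...
</cluster>

After editing the file in this way, validate it with ccs_config_validate and propagate it with cman_tool version -r, as described in the procedure above.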
Chapter 2. Dependency management | Chapter 2. Dependency management 2.1. Quarkus tooling for starting a new project A specific Camel Extensions for Quarkus release is supposed to work only with a specific Quarkus release. The easiest and most straightforward way to get the dependency versions right in a new project is to use one of the Quarkus tools: code.quarkus.redhat.com (an online project generator) or the Quarkus Maven plugin. These tools allow you to select extensions and scaffold a new Maven project. The generated pom.xml will look similar to the following: <project> ... <properties> <quarkus.platform.artifact-id>quarkus-bom</quarkus.platform.artifact-id> <quarkus.platform.group-id>com.redhat.quarkus.platform</quarkus.platform.group-id> <quarkus.platform.version> <!-- The latest 2.13.x version from https://maven.repository.redhat.com/ga/com/redhat/quarkus/platform/quarkus-bom --> </quarkus.platform.version> ... </properties> <dependencyManagement> <dependencies> <!-- The BOMs managing the dependency versions --> <dependency> <groupId>USD{quarkus.platform.group-id}</groupId> <artifactId>quarkus-bom</artifactId> <version>USD{quarkus.platform.version}</version> <type>pom</type> <scope>import</scope> </dependency> <dependency> <groupId>USD{quarkus.platform.group-id}</groupId> <artifactId>quarkus-camel-bom</artifactId> <version>USD{quarkus.platform.version}</version> <type>pom</type> <scope>import</scope> </dependency> </dependencies> </dependencyManagement> <dependencies> <!-- The extensions you chose in the project generator tool --> <dependency> <groupId>org.apache.camel.quarkus</groupId> <artifactId>camel-quarkus-sql</artifactId> <!-- No explicit version required here and below --> </dependency> ... </dependencies> ... </project> Tip The universe of available extensions spans Quarkus Core, Camel Quarkus, and several other third-party participating projects, such as Hazelcast, Cassandra, Kogito, and OptaPlanner. BOM stands for "Bill of Materials" - it is a pom.xml whose main purpose is to manage the versions of artifacts so that end users importing the BOM in their projects do not need to care which particular versions of the artifacts are supposed to work together. In other words, having a BOM imported in the <dependencyManagement> section of your pom.xml allows you to avoid specifying versions for the dependencies managed by the given BOM. The particular BOMs that are contained in the pom.xml depend on the extensions that you select using the generator tools, which are configured to select a minimal set of consistent BOMs. If you choose to add an extension at a later point that is not managed by any of the BOMs in your pom.xml file, you do not need to search for the appropriate BOM manually. With the quarkus-maven-plugin you can select the extension, and the tool adds the appropriate BOM as required. You can also use the quarkus-maven-plugin to upgrade the BOM versions. The com.redhat.quarkus.platform BOMs are aligned with each other, which means that if an artifact is managed in more than one BOM, it is always managed with the same version. This has the advantage that application developers do not need to worry about the compatibility of the individual artifacts that may come from various independent projects. | [
"<project> <properties> <quarkus.platform.artifact-id>quarkus-bom</quarkus.platform.artifact-id> <quarkus.platform.group-id>com.redhat.quarkus.platform</quarkus.platform.group-id> <quarkus.platform.version> <!-- The latest 2.13.x version from https://maven.repository.redhat.com/ga/com/redhat/quarkus/platform/quarkus-bom --> </quarkus.platform.version> </properties> <dependencyManagement> <dependencies> <!-- The BOMs managing the dependency versions --> <dependency> <groupId>USD{quarkus.platform.group-id}</groupId> <artifactId>quarkus-bom</artifactId> <version>USD{quarkus.platform.version}</version> <type>pom</type> <scope>import</scope> </dependency> <dependency> <groupId>USD{quarkus.platform.group-id}</groupId> <artifactId>quarkus-camel-bom</artifactId> <version>USD{quarkus.platform.version}</version> <type>pom</type> <scope>import</scope> </dependency> </dependencies> </dependencyManagement> <dependencies> <!-- The extensions you chose in the project generator tool --> <dependency> <groupId>org.apache.camel.quarkus</groupId> <artifactId>camel-quarkus-sql</artifactId> <!-- No explicit version required here and below --> </dependency> </dependencies> </project>"
]
| https://docs.redhat.com/en/documentation/red_hat_build_of_apache_camel_extensions_for_quarkus/2.13/html/developing_applications_with_camel_extensions_for_quarkus/dependency_management |
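As a concrete illustration of letting the tooling manage the BOMs, the following command adds another extension to a project that was generated with the Quarkus tooling. The extension name is only an example, and the command assumes that the quarkus-maven-plugin is already configured in the project, which the generators do for you:

./mvnw quarkus:add-extension -Dextensions="camel-quarkus-jdbc"

The plugin adds the corresponding dependency to the <dependencies> section of pom.xml and, if the new extension is not covered by the BOMs that are already imported, it also adds the required BOM, so you do not have to look it up manually.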
Chapter 3. Shenandoah garbage collector modes | Chapter 3. Shenandoah garbage collector modes You can run Shenandoah in three different modes. Select a specific mode with the -XX:ShenandoahGCMode=<name> option. The following list describes each Shenandoah mode: normal/satb (product, default) This mode runs a concurrent garbage collector (GC) with Snapshot-At-The-Beginning (SATB) marking. This marking mode does similar work to G1, the default garbage collector for Red Hat build of OpenJDK 8. iu (experimental) This mode runs a concurrent GC with Incremental Update (IU) marking. It can reclaim unreachable memory more aggressively. This marking mode mirrors the SATB mode. This may make marking less conservative, especially around accessing weak references. passive (diagnostic) This mode runs stop-the-world (STW) GCs. This mode is used for functional testing, but it is sometimes useful for bisecting performance anomalies with GC barriers, or for ascertaining the actual live data size in the application. | null | https://docs.redhat.com/en/documentation/red_hat_build_of_openjdk/8/html/using_shenandoah_garbage_collector_with_red_hat_build_of_openjdk/different-modes-to-run-shenandoah-gc |
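As a brief sketch of how these modes are selected on the java command line (the application JAR name is a placeholder, and the unlock options reflect the experimental and diagnostic status of the iu and passive modes):

java -XX:+UseShenandoahGC -jar app.jar # default normal/satb mode
java -XX:+UnlockExperimentalVMOptions -XX:+UseShenandoahGC -XX:ShenandoahGCMode=iu -jar app.jar
java -XX:+UnlockDiagnosticVMOptions -XX:+UseShenandoahGC -XX:ShenandoahGCMode=passive -jar app.jar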
Chapter 3. Setting up the environment for an OpenShift installation | Chapter 3. Setting up the environment for an OpenShift installation 3.1. Installing RHEL on the provisioner node With the configuration of the prerequisites complete, the step is to install RHEL 9.x on the provisioner node. The installer uses the provisioner node as the orchestrator while installing the OpenShift Container Platform cluster. For the purposes of this document, installing RHEL on the provisioner node is out of scope. However, options include but are not limited to using a RHEL Satellite server, PXE, or installation media. 3.2. Preparing the provisioner node for OpenShift Container Platform installation Perform the following steps to prepare the environment. Procedure Log in to the provisioner node via ssh . Create a non-root user ( kni ) and provide that user with sudo privileges: # useradd kni # passwd kni # echo "kni ALL=(root) NOPASSWD:ALL" | tee -a /etc/sudoers.d/kni # chmod 0440 /etc/sudoers.d/kni Create an ssh key for the new user: # su - kni -c "ssh-keygen -t ed25519 -f /home/kni/.ssh/id_rsa -N ''" Log in as the new user on the provisioner node: # su - kni Use Red Hat Subscription Manager to register the provisioner node: USD sudo subscription-manager register --username=<user> --password=<pass> --auto-attach USD sudo subscription-manager repos --enable=rhel-9-for-<architecture>-appstream-rpms --enable=rhel-9-for-<architecture>-baseos-rpms Note For more information about Red Hat Subscription Manager, see Using and Configuring Red Hat Subscription Manager . Install the following packages: USD sudo dnf install -y libvirt qemu-kvm mkisofs python3-devel jq ipmitool Modify the user to add the libvirt group to the newly created user: USD sudo usermod --append --groups libvirt <user> Restart firewalld and enable the http service: USD sudo systemctl start firewalld USD sudo firewall-cmd --zone=public --add-service=http --permanent USD sudo firewall-cmd --reload Start and enable the libvirtd service: USD sudo systemctl enable libvirtd --now Create the default storage pool and start it: USD sudo virsh pool-define-as --name default --type dir --target /var/lib/libvirt/images USD sudo virsh pool-start default USD sudo virsh pool-autostart default Create a pull-secret.txt file: USD vim pull-secret.txt In a web browser, navigate to Install OpenShift on Bare Metal with installer-provisioned infrastructure . Click Copy pull secret . Paste the contents into the pull-secret.txt file and save the contents in the kni user's home directory. 3.3. Checking NTP server synchronization The OpenShift Container Platform installation program installs the chrony Network Time Protocol (NTP) service on the cluster nodes. To complete installation, each node must have access to an NTP time server. You can verify NTP server synchronization by using the chrony service. For disconnected clusters, you must configure the NTP servers on the control plane nodes. For more information see the Additional resources section. Prerequisites You installed the chrony package on the target node. Procedure Log in to the node by using the ssh command. 
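For example, assuming the default core user that RHCOS provides and a placeholder node name: USD ssh core@<node_name>.<cluster_name>.<base_domain>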
View the NTP servers available to the node by running the following command: USD chronyc sources Example output MS Name/IP address Stratum Poll Reach LastRx Last sample =============================================================================== ^+ time.cloudflare.com 3 10 377 187 -209us[ -209us] +/- 32ms ^+ t1.time.ir2.yahoo.com 2 10 377 185 -4382us[-4382us] +/- 23ms ^+ time.cloudflare.com 3 10 377 198 -996us[-1220us] +/- 33ms ^* brenbox.westnet.ie 1 10 377 193 -9538us[-9761us] +/- 24ms Use the ping command to ensure that the node can access an NTP server, for example: USD ping time.cloudflare.com Example output PING time.cloudflare.com (162.159.200.123) 56(84) bytes of data. 64 bytes from time.cloudflare.com (162.159.200.123): icmp_seq=1 ttl=54 time=32.3 ms 64 bytes from time.cloudflare.com (162.159.200.123): icmp_seq=2 ttl=54 time=30.9 ms 64 bytes from time.cloudflare.com (162.159.200.123): icmp_seq=3 ttl=54 time=36.7 ms ... Additional resources Optional: Configuring NTP for disconnected clusters Network Time Protocol (NTP) 3.4. Configuring networking Before installation, you must configure the networking on the provisioner node. Installer-provisioned clusters deploy with a bare-metal bridge and network, and an optional provisioning bridge and network. Note You can also configure networking from the web console. Procedure Export the bare-metal network NIC name by running the following command: USD export PUB_CONN=<baremetal_nic_name> Configure the bare-metal network: Note The SSH connection might disconnect after executing these steps. For a network using DHCP, run the following command: USD sudo nohup bash -c " nmcli con down \"USDPUB_CONN\" nmcli con delete \"USDPUB_CONN\" # RHEL 8.1 appends the word \"System\" in front of the connection, delete in case it exists nmcli con down \"System USDPUB_CONN\" nmcli con delete \"System USDPUB_CONN\" nmcli connection add ifname baremetal type bridge <con_name> baremetal bridge.stp no 1 nmcli con add type bridge-slave ifname \"USDPUB_CONN\" master baremetal pkill dhclient;dhclient baremetal " 1 Replace <con_name> with the connection name. For a network using static IP addressing and no DHCP network, run the following command: USD sudo nohup bash -c " nmcli con down \"USDPUB_CONN\" nmcli con delete \"USDPUB_CONN\" # RHEL 8.1 appends the word \"System\" in front of the connection, delete in case it exists nmcli con down \"System USDPUB_CONN\" nmcli con delete \"System USDPUB_CONN\" nmcli connection add ifname baremetal type bridge con-name baremetal bridge.stp no ipv4.method manual ipv4.addr "x.x.x.x/yy" ipv4.gateway "a.a.a.a" ipv4.dns "b.b.b.b" 1 nmcli con add type bridge-slave ifname \"USDPUB_CONN\" master baremetal nmcli con up baremetal " 1 Replace <con_name> with the connection name. Replace x.x.x.x/yy with the IP address and CIDR for the network. Replace a.a.a.a with the network gateway. Replace b.b.b.b with the IP address of the DNS server. 
Optional: If you are deploying with a provisioning network, export the provisioning network NIC name by running the following command: USD export PROV_CONN=<prov_nic_name> Optional: If you are deploying with a provisioning network, configure the provisioning network by running the following command: USD sudo nohup bash -c " nmcli con down \"USDPROV_CONN\" nmcli con delete \"USDPROV_CONN\" nmcli connection add ifname provisioning type bridge con-name provisioning nmcli con add type bridge-slave ifname \"USDPROV_CONN\" master provisioning nmcli connection modify provisioning ipv6.addresses fd00:1101::1/64 ipv6.method manual nmcli con down provisioning nmcli con up provisioning " Note The SSH connection might disconnect after executing these steps. The IPv6 address can be any address that is not routable through the bare-metal network. Ensure that UEFI is enabled and UEFI PXE settings are set to the IPv6 protocol when using IPv6 addressing. Optional: If you are deploying with a provisioning network, configure the IPv4 address on the provisioning network connection by running the following command: USD nmcli connection modify provisioning ipv4.addresses 172.22.0.254/24 ipv4.method manual SSH back into the provisioner node (if required) by running the following command: # ssh kni@provisioner.<cluster-name>.<domain> Verify that the connection bridges have been properly created by running the following command: USD sudo nmcli con show Example output NAME UUID TYPE DEVICE baremetal 4d5133a5-8351-4bb9-bfd4-3af264801530 bridge baremetal provisioning 43942805-017f-4d7d-a2c2-7cb3324482ed bridge provisioning virbr0 d9bca40f-eee1-410b-8879-a2d4bb0465e7 bridge virbr0 bridge-slave-eno1 76a8ed50-c7e5-4999-b4f6-6d9014dd0812 ethernet eno1 bridge-slave-eno2 f31c3353-54b7-48de-893a-02d2b34c4736 ethernet eno2 3.5. Establishing communication between subnets In a typical OpenShift Container Platform cluster setup, all nodes, including the control plane and worker nodes, reside in the same network. However, for edge computing scenarios, it can be beneficial to locate worker nodes closer to the edge. This often involves using different network segments or subnets for the remote worker nodes than the subnet used by the control plane and local worker nodes. Such a setup can reduce latency for the edge and allow for enhanced scalability. However, the network must be configured properly before installing OpenShift Container Platform to ensure that the edge subnets containing the remote worker nodes can reach the subnet containing the control plane nodes and receive traffic from the control plane too. Important All control plane nodes must run in the same subnet. When using more than one subnet, you can also configure the Ingress VIP to run on the control plane nodes by using a manifest. See "Configuring network components to run on the control plane" for details. Deploying a cluster with multiple subnets requires using virtual media. This procedure details the network configuration required to allow the remote worker nodes in the second subnet to communicate effectively with the control plane nodes in the first subnet and to allow the control plane nodes in the first subnet to communicate effectively with the remote worker nodes in the second subnet. In this procedure, the cluster spans two subnets: The first subnet ( 10.0.0.0 ) contains the control plane and local worker nodes. The second subnet ( 192.168.0.0 ) contains the edge worker nodes. 
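Conceptually, the procedure below adds a single static route on each side. A rough sketch with the example subnets, where the gateway addresses are placeholders, is: on each control plane node, ip route add 192.168.0.0/24 via <gateway_in_first_subnet> , and on each remote worker node, ip route add 10.0.0.0/24 via <gateway_in_second_subnet> . The nmcli steps that follow create the equivalent routes persistently.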
Procedure Configure the first subnet to communicate with the second subnet: Log in as root to a control plane node by running the following command: USD sudo su - Get the name of the network interface by running the following command: # nmcli dev status Add a route to the second subnet ( 192.168.0.0 ) via the gateway by running the following command: # nmcli connection modify <interface_name> +ipv4.routes "192.168.0.0/24 via <gateway>" Replace <interface_name> with the interface name. Replace <gateway> with the IP address of the actual gateway. Example # nmcli connection modify eth0 +ipv4.routes "192.168.0.0/24 via 192.168.0.1" Apply the changes by running the following command: # nmcli connection up <interface_name> Replace <interface_name> with the interface name. Verify the routing table to ensure the route has been added successfully: # ip route Repeat the steps for each control plane node in the first subnet. Note Adjust the commands to match your actual interface names and gateway. Configure the second subnet to communicate with the first subnet: Log in as root to a remote worker node by running the following command: USD sudo su - Get the name of the network interface by running the following command: # nmcli dev status Add a route to the first subnet ( 10.0.0.0 ) via the gateway by running the following command: # nmcli connection modify <interface_name> +ipv4.routes "10.0.0.0/24 via <gateway>" Replace <interface_name> with the interface name. Replace <gateway> with the IP address of the actual gateway. Example # nmcli connection modify eth0 +ipv4.routes "10.0.0.0/24 via 10.0.0.1" Apply the changes by running the following command: # nmcli connection up <interface_name> Replace <interface_name> with the interface name. Verify the routing table to ensure the route has been added successfully by running the following command: # ip route Repeat the steps for each worker node in the second subnet. Note Adjust the commands to match your actual interface names and gateway. Once you have configured the networks, test the connectivity to ensure the remote worker nodes can reach the control plane nodes and the control plane nodes can reach the remote worker nodes. From the control plane nodes in the first subnet, ping a remote worker node in the second subnet by running the following command: USD ping <remote_worker_node_ip_address> If the ping is successful, it means the control plane nodes in the first subnet can reach the remote worker nodes in the second subnet. If you don't receive a response, review the network configurations and repeat the procedure for the node. From the remote worker nodes in the second subnet, ping a control plane node in the first subnet by running the following command: USD ping <control_plane_node_ip_address> If the ping is successful, it means the remote worker nodes in the second subnet can reach the control plane in the first subnet. If you don't receive a response, review the network configurations and repeat the procedure for the node. 3.6. Retrieving the OpenShift Container Platform installer Use the stable-4.x version of the installation program and your selected architecture to deploy the generally available stable version of OpenShift Container Platform: USD export VERSION=stable-4.14 USD export RELEASE_ARCH=<architecture> USD export RELEASE_IMAGE=USD(curl -s https://mirror.openshift.com/pub/openshift-v4/USDRELEASE_ARCH/clients/ocp/USDVERSION/release.txt | grep 'Pull From: quay.io' | awk -F ' ' '{print USD3}') 3.7. 
Extracting the OpenShift Container Platform installer After retrieving the installer, the step is to extract it. Procedure Set the environment variables: USD export cmd=openshift-baremetal-install USD export pullsecret_file=~/pull-secret.txt USD export extract_dir=USD(pwd) Get the oc binary: USD curl -s https://mirror.openshift.com/pub/openshift-v4/clients/ocp/USDVERSION/openshift-client-linux.tar.gz | tar zxvf - oc Extract the installer: USD sudo cp oc /usr/local/bin USD oc adm release extract --registry-config "USD{pullsecret_file}" --command=USDcmd --to "USD{extract_dir}" USD{RELEASE_IMAGE} USD sudo cp openshift-baremetal-install /usr/local/bin 3.8. Optional: Creating an RHCOS images cache To employ image caching, you must download the Red Hat Enterprise Linux CoreOS (RHCOS) image used by the bootstrap VM to provision the cluster nodes. Image caching is optional, but it is especially useful when running the installation program on a network with limited bandwidth. Note The installation program no longer needs the clusterOSImage RHCOS image because the correct image is in the release payload. If you are running the installation program on a network with limited bandwidth and the RHCOS images download takes more than 15 to 20 minutes, the installation program will timeout. Caching images on a web server will help in such scenarios. Warning If you enable TLS for the HTTPD server, you must confirm the root certificate is signed by an authority trusted by the client and verify the trusted certificate chain between your OpenShift Container Platform hub and spoke clusters and the HTTPD server. Using a server configured with an untrusted certificate prevents the images from being downloaded to the image creation service. Using untrusted HTTPS servers is not supported. Install a container that contains the images. Procedure Install podman : USD sudo dnf install -y podman Open firewall port 8080 to be used for RHCOS image caching: USD sudo firewall-cmd --add-port=8080/tcp --zone=public --permanent USD sudo firewall-cmd --reload Create a directory to store the bootstraposimage : USD mkdir /home/kni/rhcos_image_cache Set the appropriate SELinux context for the newly created directory: USD sudo semanage fcontext -a -t httpd_sys_content_t "/home/kni/rhcos_image_cache(/.*)?" 
USD sudo restorecon -Rv /home/kni/rhcos_image_cache/ Get the URI for the RHCOS image that the installation program will deploy on the bootstrap VM: USD export RHCOS_QEMU_URI=USD(/usr/local/bin/openshift-baremetal-install coreos print-stream-json | jq -r --arg ARCH "USD(arch)" '.architectures[USDARCH].artifacts.qemu.formats["qcow2.gz"].disk.location') Get the name of the image that the installation program will deploy on the bootstrap VM: USD export RHCOS_QEMU_NAME=USD{RHCOS_QEMU_URI##*/} Get the SHA hash for the RHCOS image that will be deployed on the bootstrap VM: USD export RHCOS_QEMU_UNCOMPRESSED_SHA256=USD(/usr/local/bin/openshift-baremetal-install coreos print-stream-json | jq -r --arg ARCH "USD(arch)" '.architectures[USDARCH].artifacts.qemu.formats["qcow2.gz"].disk["uncompressed-sha256"]') Download the image and place it in the /home/kni/rhcos_image_cache directory: USD curl -L USD{RHCOS_QEMU_URI} -o /home/kni/rhcos_image_cache/USD{RHCOS_QEMU_NAME} Confirm SELinux type is of httpd_sys_content_t for the new file: USD ls -Z /home/kni/rhcos_image_cache Create the pod: USD podman run -d --name rhcos_image_cache \ 1 -v /home/kni/rhcos_image_cache:/var/www/html \ -p 8080:8080/tcp \ registry.access.redhat.com/ubi9/httpd-24 1 Creates a caching webserver with the name rhcos_image_cache . This pod serves the bootstrapOSImage image in the install-config.yaml file for deployment. Generate the bootstrapOSImage configuration: USD export BAREMETAL_IP=USD(ip addr show dev baremetal | awk '/inet /{print USD2}' | cut -d"/" -f1) USD export BOOTSTRAP_OS_IMAGE="http://USD{BAREMETAL_IP}:8080/USD{RHCOS_QEMU_NAME}?sha256=USD{RHCOS_QEMU_UNCOMPRESSED_SHA256}" USD echo " bootstrapOSImage=USD{BOOTSTRAP_OS_IMAGE}" Add the required configuration to the install-config.yaml file under platform.baremetal : platform: baremetal: bootstrapOSImage: <bootstrap_os_image> 1 1 Replace <bootstrap_os_image> with the value of USDBOOTSTRAP_OS_IMAGE . See the "Configuring the install-config.yaml file" section for additional details. 3.9. Setting the cluster node hostnames through DHCP On Red Hat Enterprise Linux CoreOS (RHCOS) machines, NetworkManager sets the hostnames. By default, DHCP provides the hostnames to NetworkManager , which is the recommended method. NetworkManager gets the hostnames through a reverse DNS lookup in the following cases: If DHCP does not provide the hostnames If you use kernel arguments to set the hostnames If you use another method to set the hostnames Reverse DNS lookup occurs after the network has been initialized on a node, and can increase the time it takes NetworkManager to set the hostname. Other system services can start prior to NetworkManager setting the hostname, which can cause those services to use a default hostname such as localhost . Tip You can avoid the delay in setting hostnames by using DHCP to provide the hostname for each cluster node. Additionally, setting the hostnames through DHCP can bypass manual DNS record name configuration errors in environments that have a DNS split-horizon implementation. 3.10. Configuring the install-config.yaml file 3.10.1. Configuring the install-config.yaml file The install-config.yaml file requires some additional details. Most of the information teaches the installation program and the resulting cluster enough about the available hardware that it is able to fully manage it. Note The installation program no longer needs the clusterOSImage RHCOS image because the correct image is in the release payload. Configure install-config.yaml . 
Change the appropriate variables to match the environment, including pullSecret and sshKey : apiVersion: v1 baseDomain: <domain> metadata: name: <cluster_name> networking: machineNetwork: - cidr: <public_cidr> networkType: OVNKubernetes compute: - name: worker replicas: 2 1 controlPlane: name: master replicas: 3 platform: baremetal: {} platform: baremetal: apiVIPs: - <api_ip> ingressVIPs: - <wildcard_ip> provisioningNetworkCIDR: <CIDR> bootstrapExternalStaticIP: <bootstrap_static_ip_address> 2 bootstrapExternalStaticGateway: <bootstrap_static_gateway> 3 bootstrapExternalStaticDNS: <bootstrap_static_dns> 4 hosts: - name: openshift-master-0 role: master bmc: address: ipmi://<out_of_band_ip> 5 username: <user> password: <password> bootMACAddress: <NIC1_mac_address> rootDeviceHints: deviceName: "<installation_disk_drive_path>" 6 - name: <openshift_master_1> role: master bmc: address: ipmi://<out_of_band_ip> username: <user> password: <password> bootMACAddress: <NIC1_mac_address> rootDeviceHints: deviceName: "<installation_disk_drive_path>" - name: <openshift_master_2> role: master bmc: address: ipmi://<out_of_band_ip> username: <user> password: <password> bootMACAddress: <NIC1_mac_address> rootDeviceHints: deviceName: "<installation_disk_drive_path>" - name: <openshift_worker_0> role: worker bmc: address: ipmi://<out_of_band_ip> username: <user> password: <password> bootMACAddress: <NIC1_mac_address> - name: <openshift_worker_1> role: worker bmc: address: ipmi://<out_of_band_ip> username: <user> password: <password> bootMACAddress: <NIC1_mac_address> rootDeviceHints: deviceName: "<installation_disk_drive_path>" pullSecret: '<pull_secret>' sshKey: '<ssh_pub_key>' 1 Scale the worker machines based on the number of worker nodes that are part of the OpenShift Container Platform cluster. Valid options for the replicas value are 0 and integers greater than or equal to 2 . Set the number of replicas to 0 to deploy a three-node cluster, which contains only three control plane machines. A three-node cluster is a smaller, more resource-efficient cluster that can be used for testing, development, and production. You cannot install the cluster with only one worker. 2 When deploying a cluster with static IP addresses, you must set the bootstrapExternalStaticIP configuration setting to specify the static IP address of the bootstrap VM when there is no DHCP server on the bare-metal network. 3 When deploying a cluster with static IP addresses, you must set the bootstrapExternalStaticGateway configuration setting to specify the gateway IP address for the bootstrap VM when there is no DHCP server on the bare-metal network. 4 When deploying a cluster with static IP addresses, you must set the bootstrapExternalStaticDNS configuration setting to specify the DNS address for the bootstrap VM when there is no DHCP server on the bare-metal network. 5 See the BMC addressing sections for more options. 6 To set the path to the installation disk drive, enter the kernel name of the disk. For example, /dev/sda . Important Because the disk discovery order is not guaranteed, the kernel name of the disk can change across booting options for machines with multiple disks. For example, /dev/sda becomes /dev/sdb and vice versa. To avoid this issue, you must use persistent disk attributes, such as the disk World Wide Name (WWN) or /dev/disk/by-path/ . It is recommended to use the /dev/disk/by-path/<device_path> link to the storage location. To use the disk WWN, replace the deviceName parameter with the wwnWithExtension parameter. 
Depending on the parameter that you use, enter either of the following values: The disk name. For example, /dev/sda , or /dev/disk/by-path/ . The disk WWN. For example, "0x64cd98f04fde100024684cf3034da5c2" . Ensure that you enter the disk WWN value within quotes so that it is used as a string value and not a hexadecimal value. Failure to meet these requirements for the rootDeviceHints parameter might result in the following error: ironic-inspector inspection failed: No disks satisfied root device hints Note Before OpenShift Container Platform 4.12, the cluster installation program only accepted an IPv4 address or an IPv6 address for the apiVIP and ingressVIP configuration settings. In OpenShift Container Platform 4.12 and later, these configuration settings are deprecated. Instead, use a list format in the apiVIPs and ingressVIPs configuration settings to specify IPv4 addresses, IPv6 addresses, or both IP address formats. Create a directory to store the cluster configuration: USD mkdir ~/clusterconfigs Copy the install-config.yaml file to the new directory: USD cp install-config.yaml ~/clusterconfigs Ensure all bare metal nodes are powered off prior to installing the OpenShift Container Platform cluster: USD ipmitool -I lanplus -U <user> -P <password> -H <management-server-ip> power off Remove old bootstrap resources if any are left over from a deployment attempt: for i in USD(sudo virsh list | tail -n +3 | grep bootstrap | awk {'print USD2'}); do sudo virsh destroy USDi; sudo virsh undefine USDi; sudo virsh vol-delete USDi --pool USDi; sudo virsh vol-delete USDi.ign --pool USDi; sudo virsh pool-destroy USDi; sudo virsh pool-undefine USDi; done 3.10.2. Additional install-config parameters See the following tables for the required parameters, the hosts parameter, and the bmc parameter for the install-config.yaml file. Table 3.1. Required parameters Parameters Default Description baseDomain The domain name for the cluster. For example, example.com . bootMode UEFI The boot mode for a node. Options are legacy , UEFI , and UEFISecureBoot . If bootMode is not set, Ironic sets it while inspecting the node. bootstrapExternalStaticDNS The static network DNS of the bootstrap node. You must set this value when deploying a cluster with static IP addresses when there is no Dynamic Host Configuration Protocol (DHCP) server on the bare-metal network. If you do not set this value, the installation program will use the value from bootstrapExternalStaticGateway , which causes problems when the IP address values of the gateway and DNS are different. bootstrapExternalStaticIP The static IP address for the bootstrap VM. You must set this value when deploying a cluster with static IP addresses when there is no DHCP server on the bare-metal network. bootstrapExternalStaticGateway The static IP address of the gateway for the bootstrap VM. You must set this value when deploying a cluster with static IP addresses when there is no DHCP server on the bare-metal network. sshKey The sshKey configuration setting contains the key in the ~/.ssh/id_rsa.pub file required to access the control plane nodes and worker nodes. Typically, this key is from the provisioner node. pullSecret The pullSecret configuration setting contains a copy of the pull secret downloaded from the Install OpenShift on Bare Metal page when preparing the provisioner node. The name to be given to the OpenShift Container Platform cluster. For example, openshift . The public CIDR (Classless Inter-Domain Routing) of the external network. 
For example, 10.0.0.0/24 . The OpenShift Container Platform cluster requires a name be provided for worker (or compute) nodes even if there are zero nodes. Replicas sets the number of worker (or compute) nodes in the OpenShift Container Platform cluster. The OpenShift Container Platform cluster requires a name for control plane (master) nodes. Replicas sets the number of control plane (master) nodes included as part of the OpenShift Container Platform cluster. provisioningNetworkInterface The name of the network interface on nodes connected to the provisioning network. For OpenShift Container Platform 4.9 and later releases, use the bootMACAddress configuration setting to enable Ironic to identify the IP address of the NIC instead of using the provisioningNetworkInterface configuration setting to identify the name of the NIC. defaultMachinePlatform The default configuration used for machine pools without a platform configuration. apiVIPs (Optional) The virtual IP address for Kubernetes API communication. This setting must either be provided in the install-config.yaml file as a reserved IP from the MachineNetwork or preconfigured in the DNS so that the default name resolves correctly. Use the virtual IP address and not the FQDN when adding a value to the apiVIPs configuration setting in the install-config.yaml file. The primary IP address must be from the IPv4 network when using dual stack networking. If not set, the installation program uses api.<cluster_name>.<base_domain> to derive the IP address from the DNS. Note Before OpenShift Container Platform 4.12, the cluster installation program only accepted an IPv4 address or an IPv6 address for the apiVIP configuration setting. From OpenShift Container Platform 4.12 or later, the apiVIP configuration setting is deprecated. Instead, use a list format for the apiVIPs configuration setting to specify an IPv4 address, an IPv6 address or both IP address formats. disableCertificateVerification False redfish and redfish-virtualmedia need this parameter to manage BMC addresses. The value should be True when using a self-signed certificate for BMC addresses. ingressVIPs (Optional) The virtual IP address for ingress traffic. This setting must either be provided in the install-config.yaml file as a reserved IP from the MachineNetwork or preconfigured in the DNS so that the default name resolves correctly. Use the virtual IP address and not the FQDN when adding a value to the ingressVIPs configuration setting in the install-config.yaml file. The primary IP address must be from the IPv4 network when using dual stack networking. If not set, the installation program uses test.apps.<cluster_name>.<base_domain> to derive the IP address from the DNS. Note Before OpenShift Container Platform 4.12, the cluster installation program only accepted an IPv4 address or an IPv6 address for the ingressVIP configuration setting. In OpenShift Container Platform 4.12 and later, the ingressVIP configuration setting is deprecated. Instead, use a list format for the ingressVIPs configuration setting to specify an IPv4 addresses, an IPv6 addresses or both IP address formats. Table 3.2. Optional Parameters Parameters Default Description provisioningDHCPRange 172.22.0.10,172.22.0.100 Defines the IP range for nodes on the provisioning network. provisioningNetworkCIDR 172.22.0.0/24 The CIDR for the network to use for provisioning. This option is required when not using the default address range on the provisioning network. 
clusterProvisioningIP The third IP address of the provisioningNetworkCIDR . The IP address within the cluster where the provisioning services run. Defaults to the third IP address of the provisioning subnet. For example, 172.22.0.3 . bootstrapProvisioningIP The second IP address of the provisioningNetworkCIDR . The IP address on the bootstrap VM where the provisioning services run while the installer is deploying the control plane (master) nodes. Defaults to the second IP address of the provisioning subnet. For example, 172.22.0.2 or 2620:52:0:1307::2 . externalBridge baremetal The name of the bare-metal bridge of the hypervisor attached to the bare-metal network. provisioningBridge provisioning The name of the provisioning bridge on the provisioner host attached to the provisioning network. architecture Defines the host architecture for your cluster. Valid values are amd64 or arm64 . defaultMachinePlatform The default configuration used for machine pools without a platform configuration. bootstrapOSImage A URL to override the default operating system image for the bootstrap node. The URL must contain a SHA-256 hash of the image. For example: https://mirror.openshift.com/rhcos-<version>-qemu.qcow2.gz?sha256=<uncompressed_sha256> . provisioningNetwork The provisioningNetwork configuration setting determines whether the cluster uses the provisioning network. If it does, the configuration setting also determines if the cluster manages the network. Disabled : Set this parameter to Disabled to disable the requirement for a provisioning network. When set to Disabled , you must only use virtual media based provisioning, or bring up the cluster using the assisted installer. If Disabled and using power management, BMCs must be accessible from the bare-metal network. If Disabled , you must provide two IP addresses on the bare-metal network that are used for the provisioning services. Managed : Set this parameter to Managed , which is the default, to fully manage the provisioning network, including DHCP, TFTP, and so on. Unmanaged : Set this parameter to Unmanaged to enable the provisioning network but take care of manual configuration of DHCP. Virtual media provisioning is recommended but PXE is still available if required. httpProxy Set this parameter to the appropriate HTTP proxy used within your environment. httpsProxy Set this parameter to the appropriate HTTPS proxy used within your environment. noProxy Set this parameter to the appropriate list of exclusions for proxy usage within your environment. Hosts The hosts parameter is a list of separate bare metal assets used to build the cluster. Table 3.3. Hosts Name Default Description name The name of the BareMetalHost resource to associate with the details. For example, openshift-master-0 . role The role of the bare metal node. Either master or worker . bmc Connection details for the baseboard management controller. See the BMC addressing section for additional details. bootMACAddress The MAC address of the NIC that the host uses for the provisioning network. Ironic retrieves the IP address using the bootMACAddress configuration setting. Then, it binds to the host. Note You must provide a valid MAC address from the host if you disabled the provisioning network. networkConfig Set this optional parameter to configure the network interface of a host. See "(Optional) Configuring host network interfaces" for additional details. 3.10.3. 
BMC addressing Most vendors support Baseboard Management Controller (BMC) addressing with the Intelligent Platform Management Interface (IPMI). IPMI does not encrypt communications. It is suitable for use within a data center over a secured or dedicated management network. Check with your vendor to see if they support Redfish network boot. Redfish delivers simple and secure management for converged, hybrid IT and the Software Defined Data Center (SDDC). Redfish is human readable and machine capable, and leverages common internet and web services standards to expose information directly to the modern tool chain. If your hardware does not support Redfish network boot, use IPMI. IPMI Hosts using IPMI use the ipmi://<out-of-band-ip>:<port> address format, which defaults to port 623 if not specified. The following example demonstrates an IPMI configuration within the install-config.yaml file. platform: baremetal: hosts: - name: openshift-master-0 role: master bmc: address: ipmi://<out-of-band-ip> username: <user> password: <password> Important The provisioning network is required when PXE booting using IPMI for BMC addressing. It is not possible to PXE boot hosts without a provisioning network. If you deploy without a provisioning network, you must use a virtual media BMC addressing option such as redfish-virtualmedia or idrac-virtualmedia . See "Redfish virtual media for HPE iLO" in the "BMC addressing for HPE iLO" section or "Redfish virtual media for Dell iDRAC" in the "BMC addressing for Dell iDRAC" section for additional details. Redfish network boot To enable Redfish, use redfish:// or redfish+http:// to disable TLS. The installer requires both the hostname or the IP address and the path to the system ID. The following example demonstrates a Redfish configuration within the install-config.yaml file. platform: baremetal: hosts: - name: openshift-master-0 role: master bmc: address: redfish://<out-of-band-ip>/redfish/v1/Systems/1 username: <user> password: <password> While it is recommended to have a certificate of authority for the out-of-band management addresses, you must include disableCertificateVerification: True in the bmc configuration if using self-signed certificates. The following example demonstrates a Redfish configuration using the disableCertificateVerification: True configuration parameter within the install-config.yaml file. platform: baremetal: hosts: - name: openshift-master-0 role: master bmc: address: redfish://<out-of-band-ip>/redfish/v1/Systems/1 username: <user> password: <password> disableCertificateVerification: True Redfish APIs Several redfish API endpoints are called onto your BCM when using the bare-metal installer-provisioned infrastructure. Important You need to ensure that your BMC supports all of the redfish APIs before installation. 
List of redfish APIs Power on curl -u USDUSER:USDPASS -X POST -H'Content-Type: application/json' -H'Accept: application/json' -d '{"ResetType": "On"}' https://USDSERVER/redfish/v1/Systems/USDSystemID/Actions/ComputerSystem.Reset Power off curl -u USDUSER:USDPASS -X POST -H'Content-Type: application/json' -H'Accept: application/json' -d '{"ResetType": "ForceOff"}' https://USDSERVER/redfish/v1/Systems/USDSystemID/Actions/ComputerSystem.Reset Temporary boot using pxe curl -u USDUSER:USDPASS -X PATCH -H "Content-Type: application/json" https://USDServer/redfish/v1/Systems/USDSystemID/ -d '{"Boot": {"BootSourceOverrideTarget": "pxe", "BootSourceOverrideEnabled": "Once"}} Set BIOS boot mode using Legacy or UEFI curl -u USDUSER:USDPASS -X PATCH -H "Content-Type: application/json" https://USDServer/redfish/v1/Systems/USDSystemID/ -d '{"Boot": {"BootSourceOverrideMode":"UEFI"}} List of redfish-virtualmedia APIs Set temporary boot device using cd or dvd curl -u USDUSER:USDPASS -X PATCH -H "Content-Type: application/json" https://USDServer/redfish/v1/Systems/USDSystemID/ -d '{"Boot": {"BootSourceOverrideTarget": "cd", "BootSourceOverrideEnabled": "Once"}}' Mount virtual media curl -u USDUSER:USDPASS -X PATCH -H "Content-Type: application/json" -H "If-Match: *" https://USDServer/redfish/v1/Managers/USDManagerID/VirtualMedia/USDVmediaId -d '{"Image": "https://example.com/test.iso", "TransferProtocolType": "HTTPS", "UserName": "", "Password":""}' Note The PowerOn and PowerOff commands for redfish APIs are the same for the redfish-virtualmedia APIs. Important HTTPS and HTTP are the only supported parameter types for TransferProtocolTypes . 3.10.4. BMC addressing for Dell iDRAC The address field for each bmc entry is a URL for connecting to the OpenShift Container Platform cluster nodes, including the type of controller in the URL scheme and its location on the network. platform: baremetal: hosts: - name: <hostname> role: <master | worker> bmc: address: <address> 1 username: <user> password: <password> 1 The address configuration setting specifies the protocol. For Dell hardware, Red Hat supports integrated Dell Remote Access Controller (iDRAC) virtual media, Redfish network boot, and IPMI. BMC address formats for Dell iDRAC Protocol Address Format iDRAC virtual media idrac-virtualmedia://<out-of-band-ip>/redfish/v1/Systems/System.Embedded.1 Redfish network boot redfish://<out-of-band-ip>/redfish/v1/Systems/System.Embedded.1 IPMI ipmi://<out-of-band-ip> Important Use idrac-virtualmedia as the protocol for Redfish virtual media. redfish-virtualmedia will not work on Dell hardware. Dell's idrac-virtualmedia uses the Redfish standard with Dell's OEM extensions. See the following sections for additional details. Redfish virtual media for Dell iDRAC For Redfish virtual media on Dell servers, use idrac-virtualmedia:// in the address setting. Using redfish-virtualmedia:// will not work. Note Use idrac-virtualmedia:// as the protocol for Redfish virtual media. Using redfish-virtualmedia:// will not work on Dell hardware, because the idrac-virtualmedia:// protocol corresponds to the idrac hardware type and the Redfish protocol in Ironic. Dell's idrac-virtualmedia:// protocol uses the Redfish standard with Dell's OEM extensions. Ironic also supports the idrac type with the WSMAN protocol. Therefore, you must specify idrac-virtualmedia:// to avoid unexpected behavior when electing to use Redfish with virtual media on Dell hardware. 
The following example demonstrates using iDRAC virtual media within the install-config.yaml file. platform: baremetal: hosts: - name: openshift-master-0 role: master bmc: address: idrac-virtualmedia://<out-of-band-ip>/redfish/v1/Systems/System.Embedded.1 username: <user> password: <password> While it is recommended to have a certificate of authority for the out-of-band management addresses, you must include disableCertificateVerification: True in the bmc configuration if using self-signed certificates. Note Ensure the OpenShift Container Platform cluster nodes have AutoAttach enabled through the iDRAC console. The menu path is: Configuration Virtual Media Attach Mode AutoAttach . The following example demonstrates a Redfish configuration using the disableCertificateVerification: True configuration parameter within the install-config.yaml file. platform: baremetal: hosts: - name: openshift-master-0 role: master bmc: address: idrac-virtualmedia://<out-of-band-ip>/redfish/v1/Systems/System.Embedded.1 username: <user> password: <password> disableCertificateVerification: True Redfish network boot for iDRAC To enable Redfish, use redfish:// or redfish+http:// to disable transport layer security (TLS). The installer requires both the hostname or the IP address and the path to the system ID. The following example demonstrates a Redfish configuration within the install-config.yaml file. platform: baremetal: hosts: - name: openshift-master-0 role: master bmc: address: redfish://<out-of-band-ip>/redfish/v1/Systems/System.Embedded.1 username: <user> password: <password> While it is recommended to have a certificate of authority for the out-of-band management addresses, you must include disableCertificateVerification: True in the bmc configuration if using self-signed certificates. The following example demonstrates a Redfish configuration using the disableCertificateVerification: True configuration parameter within the install-config.yaml file. platform: baremetal: hosts: - name: openshift-master-0 role: master bmc: address: redfish://<out-of-band-ip>/redfish/v1/Systems/System.Embedded.1 username: <user> password: <password> disableCertificateVerification: True Note There is a known issue on Dell iDRAC 9 with firmware version 04.40.00.00 and all releases up to including the 5.xx series for installer-provisioned installations on bare metal deployments. The virtual console plugin defaults to eHTML5, an enhanced version of HTML5, which causes problems with the InsertVirtualMedia workflow. Set the plugin to use HTML5 to avoid this issue. The menu path is Configuration Virtual console Plug-in Type HTML5 . Ensure the OpenShift Container Platform cluster nodes have AutoAttach enabled through the iDRAC console. The menu path is: Configuration Virtual Media Attach Mode AutoAttach . 3.10.5. BMC addressing for HPE iLO The address field for each bmc entry is a URL for connecting to the OpenShift Container Platform cluster nodes, including the type of controller in the URL scheme and its location on the network. platform: baremetal: hosts: - name: <hostname> role: <master | worker> bmc: address: <address> 1 username: <user> password: <password> 1 The address configuration setting specifies the protocol. For HPE integrated Lights Out (iLO), Red Hat supports Redfish virtual media, Redfish network boot, and IPMI. Table 3.4. 
BMC address formats for HPE iLO Protocol Address Format Redfish virtual media redfish-virtualmedia://<out-of-band-ip>/redfish/v1/Systems/1 Redfish network boot redfish://<out-of-band-ip>/redfish/v1/Systems/1 IPMI ipmi://<out-of-band-ip> See the following sections for additional details. Redfish virtual media for HPE iLO To enable Redfish virtual media for HPE servers, use redfish-virtualmedia:// in the address setting. The following example demonstrates using Redfish virtual media within the install-config.yaml file. platform: baremetal: hosts: - name: openshift-master-0 role: master bmc: address: redfish-virtualmedia://<out-of-band-ip>/redfish/v1/Systems/1 username: <user> password: <password> While it is recommended to have a certificate of authority for the out-of-band management addresses, you must include disableCertificateVerification: True in the bmc configuration if using self-signed certificates. The following example demonstrates a Redfish configuration using the disableCertificateVerification: True configuration parameter within the install-config.yaml file. platform: baremetal: hosts: - name: openshift-master-0 role: master bmc: address: redfish-virtualmedia://<out-of-band-ip>/redfish/v1/Systems/1 username: <user> password: <password> disableCertificateVerification: True Note Redfish virtual media is not supported on 9th generation systems running iLO4, because Ironic does not support iLO4 with virtual media. Redfish network boot for HPE iLO To enable Redfish, use redfish:// or redfish+http:// to disable TLS. The installer requires both the hostname or the IP address and the path to the system ID. The following example demonstrates a Redfish configuration within the install-config.yaml file. platform: baremetal: hosts: - name: openshift-master-0 role: master bmc: address: redfish://<out-of-band-ip>/redfish/v1/Systems/1 username: <user> password: <password> While it is recommended to have a certificate of authority for the out-of-band management addresses, you must include disableCertificateVerification: True in the bmc configuration if using self-signed certificates. The following example demonstrates a Redfish configuration using the disableCertificateVerification: True configuration parameter within the install-config.yaml file. platform: baremetal: hosts: - name: openshift-master-0 role: master bmc: address: redfish://<out-of-band-ip>/redfish/v1/Systems/1 username: <user> password: <password> disableCertificateVerification: True 3.10.6. BMC addressing for Fujitsu iRMC The address field for each bmc entry is a URL for connecting to the OpenShift Container Platform cluster nodes, including the type of controller in the URL scheme and its location on the network. platform: baremetal: hosts: - name: <hostname> role: <master | worker> bmc: address: <address> 1 username: <user> password: <password> 1 The address configuration setting specifies the protocol. For Fujitsu hardware, Red Hat supports integrated Remote Management Controller (iRMC) and IPMI. Table 3.5. BMC address formats for Fujitsu iRMC Protocol Address Format iRMC irmc://<out-of-band-ip> IPMI ipmi://<out-of-band-ip> iRMC Fujitsu nodes can use irmc://<out-of-band-ip> and defaults to port 443 . The following example demonstrates an iRMC configuration within the install-config.yaml file. 
platform: baremetal: hosts: - name: openshift-master-0 role: master bmc: address: irmc://<out-of-band-ip> username: <user> password: <password> Note Currently Fujitsu supports iRMC S5 firmware version 3.05P and above for installer-provisioned installation on bare metal. 3.10.7. Root device hints The rootDeviceHints parameter enables the installer to provision the Red Hat Enterprise Linux CoreOS (RHCOS) image to a particular device. The installer examines the devices in the order it discovers them, and compares the discovered values with the hint values. The installer uses the first discovered device that matches the hint value. The configuration can combine multiple hints, but a device must match all hints for the installer to select it. Table 3.6. Subfields Subfield Description deviceName A string containing a Linux device name such as /dev/vda or /dev/disk/by-path/ . It is recommended to use the /dev/disk/by-path/<device_path> link to the storage location. The hint must match the actual value exactly. hctl A string containing a SCSI bus address like 0:0:0:0 . The hint must match the actual value exactly. model A string containing a vendor-specific device identifier. The hint can be a substring of the actual value. vendor A string containing the name of the vendor or manufacturer of the device. The hint can be a sub-string of the actual value. serialNumber A string containing the device serial number. The hint must match the actual value exactly. minSizeGigabytes An integer representing the minimum size of the device in gigabytes. wwn A string containing the unique storage identifier. The hint must match the actual value exactly. wwnWithExtension A string containing the unique storage identifier with the vendor extension appended. The hint must match the actual value exactly. wwnVendorExtension A string containing the unique vendor storage identifier. The hint must match the actual value exactly. rotational A boolean indicating whether the device should be a rotating disk (true) or not (false). Example usage - name: master-0 role: master bmc: address: ipmi://10.10.0.3:6203 username: admin password: redhat bootMACAddress: de:ad:be:ef:00:40 rootDeviceHints: deviceName: "/dev/sda" 3.10.8. Optional: Setting proxy settings To deploy an OpenShift Container Platform cluster using a proxy, make the following changes to the install-config.yaml file. apiVersion: v1 baseDomain: <domain> proxy: httpProxy: http://USERNAME:[email protected]:PORT httpsProxy: https://USERNAME:[email protected]:PORT noProxy: <WILDCARD_OF_DOMAIN>,<PROVISIONING_NETWORK/CIDR>,<BMC_ADDRESS_RANGE/CIDR> The following is an example of noProxy with values. noProxy: .example.com,172.22.0.0/24,10.10.0.0/24 With a proxy enabled, set the appropriate values of the proxy in the corresponding key/value pair. Key considerations: If the proxy does not have an HTTPS proxy, change the value of httpsProxy from https:// to http:// . If using a provisioning network, include it in the noProxy setting, otherwise the installer will fail. Set all of the proxy settings as environment variables within the provisioner node. For example, HTTP_PROXY , HTTPS_PROXY , and NO_PROXY . Note When provisioning with IPv6, you cannot define a CIDR address block in the noProxy settings. You must define each address separately. 3.10.9. Optional: Deploying with no provisioning network To deploy an OpenShift Container Platform cluster without a provisioning network, make the following changes to the install-config.yaml file. 
platform: baremetal: apiVIPs: - <api_VIP> ingressVIPs: - <ingress_VIP> provisioningNetwork: "Disabled" 1 1 Add the provisioningNetwork configuration setting, if needed, and set it to Disabled . Important The provisioning network is required for PXE booting. If you deploy without a provisioning network, you must use a virtual media BMC addressing option such as redfish-virtualmedia or idrac-virtualmedia . See "Redfish virtual media for HPE iLO" in the "BMC addressing for HPE iLO" section or "Redfish virtual media for Dell iDRAC" in the "BMC addressing for Dell iDRAC" section for additional details. 3.10.10. Optional: Deploying with dual-stack networking For dual-stack networking in OpenShift Container Platform clusters, you can configure IPv4 and IPv6 address endpoints for cluster nodes. To configure IPv4 and IPv6 address endpoints for cluster nodes, edit the machineNetwork , clusterNetwork , and serviceNetwork configuration settings in the install-config.yaml file. Each setting must have two CIDR entries each. For a cluster with the IPv4 family as the primary address family, specify the IPv4 setting first. For a cluster with the IPv6 family as the primary address family, specify the IPv6 setting first. machineNetwork: - cidr: {{ extcidrnet }} - cidr: {{ extcidrnet6 }} clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23 - cidr: fd02::/48 hostPrefix: 64 serviceNetwork: - 172.30.0.0/16 - fd03::/112 Important On a bare-metal platform, if you specified an NMState configuration in the networkConfig section of your install-config.yaml file, add interfaces.wait-ip: ipv4+ipv6 to the NMState YAML file to resolve an issue that prevents your cluster from deploying on a dual-stack network. Example NMState YAML configuration file that includes the wait-ip parameter networkConfig: nmstate: interfaces: - name: <interface_name> # ... wait-ip: ipv4+ipv6 # ... To provide an interface to the cluster for applications that use IPv4 and IPv6 addresses, configure IPv4 and IPv6 virtual IP (VIP) address endpoints for the Ingress VIP and API VIP services. To configure IPv4 and IPv6 address endpoints, edit the apiVIPs and ingressVIPs configuration settings in the install-config.yaml file . The apiVIPs and ingressVIPs configuration settings use a list format. The order of the list indicates the primary and secondary VIP address for each service. platform: baremetal: apiVIPs: - <api_ipv4> - <api_ipv6> ingressVIPs: - <wildcard_ipv4> - <wildcard_ipv6> Note For a cluster with dual-stack networking configuration, you must assign both IPv4 and IPv6 addresses to the same interface. 3.10.11. Optional: Configuring host network interfaces Before installation, you can set the networkConfig configuration setting in the install-config.yaml file to configure host network interfaces using NMState. The most common use case for this functionality is to specify a static IP address on the bare-metal network, but you can also configure other networks such as a storage network. This functionality supports other NMState features such as VLAN, VXLAN, bridges, bonds, routes, MTU, and DNS resolver settings. Prerequisites Configure a PTR DNS record with a valid hostname for each node with a static IP address. Install the NMState CLI ( nmstate ). Procedure Optional: Consider testing the NMState syntax with nmstatectl gc before including it in the install-config.yaml file, because the installer will not check the NMState YAML syntax. Note Errors in the YAML syntax might result in a failure to apply the network configuration. 
Additionally, maintaining the validated YAML syntax is useful when applying changes using Kubernetes NMState after deployment or when expanding the cluster. Create an NMState YAML file: interfaces: - name: <nic1_name> 1 type: ethernet state: up ipv4: address: - ip: <ip_address> 2 prefix-length: 24 enabled: true dns-resolver: config: server: - <dns_ip_address> 3 routes: config: - destination: 0.0.0.0/0 next-hop-address: <next_hop_ip_address> 4 next-hop-interface: <next_hop_nic1_name> 5 1 2 3 4 5 Replace <nic1_name> , <ip_address> , <dns_ip_address> , <next_hop_ip_address> and <next_hop_nic1_name> with appropriate values. Test the configuration file by running the following command: USD nmstatectl gc <nmstate_yaml_file> Replace <nmstate_yaml_file> with the configuration file name. Use the networkConfig configuration setting by adding the NMState configuration to hosts within the install-config.yaml file: hosts: - name: openshift-master-0 role: master bmc: address: redfish+http://<out_of_band_ip>/redfish/v1/Systems/ username: <user> password: <password> disableCertificateVerification: null bootMACAddress: <NIC1_mac_address> bootMode: UEFI rootDeviceHints: deviceName: "/dev/sda" networkConfig: 1 interfaces: - name: <nic1_name> 2 type: ethernet state: up ipv4: address: - ip: <ip_address> 3 prefix-length: 24 enabled: true dns-resolver: config: server: - <dns_ip_address> 4 routes: config: - destination: 0.0.0.0/0 next-hop-address: <next_hop_ip_address> 5 next-hop-interface: <next_hop_nic1_name> 6 1 Add the NMState YAML syntax to configure the host interfaces. 2 3 4 5 6 Replace <nic1_name> , <ip_address> , <dns_ip_address> , <next_hop_ip_address> and <next_hop_nic1_name> with appropriate values. Important After deploying the cluster, you cannot modify the networkConfig configuration setting of the install-config.yaml file to make changes to the host network interface. Use the Kubernetes NMState Operator to make changes to the host network interface after deployment. 3.10.12. Configuring host network interfaces for subnets For edge computing scenarios, it can be beneficial to locate compute nodes closer to the edge. To locate remote nodes in subnets, you might use different network segments or subnets for the remote nodes than you used for the control plane subnet and local compute nodes. You can reduce latency for the edge and allow for enhanced scalability by setting up subnets for edge computing scenarios. Important When using the default load balancer, OpenShiftManagedDefault , and adding remote nodes to your OpenShift Container Platform cluster, all control plane nodes must run in the same subnet. When using more than one subnet, you can also configure the Ingress VIP to run on the control plane nodes by using a manifest. See "Configuring network components to run on the control plane" for details. If you have established different network segments or subnets for remote nodes as described in the section on "Establishing communication between subnets", you must specify the subnets in the machineNetwork configuration setting if the workers are using static IP addresses, bonds or other advanced networking. When setting the node IP address in the networkConfig parameter for each remote node, you must also specify the gateway and the DNS server for the subnet containing the control plane nodes when using static IP addresses. This ensures that the remote nodes can reach the subnet containing the control plane and that they can receive network traffic from the control plane.
Note Deploying a cluster with multiple subnets requires using virtual media, such as redfish-virtualmedia or idrac-virtualmedia , because remote nodes cannot access the local provisioning network. Procedure Add the subnets to the machineNetwork in the install-config.yaml file when using static IP addresses: networking: machineNetwork: - cidr: 10.0.0.0/24 - cidr: 192.168.0.0/24 networkType: OVNKubernetes Add the gateway and DNS configuration to the networkConfig parameter of each edge compute node using NMState syntax when using a static IP address or advanced networking such as bonds: networkConfig: interfaces: - name: <interface_name> 1 type: ethernet state: up ipv4: enabled: true dhcp: false address: - ip: <node_ip> 2 prefix-length: 24 gateway: <gateway_ip> 3 dns-resolver: config: server: - <dns_ip> 4 1 Replace <interface_name> with the interface name. 2 Replace <node_ip> with the IP address of the node. 3 Replace <gateway_ip> with the IP address of the gateway. 4 Replace <dns_ip> with the IP address of the DNS server. 3.10.13. Optional: Configuring address generation modes for SLAAC in dual-stack networks For dual-stack clusters that use Stateless Address AutoConfiguration (SLAAC), you must specify a global value for the ipv6.addr-gen-mode network setting. You can set this value using NMState to configure the ramdisk and the cluster configuration files. If you don't configure a consistent ipv6.addr-gen-mode in these locations, IPv6 address mismatches can occur between CSR resources and BareMetalHost resources in the cluster. Prerequisites Install the NMState CLI ( nmstate ). Procedure Optional: Consider testing the NMState YAML syntax with the nmstatectl gc command before including it in the install-config.yaml file because the installation program will not check the NMState YAML syntax. Create an NMState YAML file: interfaces: - name: eth0 ipv6: addr-gen-mode: <address_mode> 1 1 Replace <address_mode> with the type of address generation mode required for IPv6 addresses in the cluster. Valid values are eui64 , stable-privacy , or random . Test the configuration file by running the following command: USD nmstatectl gc <nmstate_yaml_file> 1 1 Replace <nmstate_yaml_file> with the name of the test configuration file. Add the NMState configuration to the hosts.networkConfig section within the install-config.yaml file: hosts: - name: openshift-master-0 role: master bmc: address: redfish+http://<out_of_band_ip>/redfish/v1/Systems/ username: <user> password: <password> disableCertificateVerification: null bootMACAddress: <NIC1_mac_address> bootMode: UEFI rootDeviceHints: deviceName: "/dev/sda" networkConfig: interfaces: - name: eth0 ipv6: addr-gen-mode: <address_mode> 1 ... 1 Replace <address_mode> with the type of address generation mode required for IPv6 addresses in the cluster. Valid values are eui64 , stable-privacy , or random . 3.10.14. Optional: Configuring host network interfaces for dual port NIC Before installation, you can set the networkConfig configuration setting in the install-config.yaml file to configure host network interfaces by using NMState to support dual port NIC. Important Support for Day 1 operations associated with enabling NIC partitioning for SR-IOV devices is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. 
These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope . OpenShift Virtualization only supports the following bond modes: mode=1 active-backup mode=2 balance-xor mode=4 802.3ad Prerequisites Configure a PTR DNS record with a valid hostname for each node with a static IP address. Install the NMState CLI ( nmstate ). Note Errors in the YAML syntax might result in a failure to apply the network configuration. Additionally, maintaining the validated YAML syntax is useful when applying changes by using Kubernetes NMState after deployment or when expanding the cluster. Procedure Add the NMState configuration to the networkConfig field of hosts within the install-config.yaml file: hosts: - name: worker-0 role: worker bmc: address: redfish+http://<out_of_band_ip>/redfish/v1/Systems/ username: <user> password: <password> disableCertificateVerification: false bootMACAddress: <NIC1_mac_address> bootMode: UEFI networkConfig: 1 interfaces: 2 - name: eno1 3 type: ethernet 4 state: up mac-address: 0c:42:a1:55:f3:06 ipv4: enabled: true dhcp: false 5 ethernet: sr-iov: total-vfs: 2 6 ipv6: enabled: false dhcp: false - name: sriov:eno1:0 type: ethernet state: up 7 ipv4: enabled: false 8 ipv6: enabled: false - name: sriov:eno1:1 type: ethernet state: down - name: eno2 type: ethernet state: up mac-address: 0c:42:a1:55:f3:07 ipv4: enabled: true ethernet: sr-iov: total-vfs: 2 ipv6: enabled: false - name: sriov:eno2:0 type: ethernet state: up ipv4: enabled: false ipv6: enabled: false - name: sriov:eno2:1 type: ethernet state: down - name: bond0 type: bond state: up min-tx-rate: 100 9 max-tx-rate: 200 10 link-aggregation: mode: active-backup 11 options: primary: sriov:eno1:0 12 port: - sriov:eno1:0 - sriov:eno2:0 ipv4: address: - ip: 10.19.16.57 13 prefix-length: 23 dhcp: false enabled: true ipv6: enabled: false dns-resolver: config: server: - 10.11.5.160 - 10.2.70.215 routes: config: - destination: 0.0.0.0/0 next-hop-address: 10.19.17.254 next-hop-interface: bond0 14 table-id: 254 1 The networkConfig field has information about the network configuration of the host, with subfields including interfaces , dns-resolver , and routes . 2 The interfaces field is an array of network interfaces defined for the host. 3 The name of the interface. 4 The type of interface. This example creates an ethernet interface. 5 Set this to false to disable DHCP for the physical function (PF) if it is not strictly required. 6 Set to the number of SR-IOV virtual functions (VFs) to instantiate. 7 Set this to up . 8 Set this to false to disable IPv4 addressing for the VF attached to the bond. 9 Sets a minimum transmission rate, in Mbps, for the VF. This sample value sets a rate of 100 Mbps. This value must be less than or equal to the maximum transmission rate. Intel NICs do not support the min-tx-rate parameter. For more information, see BZ#1772847 . 10 Sets a maximum transmission rate, in Mbps, for the VF. This sample value sets a rate of 200 Mbps. 11 Sets the desired bond mode. 12 Sets the preferred port of the bonding interface. The bond uses the primary device as the first device of the bonding interfaces. The bond does not abandon the primary device interface unless it fails. This setting is particularly useful when one NIC in the bonding interface is faster and, therefore, able to handle a bigger load.
This setting is only valid when the bonding interface is in active-backup mode (mode 1) and balance-tlb (mode 5). 13 Sets a static IP address for the bond interface. This is the node IP address. 14 Sets bond0 as the gateway for the default route. Important After deploying the cluster, you cannot change the networkConfig configuration setting of the install-config.yaml file to make changes to the host network interface. Use the Kubernetes NMState Operator to make changes to the host network interface after deployment. Additional resources Configuring network bonding 3.10.15. Configuring multiple cluster nodes You can simultaneously configure OpenShift Container Platform cluster nodes with identical settings. Configuring multiple cluster nodes avoids adding redundant information for each node to the install-config.yaml file. This file contains specific parameters to apply an identical configuration to multiple nodes in the cluster. Compute nodes are configured separately from the controller node. However, configurations for both node types use the highlighted parameters in the install-config.yaml file to enable multi-node configuration. Set the networkConfig parameters to BOND , as shown in the following example: hosts: - name: ostest-master-0 [...] networkConfig: &BOND interfaces: - name: bond0 type: bond state: up ipv4: dhcp: true enabled: true link-aggregation: mode: active-backup port: - enp2s0 - enp3s0 - name: ostest-master-1 [...] networkConfig: *BOND - name: ostest-master-2 [...] networkConfig: *BOND Note Configuration of multiple cluster nodes is only available for initial deployments on installer-provisioned infrastructure. 3.10.16. Optional: Configuring managed Secure Boot You can enable managed Secure Boot when deploying an installer-provisioned cluster using Redfish BMC addressing, such as redfish , redfish-virtualmedia , or idrac-virtualmedia . To enable managed Secure Boot, add the bootMode configuration setting to each node: Example hosts: - name: openshift-master-0 role: master bmc: address: redfish://<out_of_band_ip> 1 username: <username> password: <password> bootMACAddress: <NIC1_mac_address> rootDeviceHints: deviceName: "/dev/sda" bootMode: UEFISecureBoot 2 1 Ensure the bmc.address setting uses redfish , redfish-virtualmedia , or idrac-virtualmedia as the protocol. See "BMC addressing for HPE iLO" or "BMC addressing for Dell iDRAC" for additional details. 2 The bootMode setting is UEFI by default. Change it to UEFISecureBoot to enable managed Secure Boot. Note See "Configuring nodes" in the "Prerequisites" to ensure the nodes can support managed Secure Boot. If the nodes do not support managed Secure Boot, see "Configuring nodes for Secure Boot manually" in the "Configuring nodes" section. Configuring Secure Boot manually requires Redfish virtual media. Note Red Hat does not support Secure Boot with IPMI, because IPMI does not provide Secure Boot management facilities. 3.11. Manifest configuration files 3.11.1. Creating the OpenShift Container Platform manifests Create the OpenShift Container Platform manifests. USD ./openshift-baremetal-install --dir ~/clusterconfigs create manifests INFO Consuming Install Config from target directory WARNING Making control-plane schedulable by setting MastersSchedulable to true for Scheduler cluster settings WARNING Discarding the OpenShift Manifest that was provided in the target directory because its dependencies are dirty and it needs to be regenerated 3.11.2. 
Optional: Configuring NTP for disconnected clusters OpenShift Container Platform installs the chrony Network Time Protocol (NTP) service on the cluster nodes. OpenShift Container Platform nodes must agree on a date and time to run properly. When worker nodes retrieve the date and time from the NTP servers on the control plane nodes, it enables the installation and operation of clusters that are not connected to a routable network and thereby do not have access to a higher stratum NTP server. Procedure Install Butane on your installation host by using the following command: USD sudo dnf -y install butane Create a Butane config, 99-master-chrony-conf-override.bu , including the contents of the chrony.conf file for the control plane nodes. Note See "Creating machine configs with Butane" for information about Butane. Butane config example variant: openshift version: 4.14.0 metadata: name: 99-master-chrony-conf-override labels: machineconfiguration.openshift.io/role: master storage: files: - path: /etc/chrony.conf mode: 0644 overwrite: true contents: inline: | # Use public servers from the pool.ntp.org project. # Please consider joining the pool (https://www.pool.ntp.org/join.html). # The Machine Config Operator manages this file server openshift-master-0.<cluster-name>.<domain> iburst 1 server openshift-master-1.<cluster-name>.<domain> iburst server openshift-master-2.<cluster-name>.<domain> iburst stratumweight 0 driftfile /var/lib/chrony/drift rtcsync makestep 10 3 bindcmdaddress 127.0.0.1 bindcmdaddress ::1 keyfile /etc/chrony.keys commandkey 1 generatecommandkey noclientlog logchange 0.5 logdir /var/log/chrony # Configure the control plane nodes to serve as local NTP servers # for all worker nodes, even if they are not in sync with an # upstream NTP server. # Allow NTP client access from the local network. allow all # Serve time even if not synchronized to a time source. local stratum 3 orphan 1 You must replace <cluster-name> with the name of the cluster and replace <domain> with the fully qualified domain name. Use Butane to generate a MachineConfig object file, 99-master-chrony-conf-override.yaml , containing the configuration to be delivered to the control plane nodes: USD butane 99-master-chrony-conf-override.bu -o 99-master-chrony-conf-override.yaml Create a Butane config, 99-worker-chrony-conf-override.bu , including the contents of the chrony.conf file for the worker nodes that references the NTP servers on the control plane nodes. Butane config example variant: openshift version: 4.14.0 metadata: name: 99-worker-chrony-conf-override labels: machineconfiguration.openshift.io/role: worker storage: files: - path: /etc/chrony.conf mode: 0644 overwrite: true contents: inline: | # The Machine Config Operator manages this file. server openshift-master-0.<cluster-name>.<domain> iburst 1 server openshift-master-1.<cluster-name>.<domain> iburst server openshift-master-2.<cluster-name>.<domain> iburst stratumweight 0 driftfile /var/lib/chrony/drift rtcsync makestep 10 3 bindcmdaddress 127.0.0.1 bindcmdaddress ::1 keyfile /etc/chrony.keys commandkey 1 generatecommandkey noclientlog logchange 0.5 logdir /var/log/chrony 1 You must replace <cluster-name> with the name of the cluster and replace <domain> with the fully qualified domain name. Use Butane to generate a MachineConfig object file, 99-worker-chrony-conf-override.yaml , containing the configuration to be delivered to the worker nodes: USD butane 99-worker-chrony-conf-override.bu -o 99-worker-chrony-conf-override.yaml 3.11.3. 
Configuring network components to run on the control plane You can configure networking components to run exclusively on the control plane nodes. By default, OpenShift Container Platform allows any node in the machine config pool to host the ingressVIP virtual IP address. However, some environments deploy worker nodes in separate subnets from the control plane nodes, which requires configuring the ingressVIP virtual IP address to run on the control plane nodes. Important When deploying remote workers in separate subnets, you must place the ingressVIP virtual IP address exclusively with the control plane nodes. Procedure Change to the directory storing the install-config.yaml file: USD cd ~/clusterconfigs Switch to the manifests subdirectory: USD cd manifests Create a file named cluster-network-avoid-workers-99-config.yaml : USD touch cluster-network-avoid-workers-99-config.yaml Open the cluster-network-avoid-workers-99-config.yaml file in an editor and enter a custom resource (CR) that describes the Operator configuration: apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfig metadata: name: 50-worker-fix-ipi-rwn labels: machineconfiguration.openshift.io/role: worker spec: config: ignition: version: 3.2.0 storage: files: - path: /etc/kubernetes/manifests/keepalived.yaml mode: 0644 contents: source: data:, This manifest places the ingressVIP virtual IP address on the control plane nodes. Additionally, this manifest deploys the following processes on the control plane nodes only: openshift-ingress-operator keepalived Save the cluster-network-avoid-workers-99-config.yaml file. Create a manifests/cluster-ingress-default-ingresscontroller.yaml file: apiVersion: operator.openshift.io/v1 kind: IngressController metadata: name: default namespace: openshift-ingress-operator spec: nodePlacement: nodeSelector: matchLabels: node-role.kubernetes.io/master: "" Consider backing up the manifests directory. The installer deletes the manifests/ directory when creating the cluster. Modify the cluster-scheduler-02-config.yml manifest to make the control plane nodes schedulable by setting the mastersSchedulable field to true . Control plane nodes are not schedulable by default. For example: Note If control plane nodes are not schedulable after completing this procedure, deploying the cluster will fail. 3.11.4. Optional: Deploying routers on worker nodes During installation, the installer deploys router pods on worker nodes. By default, the installer installs two router pods. If a deployed cluster requires additional routers to handle external traffic loads destined for services within the OpenShift Container Platform cluster, you can create a yaml file to set an appropriate number of router replicas. Important Deploying a cluster with only one worker node is not supported. While modifying the router replicas will address issues with the degraded state when deploying with one worker, the cluster loses high availability for the ingress API, which is not suitable for production environments. Note By default, the installer deploys two routers. If the cluster has no worker nodes, the installer deploys the two routers on the control plane nodes by default. 
Procedure Create a router-replicas.yaml file: apiVersion: operator.openshift.io/v1 kind: IngressController metadata: name: default namespace: openshift-ingress-operator spec: replicas: <num-of-router-pods> endpointPublishingStrategy: type: HostNetwork nodePlacement: nodeSelector: matchLabels: node-role.kubernetes.io/worker: "" Note Replace <num-of-router-pods> with an appropriate value. If working with just one worker node, set replicas: to 1 . If working with more than 3 worker nodes, you can increase replicas: from the default value 2 as appropriate. Save and copy the router-replicas.yaml file to the clusterconfigs/openshift directory: USD cp ~/router-replicas.yaml clusterconfigs/openshift/99_router-replicas.yaml 3.11.5. Optional: Configuring the BIOS The following procedure configures the BIOS during the installation process. Procedure Create the manifests. Modify the BareMetalHost resource file corresponding to the node: USD vim clusterconfigs/openshift/99_openshift-cluster-api_hosts-*.yaml Add the BIOS configuration to the spec section of the BareMetalHost resource: spec: firmware: simultaneousMultithreadingEnabled: true sriovEnabled: true virtualizationEnabled: true Note Red Hat supports three BIOS configurations. Only servers with BMC type irmc are supported. Other types of servers are currently not supported. Create the cluster. Additional resources Bare metal configuration 3.11.6. Optional: Configuring the RAID The following procedure configures a redundant array of independent disks (RAID) during the installation process. Note OpenShift Container Platform supports hardware RAID for baseboard management controllers (BMCs) using the iRMC protocol only. OpenShift Container Platform 4.14 does not support software RAID. If you want to configure a hardware RAID for the node, verify that the node has a RAID controller. Procedure Create the manifests. Modify the BareMetalHost resource corresponding to the node: USD vim clusterconfigs/openshift/99_openshift-cluster-api_hosts-*.yaml Note The following example uses a hardware RAID configuration because OpenShift Container Platform 4.14 does not support software RAID. If you added a specific RAID configuration to the spec section, this causes the node to delete the original RAID configuration in the preparing phase and perform a specified configuration on the RAID. For example: spec: raid: hardwareRAIDVolumes: - level: "0" 1 name: "sda" numberOfPhysicalDisks: 1 rotational: true sizeGibibytes: 0 1 level is a required field, and the others are optional fields. If you added an empty RAID configuration to the spec section, the empty configuration causes the node to delete the original RAID configuration during the preparing phase, but does not perform a new configuration. For example: spec: raid: hardwareRAIDVolumes: [] If you do not add a raid field in the spec section, the original RAID configuration is not deleted, and no new configuration will be performed. Create the cluster. 3.11.7. Optional: Configuring storage on nodes You can make changes to operating systems on OpenShift Container Platform nodes by creating MachineConfig objects that are managed by the Machine Config Operator (MCO). The MachineConfig specification includes an ignition config for configuring the machines at first boot. This config object can be used to modify files, systemd services, and other operating system features running on OpenShift Container Platform machines. Procedure Use the ignition config to configure storage on nodes. 
The following MachineSet manifest example demonstrates how to add a partition to a device on a primary node. In this example, apply the manifest before installation to have a partition named recovery with a size of 16 GiB on the primary node. Create a custom-partitions.yaml file and include a MachineConfig object that contains your partition layout: apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfig metadata: labels: machineconfiguration.openshift.io/role: primary name: 10_primary_storage_config spec: config: ignition: version: 3.2.0 storage: disks: - device: </dev/xxyN> partitions: - label: recovery startMiB: 32768 sizeMiB: 16384 filesystems: - device: /dev/disk/by-partlabel/recovery label: recovery format: xfs Save and copy the custom-partitions.yaml file to the clusterconfigs/openshift directory: USD cp ~/<MachineConfig_manifest> ~/clusterconfigs/openshift Additional resources Bare metal configuration Partition naming scheme 3.12. Creating a disconnected registry In some cases, you might want to install an OpenShift Container Platform cluster using a local copy of the installation registry. This could be for enhancing network efficiency because the cluster nodes are on a network that does not have access to the internet. A local, or mirrored, copy of the registry requires the following: A certificate for the registry node. This can be a self-signed certificate. A web server that a container on a system will serve. An updated pull secret that contains the certificate and local repository information. Note Creating a disconnected registry on a registry node is optional. If you need to create a disconnected registry on a registry node, you must complete all of the following sub-sections. Prerequisites If you have already prepared a mirror registry for Mirroring images for a disconnected installation , you can skip directly to Modify the install-config.yaml file to use the disconnected registry . 3.12.1. Preparing the registry node to host the mirrored registry The following steps must be completed prior to hosting a mirrored registry on bare metal. Procedure Open the firewall port on the registry node: USD sudo firewall-cmd --add-port=5000/tcp --zone=libvirt --permanent USD sudo firewall-cmd --add-port=5000/tcp --zone=public --permanent USD sudo firewall-cmd --reload Install the required packages for the registry node: USD sudo yum -y install python3 podman httpd httpd-tools jq Create the directory structure where the repository information will be held: USD sudo mkdir -p /opt/registry/{auth,certs,data} 3.12.2. Mirroring the OpenShift Container Platform image repository for a disconnected registry Complete the following steps to mirror the OpenShift Container Platform image repository for a disconnected registry. Prerequisites Your mirror host has access to the internet. You configured a mirror registry to use in your restricted network and can access the certificate and credentials that you configured. You downloaded the pull secret from Red Hat OpenShift Cluster Manager and modified it to include authentication to your mirror repository. Procedure Review the OpenShift Container Platform downloads page to determine the version of OpenShift Container Platform that you want to install and determine the corresponding tag on the Repository Tags page. 
Set the required environment variables: Export the release version: USD OCP_RELEASE=<release_version> For <release_version> , specify the tag that corresponds to the version of OpenShift Container Platform to install, such as 4.5.4 . Export the local registry name and host port: USD LOCAL_REGISTRY='<local_registry_host_name>:<local_registry_host_port>' For <local_registry_host_name> , specify the registry domain name for your mirror repository, and for <local_registry_host_port> , specify the port that it serves content on. Export the local repository name: USD LOCAL_REPOSITORY='<local_repository_name>' For <local_repository_name> , specify the name of the repository to create in your registry, such as ocp4/openshift4 . Export the name of the repository to mirror: USD PRODUCT_REPO='openshift-release-dev' For a production release, you must specify openshift-release-dev . Export the path to your registry pull secret: USD LOCAL_SECRET_JSON='<path_to_pull_secret>' For <path_to_pull_secret> , specify the absolute path to and file name of the pull secret for your mirror registry that you created. Export the release mirror: USD RELEASE_NAME="ocp-release" For a production release, you must specify ocp-release . Export the type of architecture for your cluster: USD ARCHITECTURE=<cluster_architecture> 1 1 Specify the architecture of the cluster, such as x86_64 , aarch64 , s390x , or ppc64le . Export the path to the directory to host the mirrored images: USD REMOVABLE_MEDIA_PATH=<path> 1 1 Specify the full path, including the initial forward slash (/) character. Mirror the version images to the mirror registry: If your mirror host does not have internet access, take the following actions: Connect the removable media to a system that is connected to the internet. Review the images and configuration manifests to mirror: USD oc adm release mirror -a USD{LOCAL_SECRET_JSON} \ --from=quay.io/USD{PRODUCT_REPO}/USD{RELEASE_NAME}:USD{OCP_RELEASE}-USD{ARCHITECTURE} \ --to=USD{LOCAL_REGISTRY}/USD{LOCAL_REPOSITORY} \ --to-release-image=USD{LOCAL_REGISTRY}/USD{LOCAL_REPOSITORY}:USD{OCP_RELEASE}-USD{ARCHITECTURE} --dry-run Record the entire imageContentSources section from the output of the command. The information about your mirrors is unique to your mirrored repository, and you must add the imageContentSources section to the install-config.yaml file during installation. Mirror the images to a directory on the removable media: USD oc adm release mirror -a USD{LOCAL_SECRET_JSON} --to-dir=USD{REMOVABLE_MEDIA_PATH}/mirror quay.io/USD{PRODUCT_REPO}/USD{RELEASE_NAME}:USD{OCP_RELEASE}-USD{ARCHITECTURE} Take the media to the restricted network environment and upload the images to the local container registry. USD oc image mirror -a USD{LOCAL_SECRET_JSON} --from-dir=USD{REMOVABLE_MEDIA_PATH}/mirror "file://openshift/release:USD{OCP_RELEASE}*" USD{LOCAL_REGISTRY}/USD{LOCAL_REPOSITORY} 1 1 For REMOVABLE_MEDIA_PATH , you must use the same path that you specified when you mirrored the images. 
If the local container registry is connected to the mirror host, take the following actions: Directly push the release images to the local registry by using following command: USD oc adm release mirror -a USD{LOCAL_SECRET_JSON} \ --from=quay.io/USD{PRODUCT_REPO}/USD{RELEASE_NAME}:USD{OCP_RELEASE}-USD{ARCHITECTURE} \ --to=USD{LOCAL_REGISTRY}/USD{LOCAL_REPOSITORY} \ --to-release-image=USD{LOCAL_REGISTRY}/USD{LOCAL_REPOSITORY}:USD{OCP_RELEASE}-USD{ARCHITECTURE} This command pulls the release information as a digest, and its output includes the imageContentSources data that you require when you install your cluster. Record the entire imageContentSources section from the output of the command. The information about your mirrors is unique to your mirrored repository, and you must add the imageContentSources section to the install-config.yaml file during installation. Note The image name gets patched to Quay.io during the mirroring process, and the podman images will show Quay.io in the registry on the bootstrap virtual machine. To create the installation program that is based on the content that you mirrored, extract it and pin it to the release: If your mirror host does not have internet access, run the following command: USD oc adm release extract -a USD{LOCAL_SECRET_JSON} --command=openshift-baremetal-install "USD{LOCAL_REGISTRY}/USD{LOCAL_REPOSITORY}:USD{OCP_RELEASE}" If the local container registry is connected to the mirror host, run the following command: USD oc adm release extract -a USD{LOCAL_SECRET_JSON} --command=openshift-baremetal-install "USD{LOCAL_REGISTRY}/USD{LOCAL_REPOSITORY}:USD{OCP_RELEASE}-USD{ARCHITECTURE}" Important To ensure that you use the correct images for the version of OpenShift Container Platform that you selected, you must extract the installation program from the mirrored content. You must perform this step on a machine with an active internet connection. If you are in a disconnected environment, use the --image flag as part of must-gather and point to the payload image. For clusters using installer-provisioned infrastructure, run the following command: USD openshift-baremetal-install 3.12.3. Modify the install-config.yaml file to use the disconnected registry On the provisioner node, the install-config.yaml file should use the newly created pull-secret from the pull-secret-update.txt file. The install-config.yaml file must also contain the disconnected registry node's certificate and registry information. Procedure Add the disconnected registry node's certificate to the install-config.yaml file: USD echo "additionalTrustBundle: |" >> install-config.yaml The certificate should follow the "additionalTrustBundle: |" line and be properly indented, usually by two spaces. USD sed -e 's/^/ /' /opt/registry/certs/domain.crt >> install-config.yaml Add the mirror information for the registry to the install-config.yaml file: USD echo "imageContentSources:" >> install-config.yaml USD echo "- mirrors:" >> install-config.yaml USD echo " - registry.example.com:5000/ocp4/openshift4" >> install-config.yaml Replace registry.example.com with the registry's fully qualified domain name. USD echo " source: quay.io/openshift-release-dev/ocp-release" >> install-config.yaml USD echo "- mirrors:" >> install-config.yaml USD echo " - registry.example.com:5000/ocp4/openshift4" >> install-config.yaml Replace registry.example.com with the registry's fully qualified domain name. USD echo " source: quay.io/openshift-release-dev/ocp-v4.0-art-dev" >> install-config.yaml 3.13. 
Validation checklist for installation ❏ OpenShift Container Platform installer has been retrieved. ❏ OpenShift Container Platform installer has been extracted. ❏ Required parameters for the install-config.yaml have been configured. ❏ The hosts parameter for the install-config.yaml has been configured. ❏ The bmc parameter for the install-config.yaml has been configured. ❏ Conventions for the values configured in the bmc address field have been applied. ❏ Created the OpenShift Container Platform manifests. ❏ (Optional) Deployed routers on worker nodes. ❏ (Optional) Created a disconnected registry. ❏ (Optional) Validate disconnected registry settings if in use. | [
"useradd kni",
"passwd kni",
"echo \"kni ALL=(root) NOPASSWD:ALL\" | tee -a /etc/sudoers.d/kni",
"chmod 0440 /etc/sudoers.d/kni",
"su - kni -c \"ssh-keygen -t ed25519 -f /home/kni/.ssh/id_rsa -N ''\"",
"su - kni",
"sudo subscription-manager register --username=<user> --password=<pass> --auto-attach",
"sudo subscription-manager repos --enable=rhel-9-for-<architecture>-appstream-rpms --enable=rhel-9-for-<architecture>-baseos-rpms",
"sudo dnf install -y libvirt qemu-kvm mkisofs python3-devel jq ipmitool",
"sudo usermod --append --groups libvirt <user>",
"sudo systemctl start firewalld",
"sudo firewall-cmd --zone=public --add-service=http --permanent",
"sudo firewall-cmd --reload",
"sudo systemctl enable libvirtd --now",
"sudo virsh pool-define-as --name default --type dir --target /var/lib/libvirt/images",
"sudo virsh pool-start default",
"sudo virsh pool-autostart default",
"vim pull-secret.txt",
"chronyc sources",
"MS Name/IP address Stratum Poll Reach LastRx Last sample =============================================================================== ^+ time.cloudflare.com 3 10 377 187 -209us[ -209us] +/- 32ms ^+ t1.time.ir2.yahoo.com 2 10 377 185 -4382us[-4382us] +/- 23ms ^+ time.cloudflare.com 3 10 377 198 -996us[-1220us] +/- 33ms ^* brenbox.westnet.ie 1 10 377 193 -9538us[-9761us] +/- 24ms",
"ping time.cloudflare.com",
"PING time.cloudflare.com (162.159.200.123) 56(84) bytes of data. 64 bytes from time.cloudflare.com (162.159.200.123): icmp_seq=1 ttl=54 time=32.3 ms 64 bytes from time.cloudflare.com (162.159.200.123): icmp_seq=2 ttl=54 time=30.9 ms 64 bytes from time.cloudflare.com (162.159.200.123): icmp_seq=3 ttl=54 time=36.7 ms",
"export PUB_CONN=<baremetal_nic_name>",
"sudo nohup bash -c \" nmcli con down \\\"USDPUB_CONN\\\" nmcli con delete \\\"USDPUB_CONN\\\" # RHEL 8.1 appends the word \\\"System\\\" in front of the connection, delete in case it exists nmcli con down \\\"System USDPUB_CONN\\\" nmcli con delete \\\"System USDPUB_CONN\\\" nmcli connection add ifname baremetal type bridge <con_name> baremetal bridge.stp no 1 nmcli con add type bridge-slave ifname \\\"USDPUB_CONN\\\" master baremetal pkill dhclient;dhclient baremetal \"",
"sudo nohup bash -c \" nmcli con down \\\"USDPUB_CONN\\\" nmcli con delete \\\"USDPUB_CONN\\\" # RHEL 8.1 appends the word \\\"System\\\" in front of the connection, delete in case it exists nmcli con down \\\"System USDPUB_CONN\\\" nmcli con delete \\\"System USDPUB_CONN\\\" nmcli connection add ifname baremetal type bridge con-name baremetal bridge.stp no ipv4.method manual ipv4.addr \"x.x.x.x/yy\" ipv4.gateway \"a.a.a.a\" ipv4.dns \"b.b.b.b\" 1 nmcli con add type bridge-slave ifname \\\"USDPUB_CONN\\\" master baremetal nmcli con up baremetal \"",
"export PROV_CONN=<prov_nic_name>",
"sudo nohup bash -c \" nmcli con down \\\"USDPROV_CONN\\\" nmcli con delete \\\"USDPROV_CONN\\\" nmcli connection add ifname provisioning type bridge con-name provisioning nmcli con add type bridge-slave ifname \\\"USDPROV_CONN\\\" master provisioning nmcli connection modify provisioning ipv6.addresses fd00:1101::1/64 ipv6.method manual nmcli con down provisioning nmcli con up provisioning \"",
"nmcli connection modify provisioning ipv4.addresses 172.22.0.254/24 ipv4.method manual",
"ssh kni@provisioner.<cluster-name>.<domain>",
"sudo nmcli con show",
"NAME UUID TYPE DEVICE baremetal 4d5133a5-8351-4bb9-bfd4-3af264801530 bridge baremetal provisioning 43942805-017f-4d7d-a2c2-7cb3324482ed bridge provisioning virbr0 d9bca40f-eee1-410b-8879-a2d4bb0465e7 bridge virbr0 bridge-slave-eno1 76a8ed50-c7e5-4999-b4f6-6d9014dd0812 ethernet eno1 bridge-slave-eno2 f31c3353-54b7-48de-893a-02d2b34c4736 ethernet eno2",
"sudo su -",
"nmcli dev status",
"nmcli connection modify <interface_name> +ipv4.routes \"192.168.0.0/24 via <gateway>\"",
"nmcli connection modify eth0 +ipv4.routes \"192.168.0.0/24 via 192.168.0.1\"",
"nmcli connection up <interface_name>",
"ip route",
"sudo su -",
"nmcli dev status",
"nmcli connection modify <interface_name> +ipv4.routes \"10.0.0.0/24 via <gateway>\"",
"nmcli connection modify eth0 +ipv4.routes \"10.0.0.0/24 via 10.0.0.1\"",
"nmcli connection up <interface_name>",
"ip route",
"ping <remote_worker_node_ip_address>",
"ping <control_plane_node_ip_address>",
"export VERSION=stable-4.14",
"export RELEASE_ARCH=<architecture>",
"export RELEASE_IMAGE=USD(curl -s https://mirror.openshift.com/pub/openshift-v4/USDRELEASE_ARCH/clients/ocp/USDVERSION/release.txt | grep 'Pull From: quay.io' | awk -F ' ' '{print USD3}')",
"export cmd=openshift-baremetal-install",
"export pullsecret_file=~/pull-secret.txt",
"export extract_dir=USD(pwd)",
"curl -s https://mirror.openshift.com/pub/openshift-v4/clients/ocp/USDVERSION/openshift-client-linux.tar.gz | tar zxvf - oc",
"sudo cp oc /usr/local/bin",
"oc adm release extract --registry-config \"USD{pullsecret_file}\" --command=USDcmd --to \"USD{extract_dir}\" USD{RELEASE_IMAGE}",
"sudo cp openshift-baremetal-install /usr/local/bin",
"sudo dnf install -y podman",
"sudo firewall-cmd --add-port=8080/tcp --zone=public --permanent",
"sudo firewall-cmd --reload",
"mkdir /home/kni/rhcos_image_cache",
"sudo semanage fcontext -a -t httpd_sys_content_t \"/home/kni/rhcos_image_cache(/.*)?\"",
"sudo restorecon -Rv /home/kni/rhcos_image_cache/",
"export RHCOS_QEMU_URI=USD(/usr/local/bin/openshift-baremetal-install coreos print-stream-json | jq -r --arg ARCH \"USD(arch)\" '.architectures[USDARCH].artifacts.qemu.formats[\"qcow2.gz\"].disk.location')",
"export RHCOS_QEMU_NAME=USD{RHCOS_QEMU_URI##*/}",
"export RHCOS_QEMU_UNCOMPRESSED_SHA256=USD(/usr/local/bin/openshift-baremetal-install coreos print-stream-json | jq -r --arg ARCH \"USD(arch)\" '.architectures[USDARCH].artifacts.qemu.formats[\"qcow2.gz\"].disk[\"uncompressed-sha256\"]')",
"curl -L USD{RHCOS_QEMU_URI} -o /home/kni/rhcos_image_cache/USD{RHCOS_QEMU_NAME}",
"ls -Z /home/kni/rhcos_image_cache",
"podman run -d --name rhcos_image_cache \\ 1 -v /home/kni/rhcos_image_cache:/var/www/html -p 8080:8080/tcp registry.access.redhat.com/ubi9/httpd-24",
"export BAREMETAL_IP=USD(ip addr show dev baremetal | awk '/inet /{print USD2}' | cut -d\"/\" -f1)",
"export BOOTSTRAP_OS_IMAGE=\"http://USD{BAREMETAL_IP}:8080/USD{RHCOS_QEMU_NAME}?sha256=USD{RHCOS_QEMU_UNCOMPRESSED_SHA256}\"",
"echo \" bootstrapOSImage=USD{BOOTSTRAP_OS_IMAGE}\"",
"platform: baremetal: bootstrapOSImage: <bootstrap_os_image> 1",
"apiVersion: v1 baseDomain: <domain> metadata: name: <cluster_name> networking: machineNetwork: - cidr: <public_cidr> networkType: OVNKubernetes compute: - name: worker replicas: 2 1 controlPlane: name: master replicas: 3 platform: baremetal: {} platform: baremetal: apiVIPs: - <api_ip> ingressVIPs: - <wildcard_ip> provisioningNetworkCIDR: <CIDR> bootstrapExternalStaticIP: <bootstrap_static_ip_address> 2 bootstrapExternalStaticGateway: <bootstrap_static_gateway> 3 bootstrapExternalStaticDNS: <bootstrap_static_dns> 4 hosts: - name: openshift-master-0 role: master bmc: address: ipmi://<out_of_band_ip> 5 username: <user> password: <password> bootMACAddress: <NIC1_mac_address> rootDeviceHints: deviceName: \"<installation_disk_drive_path>\" 6 - name: <openshift_master_1> role: master bmc: address: ipmi://<out_of_band_ip> username: <user> password: <password> bootMACAddress: <NIC1_mac_address> rootDeviceHints: deviceName: \"<installation_disk_drive_path>\" - name: <openshift_master_2> role: master bmc: address: ipmi://<out_of_band_ip> username: <user> password: <password> bootMACAddress: <NIC1_mac_address> rootDeviceHints: deviceName: \"<installation_disk_drive_path>\" - name: <openshift_worker_0> role: worker bmc: address: ipmi://<out_of_band_ip> username: <user> password: <password> bootMACAddress: <NIC1_mac_address> - name: <openshift_worker_1> role: worker bmc: address: ipmi://<out_of_band_ip> username: <user> password: <password> bootMACAddress: <NIC1_mac_address> rootDeviceHints: deviceName: \"<installation_disk_drive_path>\" pullSecret: '<pull_secret>' sshKey: '<ssh_pub_key>'",
"ironic-inspector inspection failed: No disks satisfied root device hints",
"mkdir ~/clusterconfigs",
"cp install-config.yaml ~/clusterconfigs",
"ipmitool -I lanplus -U <user> -P <password> -H <management-server-ip> power off",
"for i in USD(sudo virsh list | tail -n +3 | grep bootstrap | awk {'print USD2'}); do sudo virsh destroy USDi; sudo virsh undefine USDi; sudo virsh vol-delete USDi --pool USDi; sudo virsh vol-delete USDi.ign --pool USDi; sudo virsh pool-destroy USDi; sudo virsh pool-undefine USDi; done",
"metadata: name:",
"networking: machineNetwork: - cidr:",
"compute: - name: worker",
"compute: replicas: 2",
"controlPlane: name: master",
"controlPlane: replicas: 3",
"platform: baremetal: hosts: - name: openshift-master-0 role: master bmc: address: ipmi://<out-of-band-ip> username: <user> password: <password>",
"platform: baremetal: hosts: - name: openshift-master-0 role: master bmc: address: redfish://<out-of-band-ip>/redfish/v1/Systems/1 username: <user> password: <password>",
"platform: baremetal: hosts: - name: openshift-master-0 role: master bmc: address: redfish://<out-of-band-ip>/redfish/v1/Systems/1 username: <user> password: <password> disableCertificateVerification: True",
"curl -u USDUSER:USDPASS -X POST -H'Content-Type: application/json' -H'Accept: application/json' -d '{\"ResetType\": \"On\"}' https://USDSERVER/redfish/v1/Systems/USDSystemID/Actions/ComputerSystem.Reset",
"curl -u USDUSER:USDPASS -X POST -H'Content-Type: application/json' -H'Accept: application/json' -d '{\"ResetType\": \"ForceOff\"}' https://USDSERVER/redfish/v1/Systems/USDSystemID/Actions/ComputerSystem.Reset",
"curl -u USDUSER:USDPASS -X PATCH -H \"Content-Type: application/json\" https://USDServer/redfish/v1/Systems/USDSystemID/ -d '{\"Boot\": {\"BootSourceOverrideTarget\": \"pxe\", \"BootSourceOverrideEnabled\": \"Once\"}}",
"curl -u USDUSER:USDPASS -X PATCH -H \"Content-Type: application/json\" https://USDServer/redfish/v1/Systems/USDSystemID/ -d '{\"Boot\": {\"BootSourceOverrideMode\":\"UEFI\"}}",
"curl -u USDUSER:USDPASS -X PATCH -H \"Content-Type: application/json\" https://USDServer/redfish/v1/Systems/USDSystemID/ -d '{\"Boot\": {\"BootSourceOverrideTarget\": \"cd\", \"BootSourceOverrideEnabled\": \"Once\"}}'",
"curl -u USDUSER:USDPASS -X PATCH -H \"Content-Type: application/json\" -H \"If-Match: *\" https://USDServer/redfish/v1/Managers/USDManagerID/VirtualMedia/USDVmediaId -d '{\"Image\": \"https://example.com/test.iso\", \"TransferProtocolType\": \"HTTPS\", \"UserName\": \"\", \"Password\":\"\"}'",
"platform: baremetal: hosts: - name: <hostname> role: <master | worker> bmc: address: <address> 1 username: <user> password: <password>",
"platform: baremetal: hosts: - name: openshift-master-0 role: master bmc: address: idrac-virtualmedia://<out-of-band-ip>/redfish/v1/Systems/System.Embedded.1 username: <user> password: <password>",
"platform: baremetal: hosts: - name: openshift-master-0 role: master bmc: address: idrac-virtualmedia://<out-of-band-ip>/redfish/v1/Systems/System.Embedded.1 username: <user> password: <password> disableCertificateVerification: True",
"platform: baremetal: hosts: - name: openshift-master-0 role: master bmc: address: redfish://<out-of-band-ip>/redfish/v1/Systems/System.Embedded.1 username: <user> password: <password>",
"platform: baremetal: hosts: - name: openshift-master-0 role: master bmc: address: redfish://<out-of-band-ip>/redfish/v1/Systems/System.Embedded.1 username: <user> password: <password> disableCertificateVerification: True",
"platform: baremetal: hosts: - name: <hostname> role: <master | worker> bmc: address: <address> 1 username: <user> password: <password>",
"platform: baremetal: hosts: - name: openshift-master-0 role: master bmc: address: redfish-virtualmedia://<out-of-band-ip>/redfish/v1/Systems/1 username: <user> password: <password>",
"platform: baremetal: hosts: - name: openshift-master-0 role: master bmc: address: redfish-virtualmedia://<out-of-band-ip>/redfish/v1/Systems/1 username: <user> password: <password> disableCertificateVerification: True",
"platform: baremetal: hosts: - name: openshift-master-0 role: master bmc: address: redfish://<out-of-band-ip>/redfish/v1/Systems/1 username: <user> password: <password>",
"platform: baremetal: hosts: - name: openshift-master-0 role: master bmc: address: redfish://<out-of-band-ip>/redfish/v1/Systems/1 username: <user> password: <password> disableCertificateVerification: True",
"platform: baremetal: hosts: - name: <hostname> role: <master | worker> bmc: address: <address> 1 username: <user> password: <password>",
"platform: baremetal: hosts: - name: openshift-master-0 role: master bmc: address: irmc://<out-of-band-ip> username: <user> password: <password>",
"- name: master-0 role: master bmc: address: ipmi://10.10.0.3:6203 username: admin password: redhat bootMACAddress: de:ad:be:ef:00:40 rootDeviceHints: deviceName: \"/dev/sda\"",
"apiVersion: v1 baseDomain: <domain> proxy: httpProxy: http://USERNAME:[email protected]:PORT httpsProxy: https://USERNAME:[email protected]:PORT noProxy: <WILDCARD_OF_DOMAIN>,<PROVISIONING_NETWORK/CIDR>,<BMC_ADDRESS_RANGE/CIDR>",
"noProxy: .example.com,172.22.0.0/24,10.10.0.0/24",
"platform: baremetal: apiVIPs: - <api_VIP> ingressVIPs: - <ingress_VIP> provisioningNetwork: \"Disabled\" 1",
"machineNetwork: - cidr: {{ extcidrnet }} - cidr: {{ extcidrnet6 }} clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23 - cidr: fd02::/48 hostPrefix: 64 serviceNetwork: - 172.30.0.0/16 - fd03::/112",
"networkConfig: nmstate: interfaces: - name: <interface_name> wait-ip: ipv4+ipv6",
"platform: baremetal: apiVIPs: - <api_ipv4> - <api_ipv6> ingressVIPs: - <wildcard_ipv4> - <wildcard_ipv6>",
"interfaces: - name: <nic1_name> 1 type: ethernet state: up ipv4: address: - ip: <ip_address> 2 prefix-length: 24 enabled: true dns-resolver: config: server: - <dns_ip_address> 3 routes: config: - destination: 0.0.0.0/0 next-hop-address: <next_hop_ip_address> 4 next-hop-interface: <next_hop_nic1_name> 5",
"nmstatectl gc <nmstate_yaml_file>",
"hosts: - name: openshift-master-0 role: master bmc: address: redfish+http://<out_of_band_ip>/redfish/v1/Systems/ username: <user> password: <password> disableCertificateVerification: null bootMACAddress: <NIC1_mac_address> bootMode: UEFI rootDeviceHints: deviceName: \"/dev/sda\" networkConfig: 1 interfaces: - name: <nic1_name> 2 type: ethernet state: up ipv4: address: - ip: <ip_address> 3 prefix-length: 24 enabled: true dns-resolver: config: server: - <dns_ip_address> 4 routes: config: - destination: 0.0.0.0/0 next-hop-address: <next_hop_ip_address> 5 next-hop-interface: <next_hop_nic1_name> 6",
"networking: machineNetwork: - cidr: 10.0.0.0/24 - cidr: 192.168.0.0/24 networkType: OVNKubernetes",
"networkConfig: interfaces: - name: <interface_name> 1 type: ethernet state: up ipv4: enabled: true dhcp: false address: - ip: <node_ip> 2 prefix-length: 24 gateway: <gateway_ip> 3 dns-resolver: config: server: - <dns_ip> 4",
"interfaces: - name: eth0 ipv6: addr-gen-mode: <address_mode> 1",
"nmstatectl gc <nmstate_yaml_file> 1",
"hosts: - name: openshift-master-0 role: master bmc: address: redfish+http://<out_of_band_ip>/redfish/v1/Systems/ username: <user> password: <password> disableCertificateVerification: null bootMACAddress: <NIC1_mac_address> bootMode: UEFI rootDeviceHints: deviceName: \"/dev/sda\" networkConfig: interfaces: - name: eth0 ipv6: addr-gen-mode: <address_mode> 1",
"hosts: - name: worker-0 role: worker bmc: address: redfish+http://<out_of_band_ip>/redfish/v1/Systems/ username: <user> password: <password> disableCertificateVerification: false bootMACAddress: <NIC1_mac_address> bootMode: UEFI networkConfig: 1 interfaces: 2 - name: eno1 3 type: ethernet 4 state: up mac-address: 0c:42:a1:55:f3:06 ipv4: enabled: true dhcp: false 5 ethernet: sr-iov: total-vfs: 2 6 ipv6: enabled: false dhcp: false - name: sriov:eno1:0 type: ethernet state: up 7 ipv4: enabled: false 8 ipv6: enabled: false - name: sriov:eno1:1 type: ethernet state: down - name: eno2 type: ethernet state: up mac-address: 0c:42:a1:55:f3:07 ipv4: enabled: true ethernet: sr-iov: total-vfs: 2 ipv6: enabled: false - name: sriov:eno2:0 type: ethernet state: up ipv4: enabled: false ipv6: enabled: false - name: sriov:eno2:1 type: ethernet state: down - name: bond0 type: bond state: up min-tx-rate: 100 9 max-tx-rate: 200 10 link-aggregation: mode: active-backup 11 options: primary: sriov:eno1:0 12 port: - sriov:eno1:0 - sriov:eno2:0 ipv4: address: - ip: 10.19.16.57 13 prefix-length: 23 dhcp: false enabled: true ipv6: enabled: false dns-resolver: config: server: - 10.11.5.160 - 10.2.70.215 routes: config: - destination: 0.0.0.0/0 next-hop-address: 10.19.17.254 next-hop-interface: bond0 14 table-id: 254",
"hosts: - name: ostest-master-0 [...] networkConfig: &BOND interfaces: - name: bond0 type: bond state: up ipv4: dhcp: true enabled: true link-aggregation: mode: active-backup port: - enp2s0 - enp3s0 - name: ostest-master-1 [...] networkConfig: *BOND - name: ostest-master-2 [...] networkConfig: *BOND",
"hosts: - name: openshift-master-0 role: master bmc: address: redfish://<out_of_band_ip> 1 username: <username> password: <password> bootMACAddress: <NIC1_mac_address> rootDeviceHints: deviceName: \"/dev/sda\" bootMode: UEFISecureBoot 2",
"./openshift-baremetal-install --dir ~/clusterconfigs create manifests",
"INFO Consuming Install Config from target directory WARNING Making control-plane schedulable by setting MastersSchedulable to true for Scheduler cluster settings WARNING Discarding the OpenShift Manifest that was provided in the target directory because its dependencies are dirty and it needs to be regenerated",
"sudo dnf -y install butane",
"variant: openshift version: 4.14.0 metadata: name: 99-master-chrony-conf-override labels: machineconfiguration.openshift.io/role: master storage: files: - path: /etc/chrony.conf mode: 0644 overwrite: true contents: inline: | # Use public servers from the pool.ntp.org project. # Please consider joining the pool (https://www.pool.ntp.org/join.html). # The Machine Config Operator manages this file server openshift-master-0.<cluster-name>.<domain> iburst 1 server openshift-master-1.<cluster-name>.<domain> iburst server openshift-master-2.<cluster-name>.<domain> iburst stratumweight 0 driftfile /var/lib/chrony/drift rtcsync makestep 10 3 bindcmdaddress 127.0.0.1 bindcmdaddress ::1 keyfile /etc/chrony.keys commandkey 1 generatecommandkey noclientlog logchange 0.5 logdir /var/log/chrony # Configure the control plane nodes to serve as local NTP servers # for all worker nodes, even if they are not in sync with an # upstream NTP server. # Allow NTP client access from the local network. allow all # Serve time even if not synchronized to a time source. local stratum 3 orphan",
"butane 99-master-chrony-conf-override.bu -o 99-master-chrony-conf-override.yaml",
"variant: openshift version: 4.14.0 metadata: name: 99-worker-chrony-conf-override labels: machineconfiguration.openshift.io/role: worker storage: files: - path: /etc/chrony.conf mode: 0644 overwrite: true contents: inline: | # The Machine Config Operator manages this file. server openshift-master-0.<cluster-name>.<domain> iburst 1 server openshift-master-1.<cluster-name>.<domain> iburst server openshift-master-2.<cluster-name>.<domain> iburst stratumweight 0 driftfile /var/lib/chrony/drift rtcsync makestep 10 3 bindcmdaddress 127.0.0.1 bindcmdaddress ::1 keyfile /etc/chrony.keys commandkey 1 generatecommandkey noclientlog logchange 0.5 logdir /var/log/chrony",
"butane 99-worker-chrony-conf-override.bu -o 99-worker-chrony-conf-override.yaml",
"cd ~/clusterconfigs",
"cd manifests",
"touch cluster-network-avoid-workers-99-config.yaml",
"apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfig metadata: name: 50-worker-fix-ipi-rwn labels: machineconfiguration.openshift.io/role: worker spec: config: ignition: version: 3.2.0 storage: files: - path: /etc/kubernetes/manifests/keepalived.yaml mode: 0644 contents: source: data:,",
"apiVersion: operator.openshift.io/v1 kind: IngressController metadata: name: default namespace: openshift-ingress-operator spec: nodePlacement: nodeSelector: matchLabels: node-role.kubernetes.io/master: \"\"",
"sed -i \"s;mastersSchedulable: false;mastersSchedulable: true;g\" clusterconfigs/manifests/cluster-scheduler-02-config.yml",
"apiVersion: operator.openshift.io/v1 kind: IngressController metadata: name: default namespace: openshift-ingress-operator spec: replicas: <num-of-router-pods> endpointPublishingStrategy: type: HostNetwork nodePlacement: nodeSelector: matchLabels: node-role.kubernetes.io/worker: \"\"",
"cp ~/router-replicas.yaml clusterconfigs/openshift/99_router-replicas.yaml",
"vim clusterconfigs/openshift/99_openshift-cluster-api_hosts-*.yaml",
"spec: firmware: simultaneousMultithreadingEnabled: true sriovEnabled: true virtualizationEnabled: true",
"vim clusterconfigs/openshift/99_openshift-cluster-api_hosts-*.yaml",
"spec: raid: hardwareRAIDVolumes: - level: \"0\" 1 name: \"sda\" numberOfPhysicalDisks: 1 rotational: true sizeGibibytes: 0",
"spec: raid: hardwareRAIDVolumes: []",
"apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfig metadata: labels: machineconfiguration.openshift.io/role: primary name: 10_primary_storage_config spec: config: ignition: version: 3.2.0 storage: disks: - device: </dev/xxyN> partitions: - label: recovery startMiB: 32768 sizeMiB: 16384 filesystems: - device: /dev/disk/by-partlabel/recovery label: recovery format: xfs",
"cp ~/<MachineConfig_manifest> ~/clusterconfigs/openshift",
"sudo firewall-cmd --add-port=5000/tcp --zone=libvirt --permanent",
"sudo firewall-cmd --add-port=5000/tcp --zone=public --permanent",
"sudo firewall-cmd --reload",
"sudo yum -y install python3 podman httpd httpd-tools jq",
"sudo mkdir -p /opt/registry/{auth,certs,data}",
"OCP_RELEASE=<release_version>",
"LOCAL_REGISTRY='<local_registry_host_name>:<local_registry_host_port>'",
"LOCAL_REPOSITORY='<local_repository_name>'",
"PRODUCT_REPO='openshift-release-dev'",
"LOCAL_SECRET_JSON='<path_to_pull_secret>'",
"RELEASE_NAME=\"ocp-release\"",
"ARCHITECTURE=<cluster_architecture> 1",
"REMOVABLE_MEDIA_PATH=<path> 1",
"oc adm release mirror -a USD{LOCAL_SECRET_JSON} --from=quay.io/USD{PRODUCT_REPO}/USD{RELEASE_NAME}:USD{OCP_RELEASE}-USD{ARCHITECTURE} --to=USD{LOCAL_REGISTRY}/USD{LOCAL_REPOSITORY} --to-release-image=USD{LOCAL_REGISTRY}/USD{LOCAL_REPOSITORY}:USD{OCP_RELEASE}-USD{ARCHITECTURE} --dry-run",
"oc adm release mirror -a USD{LOCAL_SECRET_JSON} --to-dir=USD{REMOVABLE_MEDIA_PATH}/mirror quay.io/USD{PRODUCT_REPO}/USD{RELEASE_NAME}:USD{OCP_RELEASE}-USD{ARCHITECTURE}",
"oc image mirror -a USD{LOCAL_SECRET_JSON} --from-dir=USD{REMOVABLE_MEDIA_PATH}/mirror \"file://openshift/release:USD{OCP_RELEASE}*\" USD{LOCAL_REGISTRY}/USD{LOCAL_REPOSITORY} 1",
"oc adm release mirror -a USD{LOCAL_SECRET_JSON} --from=quay.io/USD{PRODUCT_REPO}/USD{RELEASE_NAME}:USD{OCP_RELEASE}-USD{ARCHITECTURE} --to=USD{LOCAL_REGISTRY}/USD{LOCAL_REPOSITORY} --to-release-image=USD{LOCAL_REGISTRY}/USD{LOCAL_REPOSITORY}:USD{OCP_RELEASE}-USD{ARCHITECTURE}",
"oc adm release extract -a USD{LOCAL_SECRET_JSON} --command=openshift-baremetal-install \"USD{LOCAL_REGISTRY}/USD{LOCAL_REPOSITORY}:USD{OCP_RELEASE}\"",
"oc adm release extract -a USD{LOCAL_SECRET_JSON} --command=openshift-baremetal-install \"USD{LOCAL_REGISTRY}/USD{LOCAL_REPOSITORY}:USD{OCP_RELEASE}-USD{ARCHITECTURE}\"",
"openshift-baremetal-install",
"echo \"additionalTrustBundle: |\" >> install-config.yaml",
"sed -e 's/^/ /' /opt/registry/certs/domain.crt >> install-config.yaml",
"echo \"imageContentSources:\" >> install-config.yaml",
"echo \"- mirrors:\" >> install-config.yaml",
"echo \" - registry.example.com:5000/ocp4/openshift4\" >> install-config.yaml",
"echo \" source: quay.io/openshift-release-dev/ocp-release\" >> install-config.yaml",
"echo \"- mirrors:\" >> install-config.yaml",
"echo \" - registry.example.com:5000/ocp4/openshift4\" >> install-config.yaml",
"echo \" source: quay.io/openshift-release-dev/ocp-v4.0-art-dev\" >> install-config.yaml"
]
| https://docs.redhat.com/en/documentation/openshift_container_platform_installation/4.14/html/deploying_installer-provisioned_clusters_on_bare_metal/ipi-install-installation-workflow |
3.6. Considerations for Configuring HA Services | 3.6. Considerations for Configuring HA Services You can create a cluster to suit your needs for high availability by configuring HA (high-availability) services. The key component for HA service management in the Red Hat High Availability Add-On, rgmanager , implements cold failover for off-the-shelf applications. In the Red Hat High Availability Add-On, an application is configured with other cluster resources to form an HA service that can fail over from one cluster node to another with no apparent interruption to cluster clients. HA-service failover can occur if a cluster node fails or if a cluster system administrator moves the service from one cluster node to another (for example, for a planned outage of a cluster node). To create an HA service, you must configure it in the cluster configuration file. An HA service comprises cluster resources . Cluster resources are building blocks that you create and manage in the cluster configuration file - for example, an IP address, an application initialization script, or a Red Hat GFS2 shared partition. To ensure data integrity, only one node can run a cluster service and access cluster-service data at a time. You can specify failover priority in a failover domain. Specifying failover priority consists of assigning a priority level to each node in a failover domain. The priority level determines the failover order - that is, the node to which an HA service should fail over. If you do not specify failover priority, an HA service can fail over to any node in its failover domain. Also, you can specify whether an HA service is restricted to run only on nodes of its associated failover domain. When associated with an unrestricted failover domain, an HA service can start on any cluster node in the event that no member of the failover domain is available. Figure 3.1, "Web Server Cluster Service Example" shows an example of an HA service that is a web server named "content-webserver". It is running in cluster node B and is in a failover domain that consists of nodes A, B, and D. In addition, the failover domain is configured with a failover priority to fail over to node D before node A and to restrict failover to nodes only in that failover domain. The HA service comprises these cluster resources: IP address resource - IP address 10.10.10.201. An application resource named "httpd-content" - a web server application init script /etc/init.d/httpd (specifying httpd ). A file system resource - Red Hat GFS2 named "gfs2-content-webserver". Figure 3.1. Web Server Cluster Service Example Clients access the HA service through the IP address 10.10.10.201, enabling interaction with the web server application, httpd-content. The httpd-content application uses the gfs2-content-webserver file system. If node B were to fail, the content-webserver HA service would fail over to node D. If node D were not available or also failed, the service would fail over to node A. Failover would occur with minimal service interruption to the cluster clients; for example, in an HTTP service, certain state information (such as session data) might be lost. The HA service would be accessible from another cluster node by means of the same IP address as it was before failover. Note For more information about HA services and failover domains, see the High Availability Add-On Overview . 
For information about configuring failover domains, see Chapter 4, Configuring Red Hat High Availability Add-On With Conga (using Conga ) or Chapter 8, Configuring Red Hat High Availability Manually (using command line utilities). An HA service is a group of cluster resources configured into a coherent entity that provides specialized services to clients. An HA service is represented as a resource tree in the cluster configuration file, /etc/cluster/cluster.conf (in each cluster node). In the cluster configuration file, each resource tree is an XML representation that specifies each resource, its attributes, and its relationships with other resources in the resource tree (parent, child, and sibling relationships). Note Because an HA service consists of resources organized into a hierarchical tree, a service is sometimes referred to as a resource tree or resource group . Both phrases are synonymous with HA service . At the root of each resource tree is a special type of resource - a service resource . Other types of resources comprise the rest of a service, determining its characteristics. Configuring an HA service consists of creating a service resource, creating subordinate cluster resources, and organizing them into a coherent entity that conforms to the hierarchical restrictions of the service. There are two major considerations to take into account when configuring an HA service: The types of resources needed to create a service Parent, child, and sibling relationships among resources The types of resources and the hierarchy of resources depend on the type of service you are configuring. The types of cluster resources are listed in Appendix B, HA Resource Parameters . Information about parent, child, and sibling relationships among resources is described in Appendix C, HA Resource Behavior . | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/cluster_administration/s1-clust-svc-ov-ca |
Chapter 3. Special Resource Operator | Chapter 3. Special Resource Operator Learn about the Special Resource Operator (SRO) and how you can use it to build and manage driver containers for loading kernel modules and device drivers on nodes in an OpenShift Container Platform cluster. Important The Special Resource Operator is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope . 3.1. About the Special Resource Operator The Special Resource Operator (SRO) helps you manage the deployment of kernel modules and drivers on an existing OpenShift Container Platform cluster. The SRO can be used for a case as simple as building and loading a single kernel module, or as complex as deploying the driver, device plugin, and monitoring stack for a hardware accelerator. For loading kernel modules, the SRO is designed around the use of driver containers. Driver containers are increasingly being used in cloud-native environments, especially when run on pure container operating systems, to deliver hardware drivers to the host. Driver containers extend the kernel stack beyond the out-of-the-box software and hardware features of a specific kernel. Driver containers work on various container-capable Linux distributions. With driver containers, the host operating system stays clean and there is no clash between different library versions or binaries on the host. Note The functions described require a connected environment with a constant connection to the network. These functions are not available for disconnected environments. 3.2. Installing the Special Resource Operator As a cluster administrator, you can install the Special Resource Operator (SRO) by using the OpenShift CLI or the web console. 3.2.1. Installing the Special Resource Operator by using the CLI As a cluster administrator, you can install the Special Resource Operator (SRO) by using the OpenShift CLI. Prerequisites You have a running OpenShift Container Platform cluster. You installed the OpenShift CLI ( oc ). You are logged into the OpenShift CLI as a user with cluster-admin privileges. Procedure Install the SRO in the openshift-operators namespace: Create the following Subscription CR and save the YAML in the sro-sub.yaml file: Example Subscription CR apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: openshift-special-resource-operator namespace: openshift-operators spec: channel: "stable" installPlanApproval: Automatic name: openshift-special-resource-operator source: redhat-operators sourceNamespace: openshift-marketplace Create the subscription object by running the following command: USD oc create -f sro-sub.yaml Switch to the openshift-operators project: USD oc project openshift-operators Verification To verify that the Operator deployment is successful, run: USD oc get pods Example output NAME READY STATUS RESTARTS AGE nfd-controller-manager-7f4c5f5778-4lvvk 2/2 Running 0 89s special-resource-controller-manager-6dbf7d4f6f-9kl8h 2/2 Running 0 81s A successful deployment shows a Running status. 3.2.2. 
Installing the Special Resource Operator by using the web console As a cluster administrator, you can install the Special Resource Operator (SRO) by using the OpenShift Container Platform web console. Procedure Log in to the OpenShift Container Platform web console. Install the Special Resource Operator: In the OpenShift Container Platform web console, click Operators OperatorHub . Choose Special Resource Operator from the list of available Operators, and then click Install . On the Install Operator page, select a specific namespace on the cluster , select the namespace created in the section, and then click Install . Verification To verify that the Special Resource Operator installed successfully: Navigate to the Operators Installed Operators page. Ensure that Special Resource Operator is listed in the openshift-operators project with a Status of InstallSucceeded . Note During installation, an Operator might display a Failed status. If the installation later succeeds with an InstallSucceeded message, you can ignore the Failed message. If the Operator does not appear as installed, to troubleshoot further: Navigate to the Operators Installed Operators page and inspect the Operator Subscriptions and Install Plans tabs for any failure or errors under Status . Navigate to the Workloads Pods page and check the logs for pods in the openshift-operators project. 3.3. Using the Special Resource Operator The Special Resource Operator (SRO) is used to manage the build and deployment of a driver container. The objects required to build and deploy the container can be defined in a Helm chart. The example in this section uses the simple-kmod SpecialResource object to point to a ConfigMap object that is created to store the Helm charts. 3.3.1. Building and running the simple-kmod SpecialResource by using a config map In this example, the simple-kmod kernel module shows how the Special Resource Operator (SRO) manages a driver container. The container is defined in the Helm chart templates that are stored in a config map. Prerequisites You have a running OpenShift Container Platform cluster. You set the Image Registry Operator state to Managed for your cluster. You installed the OpenShift CLI ( oc ). You are logged into the OpenShift CLI as a user with cluster-admin privileges. You installed the Node Feature Discovery (NFD) Operator. You installed the SRO. You installed the Helm CLI ( helm ). Procedure To create a simple-kmod SpecialResource object, define an image stream and build config to build the image, and a service account, role, role binding, and daemon set to run the container. The service account, role, and role binding are required to run the daemon set with the privileged security context so that the kernel module can be loaded. 
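Before you begin the steps below, you can optionally confirm that the prerequisites listed above are in place. These checks are a convenience only and are not part of the official procedure; they assume that the NFD Operator and the SRO run in the openshift-operators namespace, as shown in the earlier verification step:
$ oc get pods -n openshift-operators
$ oc get configs.imageregistry.operator.openshift.io/cluster -o jsonpath='{.spec.managementState}'
$ helm version
The first command should list the nfd-controller-manager and special-resource-controller-manager pods in the Running state, the second should print Managed, and the third confirms that the Helm CLI is available.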
Create a templates directory, and change into it: USD mkdir -p chart/simple-kmod-0.0.1/templates USD cd chart/simple-kmod-0.0.1/templates Save this YAML template for the image stream and build config in the templates directory as 0000-buildconfig.yaml : apiVersion: image.openshift.io/v1 kind: ImageStream metadata: labels: app: {{.Values.specialresource.metadata.name}}-{{.Values.groupName.driverContainer}} 1 name: {{.Values.specialresource.metadata.name}}-{{.Values.groupName.driverContainer}} 2 spec: {} --- apiVersion: build.openshift.io/v1 kind: BuildConfig metadata: labels: app: {{.Values.specialresource.metadata.name}}-{{.Values.groupName.driverBuild}} 3 name: {{.Values.specialresource.metadata.name}}-{{.Values.groupName.driverBuild}} 4 annotations: specialresource.openshift.io/wait: "true" specialresource.openshift.io/driver-container-vendor: simple-kmod specialresource.openshift.io/kernel-affine: "true" spec: nodeSelector: node-role.kubernetes.io/worker: "" runPolicy: "Serial" triggers: - type: "ConfigChange" - type: "ImageChange" source: git: ref: {{.Values.specialresource.spec.driverContainer.source.git.ref}} uri: {{.Values.specialresource.spec.driverContainer.source.git.uri}} type: Git strategy: dockerStrategy: dockerfilePath: Dockerfile.SRO buildArgs: - name: "IMAGE" value: {{ .Values.driverToolkitImage }} {{- range USDarg := .Values.buildArgs }} - name: {{ USDarg.name }} value: {{ USDarg.value }} {{- end }} - name: KVER value: {{ .Values.kernelFullVersion }} output: to: kind: ImageStreamTag name: {{.Values.specialresource.metadata.name}}-{{.Values.groupName.driverContainer}}:v{{.Values.kernelFullVersion}} 5 1 2 3 4 5 The templates such as {{.Values.specialresource.metadata.name}} are filled in by the SRO, based on fields in the SpecialResource CR and variables known to the Operator such as {{.Values.KernelFullVersion}} . 
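For illustration only, the following sketch shows roughly how two of the templated fields in 0000-buildconfig.yaml render for the simple-kmod example; the kernel version shown is a hypothetical value that depends on the kernel running on your worker nodes:
metadata:
  labels:
    app: simple-kmod-driver-build
  name: simple-kmod-driver-build
spec:
  output:
    to:
      kind: ImageStreamTag
      name: simple-kmod-driver-container:v4.18.0-305.45.1.el8_4.x86_64
The rendered image stream tag is the image that the daemon set in the next template runs on nodes with the matching kernel version.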
Save the following YAML template for the RBAC resources and daemon set in the templates directory as 1000-driver-container.yaml : apiVersion: v1 kind: ServiceAccount metadata: name: {{.Values.specialresource.metadata.name}}-{{.Values.groupName.driverContainer}} --- apiVersion: rbac.authorization.k8s.io/v1 kind: Role metadata: name: {{.Values.specialresource.metadata.name}}-{{.Values.groupName.driverContainer}} rules: - apiGroups: - security.openshift.io resources: - securitycontextconstraints verbs: - use resourceNames: - privileged --- apiVersion: rbac.authorization.k8s.io/v1 kind: RoleBinding metadata: name: {{.Values.specialresource.metadata.name}}-{{.Values.groupName.driverContainer}} roleRef: apiGroup: rbac.authorization.k8s.io kind: Role name: {{.Values.specialresource.metadata.name}}-{{.Values.groupName.driverContainer}} subjects: - kind: ServiceAccount name: {{.Values.specialresource.metadata.name}}-{{.Values.groupName.driverContainer}} namespace: {{.Values.specialresource.spec.namespace}} --- apiVersion: apps/v1 kind: DaemonSet metadata: labels: app: {{.Values.specialresource.metadata.name}}-{{.Values.groupName.driverContainer}} name: {{.Values.specialresource.metadata.name}}-{{.Values.groupName.driverContainer}} annotations: specialresource.openshift.io/wait: "true" specialresource.openshift.io/state: "driver-container" specialresource.openshift.io/driver-container-vendor: simple-kmod specialresource.openshift.io/kernel-affine: "true" specialresource.openshift.io/from-configmap: "true" spec: updateStrategy: type: OnDelete selector: matchLabels: app: {{.Values.specialresource.metadata.name}}-{{.Values.groupName.driverContainer}} template: metadata: labels: app: {{.Values.specialresource.metadata.name}}-{{.Values.groupName.driverContainer}} spec: priorityClassName: system-node-critical serviceAccount: {{.Values.specialresource.metadata.name}}-{{.Values.groupName.driverContainer}} serviceAccountName: {{.Values.specialresource.metadata.name}}-{{.Values.groupName.driverContainer}} containers: - image: image-registry.openshift-image-registry.svc:5000/{{.Values.specialresource.spec.namespace}}/{{.Values.specialresource.metadata.name}}-{{.Values.groupName.driverContainer}}:v{{.Values.kernelFullVersion}} name: {{.Values.specialresource.metadata.name}}-{{.Values.groupName.driverContainer}} imagePullPolicy: Always command: ["/sbin/init"] lifecycle: preStop: exec: command: ["/bin/sh", "-c", "systemctl stop kmods-via-containers@{{.Values.specialresource.metadata.name}}"] securityContext: privileged: true nodeSelector: node-role.kubernetes.io/worker: "" feature.node.kubernetes.io/kernel-version.full: "{{.Values.KernelFullVersion}}" Change into the chart/simple-kmod-0.0.1 directory: USD cd .. 
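At this point, the chart skeleton that the remaining steps package should look roughly like this layout (a sketch based on the files created so far; Chart.yaml is added in the next step):
chart/
└── simple-kmod-0.0.1/
    └── templates/
        ├── 0000-buildconfig.yaml
        └── 1000-driver-container.yaml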
Save the following YAML for the chart as Chart.yaml in the chart/simple-kmod-0.0.1 directory: apiVersion: v2 name: simple-kmod description: Simple kmod will deploy a simple kmod driver-container icon: https://avatars.githubusercontent.com/u/55542927 type: application version: 0.0.1 appVersion: 1.0.0 From the chart directory, create the chart using the helm package command: USD helm package simple-kmod-0.0.1/ Example output Successfully packaged chart and saved it to: /data/<username>/git/<github_username>/special-resource-operator/yaml-for-docs/chart/simple-kmod-0.0.1/simple-kmod-0.0.1.tgz Create a config map to store the chart files: Create a directory for the config map files: USD mkdir cm Copy the Helm chart into the cm directory: USD cp simple-kmod-0.0.1.tgz cm/simple-kmod-0.0.1.tgz Create an index file specifying the Helm repo that contains the Helm chart: USD helm repo index cm --url=cm://simple-kmod/simple-kmod-chart Create a namespace for the objects defined in the Helm chart: USD oc create namespace simple-kmod Create the config map object: USD oc create cm simple-kmod-chart --from-file=cm/index.yaml --from-file=cm/simple-kmod-0.0.1.tgz -n simple-kmod Use the following SpecialResource manifest to deploy the simple-kmod object using the Helm chart that you created in the config map. Save this YAML as simple-kmod-configmap.yaml : apiVersion: sro.openshift.io/v1beta1 kind: SpecialResource metadata: name: simple-kmod spec: #debug: true 1 namespace: simple-kmod chart: name: simple-kmod version: 0.0.1 repository: name: example url: cm://simple-kmod/simple-kmod-chart 2 set: kind: Values apiVersion: sro.openshift.io/v1beta1 kmodNames: ["simple-kmod", "simple-procfs-kmod"] buildArgs: - name: "KMODVER" value: "SRO" driverContainer: source: git: ref: "master" uri: "https://github.com/openshift-psap/kvc-simple-kmod.git" 1 Optional: Uncomment the #debug: true line to have the YAML files in the chart printed in full in the Operator logs and to verify that the logs are created and templated properly. 2 The spec.chart.repository.url field tells the SRO to look for the chart in a config map. From a command line, create the SpecialResource file: USD oc create -f simple-kmod-configmap.yaml Note To remove the simple-kmod kernel module from the node, delete the simple-kmod SpecialResource API object using the oc delete command. The kernel module is unloaded when the driver container pod is deleted. Verification The simple-kmod resources are deployed in the simple-kmod namespace as specified in the object manifest. After a short time, the build pod for the simple-kmod driver container starts running. The build completes after a few minutes, and then the driver container pods start running. 
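If the build pod or the driver container pods do not appear, the following commands can help to narrow down where the chart processing stopped before you continue with the verification steps below. This is a troubleshooting sketch that assumes the object and namespace names used in this example:
$ oc get specialresource simple-kmod
$ oc get events -n simple-kmod --sort-by=.lastTimestamp
$ oc get buildconfig,imagestream,daemonset -n simple-kmod
Together with the optional debug: true setting in the SpecialResource manifest, the Operator logs in the openshift-operators namespace show the templated YAML that the SRO applies.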
Use oc get pods command to display the status of the build pods: USD oc get pods -n simple-kmod Example output NAME READY STATUS RESTARTS AGE simple-kmod-driver-build-12813789169ac0ee-1-build 0/1 Completed 0 7m12s simple-kmod-driver-container-12813789169ac0ee-mjsnh 1/1 Running 0 8m2s simple-kmod-driver-container-12813789169ac0ee-qtkff 1/1 Running 0 8m2s Use the oc logs command, along with the build pod name obtained from the oc get pods command above, to display the logs of the simple-kmod driver container image build: USD oc logs pod/simple-kmod-driver-build-12813789169ac0ee-1-build -n simple-kmod To verify that the simple-kmod kernel modules are loaded, execute the lsmod command in one of the driver container pods that was returned from the oc get pods command above: USD oc exec -n simple-kmod -it pod/simple-kmod-driver-container-12813789169ac0ee-mjsnh -- lsmod | grep simple Example output simple_procfs_kmod 16384 0 simple_kmod 16384 0 Tip The sro_kind_completed_info SRO Prometheus metric provides information about the status of the different objects being deployed, which can be useful to troubleshoot SRO CR installations. The SRO also provides other types of metrics that you can use to watch the health of your environment. 3.3.2. Building and running the simple-kmod SpecialResource for a hub-and-spoke topology You can use the Special Resource Operator (SRO) on a hub-and-spoke deployment in which Red Hat Advanced Cluster Management (RHACM) connects a hub cluster to one or more managed clusters. This example procedure shows how the SRO builds driver containers in the hub. The SRO watches hub cluster resources to identify OpenShift Container Platform versions for the helm charts that it uses to create resources which it delivers to spokes. Prerequisites You have a running OpenShift Container Platform cluster. You installed the OpenShift CLI ( oc ). You are logged into the OpenShift CLI as a user with cluster-admin privileges. You installed the SRO. You installed the Helm CLI ( helm ). You installed Red Hat Advanced Cluster Management (RHACM). You configured a container registry. Procedure Create a templates directory by running the following command: USD mkdir -p charts/acm-simple-kmod-0.0.1/templates Change to the templates directory by running the following command: USD cd charts/acm-simple-kmod-0.0.1/templates Create templates files for the BuildConfig , Policy , and PlacementRule resources. Save this YAML template for the image stream and build config in the templates directory as 0001-buildconfig.yaml . apiVersion: build.openshift.io/v1 kind: BuildConfig metadata: labels: app: {{ printf "%s-%s" .Values.specialResourceModule.metadata.name .Values.kernelFullVersion | replace "." "-" | replace "_" "-" | trunc 63 }} name: {{ printf "%s-%s" .Values.specialResourceModule.metadata.name .Values.kernelFullVersion | replace "." 
"-" | replace "_" "-" | trunc 63 }} annotations: specialresource.openshift.io/wait: "true" spec: nodeSelector: node-role.kubernetes.io/worker: "" runPolicy: "Serial" triggers: - type: "ConfigChange" - type: "ImageChange" source: dockerfile: | FROM {{ .Values.driverToolkitImage }} as builder WORKDIR /build/ RUN git clone -b {{.Values.specialResourceModule.spec.set.git.ref}} {{.Values.specialResourceModule.spec.set.git.uri}} WORKDIR /build/simple-kmod RUN make all install KVER={{ .Values.kernelFullVersion }} FROM registry.redhat.io/ubi8/ubi-minimal RUN microdnf -y install kmod COPY --from=builder /etc/driver-toolkit-release.json /etc/ COPY --from=builder /lib/modules/{{ .Values.kernelFullVersion }}/* /lib/modules/{{ .Values.kernelFullVersion }}/ strategy: dockerStrategy: dockerfilePath: Dockerfile.SRO buildArgs: - name: "IMAGE" value: {{ .Values.driverToolkitImage }} {{- range USDarg := .Values.buildArgs }} - name: {{ USDarg.name }} value: {{ USDarg.value }} {{- end }} - name: KVER value: {{ .Values.kernelFullVersion }} output: to: kind: DockerImage name: {{.Values.registry}}/{{.Values.specialResourceModule.metadata.name}}-{{.Values.groupName.driverContainer}}:{{.Values.kernelFullVersion}} Save this YAML template for the ACM policy in the templates directory as 0002-policy.yaml . apiVersion: policy.open-cluster-management.io/v1 kind: Policy metadata: name: policy-{{.Values.specialResourceModule.metadata.name}}-ds annotations: policy.open-cluster-management.io/categories: CM Configuration Management policy.open-cluster-management.io/controls: CM-2 Baseline Configuration policy.open-cluster-management.io/standards: NIST-CSF spec: remediationAction: enforce disabled: false policy-templates: - objectDefinition: apiVersion: policy.open-cluster-management.io/v1 kind: ConfigurationPolicy metadata: name: config-{{.Values.specialResourceModule.metadata.name}}-ds spec: remediationAction: enforce severity: low namespaceselector: exclude: - kube-* include: - '*' object-templates: - complianceType: musthave objectDefinition: apiVersion: v1 kind: Namespace metadata: name: {{.Values.specialResourceModule.spec.namespace}} - complianceType: mustonlyhave objectDefinition: apiVersion: v1 kind: ServiceAccount metadata: name: {{.Values.specialResourceModule.metadata.name}} namespace: {{.Values.specialResourceModule.spec.namespace}} - complianceType: mustonlyhave objectDefinition: apiVersion: rbac.authorization.k8s.io/v1 kind: Role metadata: name: {{.Values.specialResourceModule.metadata.name}} namespace: {{.Values.specialResourceModule.spec.namespace}} rules: - apiGroups: - security.openshift.io resources: - securitycontextconstraints verbs: - use resourceNames: - privileged - complianceType: mustonlyhave objectDefinition: apiVersion: rbac.authorization.k8s.io/v1 kind: RoleBinding metadata: name: {{.Values.specialResourceModule.metadata.name}} namespace: {{.Values.specialResourceModule.spec.namespace}} roleRef: apiGroup: rbac.authorization.k8s.io kind: Role name: {{.Values.specialResourceModule.metadata.name}} subjects: - kind: ServiceAccount name: {{.Values.specialResourceModule.metadata.name}} namespace: {{.Values.specialResourceModule.spec.namespace}} - complianceType: musthave objectDefinition: apiVersion: apps/v1 kind: DaemonSet metadata: labels: app: {{ printf "%s-%s" .Values.specialResourceModule.metadata.name .Values.kernelFullVersion | replace "." "-" | replace "_" "-" | trunc 63 }} name: {{ printf "%s-%s" .Values.specialResourceModule.metadata.name .Values.kernelFullVersion | replace "." 
"-" | replace "_" "-" | trunc 63 }} namespace: {{.Values.specialResourceModule.spec.namespace}} spec: updateStrategy: type: OnDelete selector: matchLabels: app: {{ printf "%s-%s" .Values.specialResourceModule.metadata.name .Values.kernelFullVersion | replace "." "-" | replace "_" "-" | trunc 63 }} template: metadata: labels: app: {{ printf "%s-%s" .Values.specialResourceModule.metadata.name .Values.kernelFullVersion | replace "." "-" | replace "_" "-" | trunc 63 }} spec: priorityClassName: system-node-critical serviceAccount: {{.Values.specialResourceModule.metadata.name}} serviceAccountName: {{.Values.specialResourceModule.metadata.name}} containers: - image: {{.Values.registry}}/{{.Values.specialResourceModule.metadata.name}}-{{.Values.groupName.driverContainer}}:{{.Values.kernelFullVersion}} name: {{.Values.specialResourceModule.metadata.name}} imagePullPolicy: Always command: [sleep, infinity] lifecycle: preStop: exec: command: ["modprobe", "-r", "-a" , "simple-kmod", "simple-procfs-kmod"] securityContext: privileged: true Save this YAML template for the placement of policies in the templates directory as 0003-policy.yaml . apiVersion: apps.open-cluster-management.io/v1 kind: PlacementRule metadata: name: {{.Values.specialResourceModule.metadata.name}}-placement spec: clusterConditions: - status: "True" type: ManagedClusterConditionAvailable clusterSelector: matchExpressions: - key: name operator: NotIn values: - local-cluster --- apiVersion: policy.open-cluster-management.io/v1 kind: PlacementBinding metadata: name: {{.Values.specialResourceModule.metadata.name}}-binding placementRef: apiGroup: apps.open-cluster-management.io kind: PlacementRule name: {{.Values.specialResourceModule.metadata.name}}-placement subjects: - apiGroup: policy.open-cluster-management.io kind: Policy name: policy-{{.Values.specialResourceModule.metadata.name}}-ds Change into the charts/acm-simple-kmod-0.0.1 directory by running the following command: cd .. Save the following YAML template for the chart as Chart.yaml in the charts/acm-simple-kmod-0.0.1 directory: apiVersion: v2 name: acm-simple-kmod description: Build ACM enabled simple-kmod driver with SpecialResourceOperator icon: https://avatars.githubusercontent.com/u/55542927 type: application version: 0.0.1 appVersion: 1.6.4 From the charts directory, create the chart using the command: USD helm package acm-simple-kmod-0.0.1/ Example output Successfully packaged chart and saved it to: <directory>/charts/acm-simple-kmod-0.0.1.tgz Create a config map to store the chart files. Create a directory for the config map files by running the following command: USD mkdir cm Copy the Helm chart into the cm directory by running the following command: USD cp acm-simple-kmod-0.0.1.tgz cm/acm-simple-kmod-0.0.1.tgz Create an index file specifying the Helm repository that contains the Helm chart by running the following command: USD helm repo index cm --url=cm://acm-simple-kmod/acm-simple-kmod-chart Create a namespace for the objects defined in the Helm chart by running the following command: USD oc create namespace acm-simple-kmod Create the config map object by running the following command: USD oc create cm acm-simple-kmod-chart --from-file=cm/index.yaml --from-file=cm/acm-simple-kmod-0.0.1.tgz -n acm-simple-kmod Use the following SpecialResourceModule manifest to deploy the simple-kmod object using the Helm chart that you created in the config map. 
Save this YAML file as acm-simple-kmod.yaml : apiVersion: sro.openshift.io/v1beta1 kind: SpecialResourceModule metadata: name: acm-simple-kmod spec: namespace: acm-simple-kmod chart: name: acm-simple-kmod version: 0.0.1 repository: name: acm-simple-kmod url: cm://acm-simple-kmod/acm-simple-kmod-chart set: kind: Values apiVersion: sro.openshift.io/v1beta1 buildArgs: - name: "KMODVER" value: "SRO" registry: <your_registry> 1 git: ref: master uri: https://github.com/openshift-psap/kvc-simple-kmod.git watch: - path: "USD.metadata.labels.openshiftVersion" apiVersion: cluster.open-cluster-management.io/v1 kind: ManagedCluster name: spoke1 1 Specify the URL for a registry that you have configured. Create the special resource module by running the following command: USD oc apply -f charts/examples/acm-simple-kmod.yaml Verification Check the status of the build pods by running the following command: USD KUBECONFIG=~/hub/auth/kubeconfig oc get pod -n acm-simple-kmod Example output NAME READY STATUS RESTARTS AGE acm-simple-kmod-4-18-0-305-34-2-el8-4-x86-64-1-build 0/1 Completed 0 42m Check that the policies have been created by running the following command: USD KUBECONFIG=~/hub/auth/kubeconfig oc get placementrules,placementbindings,policies -n acm-simple-kmod Example output NAME AGE REPLICAS placementrule.apps.open-cluster-management.io/acm-simple-kmod-placement 40m NAME AGE placementbinding.policy.open-cluster-management.io/acm-simple-kmod-binding 40m NAME REMEDIATION ACTION COMPLIANCE STATE AGE policy.policy.open-cluster-management.io/policy-acm-simple-kmod-ds enforce Compliant 40m Check that the resources have been reconciled by running the following command: USD KUBECONFIG=~/hub/auth/kubeconfig oc get specialresourcemodule acm-simple-kmod -o json | jq -r '.status' Example output { "versions": { "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6a3330ef5a178435721ff4efdde762261a9c55212e9b4534385e04037693fbe4": { "complete": true } } } Check that the resources are running in the spoke by running the following command: USD KUBECONFIG=~/spoke1/kubeconfig oc get ds,pod -n acm-simple-kmod Example output AME DESIRED CURRENT READY UP-TO-DATE AVAILABLE NODE SELECTOR AGE daemonset.apps/acm-simple-kmod-4-18-0-305-45-1-el8-4-x86-64 3 3 3 3 3 <none> 26m NAME READY STATUS RESTARTS AGE pod/acm-simple-kmod-4-18-0-305-45-1-el8-4-x86-64-brw78 1/1 Running 0 26m pod/acm-simple-kmod-4-18-0-305-45-1-el8-4-x86-64-fqh5h 1/1 Running 0 26m pod/acm-simple-kmod-4-18-0-305-45-1-el8-4-x86-64-m9sfd 1/1 Running 0 26m 3.4. Prometheus Special Resource Operator metrics The Special Resource Operator (SRO) exposes the following Prometheus metrics through the metrics service: Metric Name Description sro_used_nodes Returns the nodes that are running pods created by a SRO custom resource (CR). This metric is available for DaemonSet and Deployment objects only. sro_kind_completed_info Represents whether a kind of an object defined by the Helm Charts in a SRO CR has been successfully uploaded in the cluster (value 1 ) or not (value 0 ). Examples of objects are DaemonSet , Deployment or BuildConfig . sro_states_completed_info Represents whether the SRO has finished processing a CR successfully (value 1 ) or the SRO has not processed the CR yet (value 0 ). sro_managed_resources_total Returns the number of SRO CRs in the cluster, regardless of their state. 3.5. 
Additional resources For information about restoring the Image Registry Operator state before using the Special Resource Operator, see Image registry removed during installation . For details about installing the NFD Operator see Node Feature Discovery (NFD) Operator . For information about updating a cluster that includes the Special Resource Operator, see Updating a cluster that includes the Special Resource Operator . | [
"apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: openshift-special-resource-operator namespace: openshift-operators spec: channel: \"stable\" installPlanApproval: Automatic name: openshift-special-resource-operator source: redhat-operators sourceNamespace: openshift-marketplace",
"oc create -f sro-sub.yaml",
"oc project openshift-operators",
"oc get pods",
"NAME READY STATUS RESTARTS AGE nfd-controller-manager-7f4c5f5778-4lvvk 2/2 Running 0 89s special-resource-controller-manager-6dbf7d4f6f-9kl8h 2/2 Running 0 81s",
"mkdir -p chart/simple-kmod-0.0.1/templates",
"cd chart/simple-kmod-0.0.1/templates",
"apiVersion: image.openshift.io/v1 kind: ImageStream metadata: labels: app: {{.Values.specialresource.metadata.name}}-{{.Values.groupName.driverContainer}} 1 name: {{.Values.specialresource.metadata.name}}-{{.Values.groupName.driverContainer}} 2 spec: {} --- apiVersion: build.openshift.io/v1 kind: BuildConfig metadata: labels: app: {{.Values.specialresource.metadata.name}}-{{.Values.groupName.driverBuild}} 3 name: {{.Values.specialresource.metadata.name}}-{{.Values.groupName.driverBuild}} 4 annotations: specialresource.openshift.io/wait: \"true\" specialresource.openshift.io/driver-container-vendor: simple-kmod specialresource.openshift.io/kernel-affine: \"true\" spec: nodeSelector: node-role.kubernetes.io/worker: \"\" runPolicy: \"Serial\" triggers: - type: \"ConfigChange\" - type: \"ImageChange\" source: git: ref: {{.Values.specialresource.spec.driverContainer.source.git.ref}} uri: {{.Values.specialresource.spec.driverContainer.source.git.uri}} type: Git strategy: dockerStrategy: dockerfilePath: Dockerfile.SRO buildArgs: - name: \"IMAGE\" value: {{ .Values.driverToolkitImage }} {{- range USDarg := .Values.buildArgs }} - name: {{ USDarg.name }} value: {{ USDarg.value }} {{- end }} - name: KVER value: {{ .Values.kernelFullVersion }} output: to: kind: ImageStreamTag name: {{.Values.specialresource.metadata.name}}-{{.Values.groupName.driverContainer}}:v{{.Values.kernelFullVersion}} 5",
"apiVersion: v1 kind: ServiceAccount metadata: name: {{.Values.specialresource.metadata.name}}-{{.Values.groupName.driverContainer}} --- apiVersion: rbac.authorization.k8s.io/v1 kind: Role metadata: name: {{.Values.specialresource.metadata.name}}-{{.Values.groupName.driverContainer}} rules: - apiGroups: - security.openshift.io resources: - securitycontextconstraints verbs: - use resourceNames: - privileged --- apiVersion: rbac.authorization.k8s.io/v1 kind: RoleBinding metadata: name: {{.Values.specialresource.metadata.name}}-{{.Values.groupName.driverContainer}} roleRef: apiGroup: rbac.authorization.k8s.io kind: Role name: {{.Values.specialresource.metadata.name}}-{{.Values.groupName.driverContainer}} subjects: - kind: ServiceAccount name: {{.Values.specialresource.metadata.name}}-{{.Values.groupName.driverContainer}} namespace: {{.Values.specialresource.spec.namespace}} --- apiVersion: apps/v1 kind: DaemonSet metadata: labels: app: {{.Values.specialresource.metadata.name}}-{{.Values.groupName.driverContainer}} name: {{.Values.specialresource.metadata.name}}-{{.Values.groupName.driverContainer}} annotations: specialresource.openshift.io/wait: \"true\" specialresource.openshift.io/state: \"driver-container\" specialresource.openshift.io/driver-container-vendor: simple-kmod specialresource.openshift.io/kernel-affine: \"true\" specialresource.openshift.io/from-configmap: \"true\" spec: updateStrategy: type: OnDelete selector: matchLabels: app: {{.Values.specialresource.metadata.name}}-{{.Values.groupName.driverContainer}} template: metadata: labels: app: {{.Values.specialresource.metadata.name}}-{{.Values.groupName.driverContainer}} spec: priorityClassName: system-node-critical serviceAccount: {{.Values.specialresource.metadata.name}}-{{.Values.groupName.driverContainer}} serviceAccountName: {{.Values.specialresource.metadata.name}}-{{.Values.groupName.driverContainer}} containers: - image: image-registry.openshift-image-registry.svc:5000/{{.Values.specialresource.spec.namespace}}/{{.Values.specialresource.metadata.name}}-{{.Values.groupName.driverContainer}}:v{{.Values.kernelFullVersion}} name: {{.Values.specialresource.metadata.name}}-{{.Values.groupName.driverContainer}} imagePullPolicy: Always command: [\"/sbin/init\"] lifecycle: preStop: exec: command: [\"/bin/sh\", \"-c\", \"systemctl stop kmods-via-containers@{{.Values.specialresource.metadata.name}}\"] securityContext: privileged: true nodeSelector: node-role.kubernetes.io/worker: \"\" feature.node.kubernetes.io/kernel-version.full: \"{{.Values.KernelFullVersion}}\"",
"cd ..",
"apiVersion: v2 name: simple-kmod description: Simple kmod will deploy a simple kmod driver-container icon: https://avatars.githubusercontent.com/u/55542927 type: application version: 0.0.1 appVersion: 1.0.0",
"helm package simple-kmod-0.0.1/",
"Successfully packaged chart and saved it to: /data/<username>/git/<github_username>/special-resource-operator/yaml-for-docs/chart/simple-kmod-0.0.1/simple-kmod-0.0.1.tgz",
"mkdir cm",
"cp simple-kmod-0.0.1.tgz cm/simple-kmod-0.0.1.tgz",
"helm repo index cm --url=cm://simple-kmod/simple-kmod-chart",
"oc create namespace simple-kmod",
"oc create cm simple-kmod-chart --from-file=cm/index.yaml --from-file=cm/simple-kmod-0.0.1.tgz -n simple-kmod",
"apiVersion: sro.openshift.io/v1beta1 kind: SpecialResource metadata: name: simple-kmod spec: #debug: true 1 namespace: simple-kmod chart: name: simple-kmod version: 0.0.1 repository: name: example url: cm://simple-kmod/simple-kmod-chart 2 set: kind: Values apiVersion: sro.openshift.io/v1beta1 kmodNames: [\"simple-kmod\", \"simple-procfs-kmod\"] buildArgs: - name: \"KMODVER\" value: \"SRO\" driverContainer: source: git: ref: \"master\" uri: \"https://github.com/openshift-psap/kvc-simple-kmod.git\"",
"oc create -f simple-kmod-configmap.yaml",
"oc get pods -n simple-kmod",
"NAME READY STATUS RESTARTS AGE simple-kmod-driver-build-12813789169ac0ee-1-build 0/1 Completed 0 7m12s simple-kmod-driver-container-12813789169ac0ee-mjsnh 1/1 Running 0 8m2s simple-kmod-driver-container-12813789169ac0ee-qtkff 1/1 Running 0 8m2s",
"oc logs pod/simple-kmod-driver-build-12813789169ac0ee-1-build -n simple-kmod",
"oc exec -n simple-kmod -it pod/simple-kmod-driver-container-12813789169ac0ee-mjsnh -- lsmod | grep simple",
"simple_procfs_kmod 16384 0 simple_kmod 16384 0",
"mkdir -p charts/acm-simple-kmod-0.0.1/templates",
"cd charts/acm-simple-kmod-0.0.1/templates",
"apiVersion: build.openshift.io/v1 kind: BuildConfig metadata: labels: app: {{ printf \"%s-%s\" .Values.specialResourceModule.metadata.name .Values.kernelFullVersion | replace \".\" \"-\" | replace \"_\" \"-\" | trunc 63 }} name: {{ printf \"%s-%s\" .Values.specialResourceModule.metadata.name .Values.kernelFullVersion | replace \".\" \"-\" | replace \"_\" \"-\" | trunc 63 }} annotations: specialresource.openshift.io/wait: \"true\" spec: nodeSelector: node-role.kubernetes.io/worker: \"\" runPolicy: \"Serial\" triggers: - type: \"ConfigChange\" - type: \"ImageChange\" source: dockerfile: | FROM {{ .Values.driverToolkitImage }} as builder WORKDIR /build/ RUN git clone -b {{.Values.specialResourceModule.spec.set.git.ref}} {{.Values.specialResourceModule.spec.set.git.uri}} WORKDIR /build/simple-kmod RUN make all install KVER={{ .Values.kernelFullVersion }} FROM registry.redhat.io/ubi8/ubi-minimal RUN microdnf -y install kmod COPY --from=builder /etc/driver-toolkit-release.json /etc/ COPY --from=builder /lib/modules/{{ .Values.kernelFullVersion }}/* /lib/modules/{{ .Values.kernelFullVersion }}/ strategy: dockerStrategy: dockerfilePath: Dockerfile.SRO buildArgs: - name: \"IMAGE\" value: {{ .Values.driverToolkitImage }} {{- range USDarg := .Values.buildArgs }} - name: {{ USDarg.name }} value: {{ USDarg.value }} {{- end }} - name: KVER value: {{ .Values.kernelFullVersion }} output: to: kind: DockerImage name: {{.Values.registry}}/{{.Values.specialResourceModule.metadata.name}}-{{.Values.groupName.driverContainer}}:{{.Values.kernelFullVersion}}",
"apiVersion: policy.open-cluster-management.io/v1 kind: Policy metadata: name: policy-{{.Values.specialResourceModule.metadata.name}}-ds annotations: policy.open-cluster-management.io/categories: CM Configuration Management policy.open-cluster-management.io/controls: CM-2 Baseline Configuration policy.open-cluster-management.io/standards: NIST-CSF spec: remediationAction: enforce disabled: false policy-templates: - objectDefinition: apiVersion: policy.open-cluster-management.io/v1 kind: ConfigurationPolicy metadata: name: config-{{.Values.specialResourceModule.metadata.name}}-ds spec: remediationAction: enforce severity: low namespaceselector: exclude: - kube-* include: - '*' object-templates: - complianceType: musthave objectDefinition: apiVersion: v1 kind: Namespace metadata: name: {{.Values.specialResourceModule.spec.namespace}} - complianceType: mustonlyhave objectDefinition: apiVersion: v1 kind: ServiceAccount metadata: name: {{.Values.specialResourceModule.metadata.name}} namespace: {{.Values.specialResourceModule.spec.namespace}} - complianceType: mustonlyhave objectDefinition: apiVersion: rbac.authorization.k8s.io/v1 kind: Role metadata: name: {{.Values.specialResourceModule.metadata.name}} namespace: {{.Values.specialResourceModule.spec.namespace}} rules: - apiGroups: - security.openshift.io resources: - securitycontextconstraints verbs: - use resourceNames: - privileged - complianceType: mustonlyhave objectDefinition: apiVersion: rbac.authorization.k8s.io/v1 kind: RoleBinding metadata: name: {{.Values.specialResourceModule.metadata.name}} namespace: {{.Values.specialResourceModule.spec.namespace}} roleRef: apiGroup: rbac.authorization.k8s.io kind: Role name: {{.Values.specialResourceModule.metadata.name}} subjects: - kind: ServiceAccount name: {{.Values.specialResourceModule.metadata.name}} namespace: {{.Values.specialResourceModule.spec.namespace}} - complianceType: musthave objectDefinition: apiVersion: apps/v1 kind: DaemonSet metadata: labels: app: {{ printf \"%s-%s\" .Values.specialResourceModule.metadata.name .Values.kernelFullVersion | replace \".\" \"-\" | replace \"_\" \"-\" | trunc 63 }} name: {{ printf \"%s-%s\" .Values.specialResourceModule.metadata.name .Values.kernelFullVersion | replace \".\" \"-\" | replace \"_\" \"-\" | trunc 63 }} namespace: {{.Values.specialResourceModule.spec.namespace}} spec: updateStrategy: type: OnDelete selector: matchLabels: app: {{ printf \"%s-%s\" .Values.specialResourceModule.metadata.name .Values.kernelFullVersion | replace \".\" \"-\" | replace \"_\" \"-\" | trunc 63 }} template: metadata: labels: app: {{ printf \"%s-%s\" .Values.specialResourceModule.metadata.name .Values.kernelFullVersion | replace \".\" \"-\" | replace \"_\" \"-\" | trunc 63 }} spec: priorityClassName: system-node-critical serviceAccount: {{.Values.specialResourceModule.metadata.name}} serviceAccountName: {{.Values.specialResourceModule.metadata.name}} containers: - image: {{.Values.registry}}/{{.Values.specialResourceModule.metadata.name}}-{{.Values.groupName.driverContainer}}:{{.Values.kernelFullVersion}} name: {{.Values.specialResourceModule.metadata.name}} imagePullPolicy: Always command: [sleep, infinity] lifecycle: preStop: exec: command: [\"modprobe\", \"-r\", \"-a\" , \"simple-kmod\", \"simple-procfs-kmod\"] securityContext: privileged: true",
"apiVersion: apps.open-cluster-management.io/v1 kind: PlacementRule metadata: name: {{.Values.specialResourceModule.metadata.name}}-placement spec: clusterConditions: - status: \"True\" type: ManagedClusterConditionAvailable clusterSelector: matchExpressions: - key: name operator: NotIn values: - local-cluster --- apiVersion: policy.open-cluster-management.io/v1 kind: PlacementBinding metadata: name: {{.Values.specialResourceModule.metadata.name}}-binding placementRef: apiGroup: apps.open-cluster-management.io kind: PlacementRule name: {{.Values.specialResourceModule.metadata.name}}-placement subjects: - apiGroup: policy.open-cluster-management.io kind: Policy name: policy-{{.Values.specialResourceModule.metadata.name}}-ds",
"cd ..",
"apiVersion: v2 name: acm-simple-kmod description: Build ACM enabled simple-kmod driver with SpecialResourceOperator icon: https://avatars.githubusercontent.com/u/55542927 type: application version: 0.0.1 appVersion: 1.6.4",
"helm package acm-simple-kmod-0.0.1/",
"Successfully packaged chart and saved it to: <directory>/charts/acm-simple-kmod-0.0.1.tgz",
"mkdir cm",
"cp acm-simple-kmod-0.0.1.tgz cm/acm-simple-kmod-0.0.1.tgz",
"helm repo index cm --url=cm://acm-simple-kmod/acm-simple-kmod-chart",
"oc create namespace acm-simple-kmod",
"oc create cm acm-simple-kmod-chart --from-file=cm/index.yaml --from-file=cm/acm-simple-kmod-0.0.1.tgz -n acm-simple-kmod",
"apiVersion: sro.openshift.io/v1beta1 kind: SpecialResourceModule metadata: name: acm-simple-kmod spec: namespace: acm-simple-kmod chart: name: acm-simple-kmod version: 0.0.1 repository: name: acm-simple-kmod url: cm://acm-simple-kmod/acm-simple-kmod-chart set: kind: Values apiVersion: sro.openshift.io/v1beta1 buildArgs: - name: \"KMODVER\" value: \"SRO\" registry: <your_registry> 1 git: ref: master uri: https://github.com/openshift-psap/kvc-simple-kmod.git watch: - path: \"USD.metadata.labels.openshiftVersion\" apiVersion: cluster.open-cluster-management.io/v1 kind: ManagedCluster name: spoke1",
"oc apply -f charts/examples/acm-simple-kmod.yaml",
"KUBECONFIG=~/hub/auth/kubeconfig oc get pod -n acm-simple-kmod",
"NAME READY STATUS RESTARTS AGE acm-simple-kmod-4-18-0-305-34-2-el8-4-x86-64-1-build 0/1 Completed 0 42m",
"KUBECONFIG=~/hub/auth/kubeconfig oc get placementrules,placementbindings,policies -n acm-simple-kmod",
"NAME AGE REPLICAS placementrule.apps.open-cluster-management.io/acm-simple-kmod-placement 40m NAME AGE placementbinding.policy.open-cluster-management.io/acm-simple-kmod-binding 40m NAME REMEDIATION ACTION COMPLIANCE STATE AGE policy.policy.open-cluster-management.io/policy-acm-simple-kmod-ds enforce Compliant 40m",
"KUBECONFIG=~/hub/auth/kubeconfig oc get specialresourcemodule acm-simple-kmod -o json | jq -r '.status'",
"{ \"versions\": { \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6a3330ef5a178435721ff4efdde762261a9c55212e9b4534385e04037693fbe4\": { \"complete\": true } } }",
"KUBECONFIG=~/spoke1/kubeconfig oc get ds,pod -n acm-simple-kmod",
"AME DESIRED CURRENT READY UP-TO-DATE AVAILABLE NODE SELECTOR AGE daemonset.apps/acm-simple-kmod-4-18-0-305-45-1-el8-4-x86-64 3 3 3 3 3 <none> 26m NAME READY STATUS RESTARTS AGE pod/acm-simple-kmod-4-18-0-305-45-1-el8-4-x86-64-brw78 1/1 Running 0 26m pod/acm-simple-kmod-4-18-0-305-45-1-el8-4-x86-64-fqh5h 1/1 Running 0 26m pod/acm-simple-kmod-4-18-0-305-45-1-el8-4-x86-64-m9sfd 1/1 Running 0 26m"
]
| https://docs.redhat.com/en/documentation/openshift_container_platform/4.11/html/specialized_hardware_and_driver_enablement/special-resource-operator |
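A quick way to check the result on the spoke cluster is to query the DaemonSet directly. The following is a minimal sketch, assuming the kubeconfig path used in the earlier examples; the DaemonSet name encodes your kernel version, so substitute the name reported by oc get ds:

KUBECONFIG=~/spoke1/kubeconfig oc get ds -n acm-simple-kmod
# The image installs kmod, so lsmod is available inside the container and reports the host kernel's modules.
KUBECONFIG=~/spoke1/kubeconfig oc exec -n acm-simple-kmod daemonset/acm-simple-kmod-4-18-0-305-45-1-el8-4-x86-64 -- lsmod | grep simple

If the simple-kmod modules are loaded on the node, simple_kmod and simple_procfs_kmod appear in the output; the preStop hook shown in the policy removes them when the pod stops.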
Architecture | Architecture Red Hat OpenShift Service on AWS 4 Architecture overview. Red Hat OpenShift Documentation Team | null | https://docs.redhat.com/en/documentation/red_hat_openshift_service_on_aws/4/html/architecture/index |
Migrating applications to Red Hat build of Quarkus 3.15 | Migrating applications to Red Hat build of Quarkus 3.15 Red Hat build of Quarkus 3.15 Red Hat Customer Content Services | null | https://docs.redhat.com/en/documentation/red_hat_build_of_quarkus/3.15/html/migrating_applications_to_red_hat_build_of_quarkus_3.15/index |
Chapter 6. Installing the load balancer | Chapter 6. Installing the load balancer The following example provides general guidance for configuring an HAProxy load balancer using Red Hat Enterprise Linux 8 server. However, you can install any suitable load balancing software solution that supports TCP forwarding. Procedure Install HAProxy: Install the following package that includes the semanage tool: Configure SELinux to allow HAProxy to bind any port: Configure the load balancer to balance the network load for the ports as described in Table 6.1, "Ports configuration for the load balancer" . For example, to configure ports for HAProxy, edit the /etc/haproxy/haproxy.cfg file to correspond with the table. For more information, see Configuration example for haproxy.cfg for HAProxy load balancer with Satellite 6 in the Red Hat Knowledgebase . Table 6.1. Ports configuration for the load balancer Service Port Mode Balance Mode Destination HTTP 80 TCP roundrobin port 80 on all Capsule Servers HTTPS and RHSM 443 TCP source port 443 on all Capsule Servers Anaconda for template retrieval 8000 TCP roundrobin port 8000 on all Capsule Servers Puppet ( Optional ) 8140 TCP roundrobin port 8140 on all Capsule Servers PuppetCA ( Optional ) 8141 TCP roundrobin port 8140 only on the system where you configure Capsule Server to sign Puppet certificates Capsule HTTPS for Host Registration and optionally OpenSCAP 9090 TCP roundrobin port 9090 on all Capsule Servers Configure the load balancer to disable SSL offloading and allow client-side SSL certificates to pass through to back end servers. This is required because communication from clients to Capsule Servers depends on client-side SSL certificates. Start and enable the HAProxy service: | [
"dnf install haproxy",
"dnf install policycoreutils-python-utils",
"semanage boolean --modify --on haproxy_connect_any",
"systemctl enable --now haproxy"
]
| https://docs.redhat.com/en/documentation/red_hat_satellite/6.15/html/configuring_capsules_with_a_load_balancer/Installing_the_Load_Balancer_load-balancing |
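As an illustration of the table, a single frontend/backend pair for the HTTPS and RHSM port might look like the following sketch. This is an assumed example, not the documented configuration: the Capsule host names are placeholders, and a complete setup needs an equivalent pair for every port in the table, using balance roundrobin where the table calls for it.

# Append one proxy pair per load-balanced port; host names below are placeholders.
cat >> /etc/haproxy/haproxy.cfg <<'EOF'
frontend capsule-https
    bind *:443
    mode tcp
    default_backend capsule-https-servers

backend capsule-https-servers
    mode tcp
    balance source
    server capsule01 capsule01.example.com:443 check
    server capsule02 capsule02.example.com:443 check
EOF
systemctl restart haproxy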
Chapter 4. Pools overview | Chapter 4. Pools overview Ceph clients store data in pools. When you create pools, you are creating an I/O interface for clients to store data. From the perspective of a Ceph client, that is, block device, gateway, and the rest, interacting with the Ceph storage cluster is remarkably simple: Create a cluster handle. Connect the cluster handle to the cluster. Create an I/O context for reading and writing objects and their extended attributes. Creating a cluster handle and connecting to the cluster To connect to the Ceph storage cluster, the Ceph client needs the following details: The cluster name, which is ceph by default. The cluster name is not usually specified explicitly because specifying it can be ambiguous. An initial monitor address. Ceph clients usually retrieve these parameters from the Ceph configuration file at its default path, but a user can also specify them on the command line. The Ceph client also provides a user name and secret key; authentication is enabled by default. Then, the client contacts the Ceph monitor cluster and retrieves a recent copy of the cluster map, including its monitors, OSDs and pools. Creating a pool I/O context To read and write data, the Ceph client creates an I/O context to a specific pool in the Ceph storage cluster. If the specified user has permissions for the pool, the Ceph client can read from and write to the specified pool. Ceph's architecture enables the storage cluster to provide this remarkably simple interface to Ceph clients so that clients might select one of the sophisticated storage strategies you define simply by specifying a pool name and creating an I/O context. Storage strategies are invisible to the Ceph client in all but capacity and performance. Similarly, the complexities of Ceph clients, such as mapping objects into a block device representation or providing an S3/Swift RESTful service, are invisible to the Ceph storage cluster. A pool provides you with resilience, placement groups, CRUSH rules, and quotas. Resilience : You can set how many OSDs are allowed to fail without losing data. For replicated pools, it is the desired number of copies or replicas of an object. A typical configuration stores an object and one additional copy, that is, size = 2 , but you can determine the number of copies or replicas. For erasure coded pools, it is the number of coding chunks, that is m=2 in the erasure code profile . Placement Groups : You can set the number of placement groups for the pool. A typical configuration uses approximately 50-100 placement groups per OSD to provide optimal balancing without using up too many computing resources. When setting up multiple pools, be careful to ensure you set a reasonable number of placement groups for both the pool and the cluster as a whole. CRUSH Rules : When you store data in a pool, a CRUSH rule mapped to the pool enables CRUSH to identify the rule for the placement of each object and its replicas, or chunks for erasure coded pools, in your cluster. You can create a custom CRUSH rule for your pool. Quotas : When you set quotas on a pool with the ceph osd pool set-quota command, you can limit the maximum number of objects or the maximum number of bytes stored in the specified pool. 4.1. Pools and storage strategies overview To manage pools, you can list, create, and remove pools. You can also view the utilization statistics for each pool. 4.2. Listing pool List your cluster's pools: Example 4.3.
Creating a pool Before creating pools, see the Configuration Guide for more details. It is better to adjust the default value for the number of placement groups, as the default value does not have to suit your needs: Example Create a replicated pool: Syntax Create an erasure-coded pool: Syntax Create a bulk pool: Syntax Where: POOL_NAME Description The name of the pool. It must be unique. Type String Required Yes. If not specified, it is set to the default value. Default ceph PG_NUM Description The total number of placement groups for the pool. See the Placement Groups section and the Ceph Placement Groups (PGs) per Pool Calculator for details on calculating a suitable number. The default value 8 is not suitable for most systems. Type Integer Required Yes Default 8 PGP_NUM Description The total number of placement groups for placement purposes. This value must be equal to the total number of placement groups, except for placement group splitting scenarios. Type Integer Required Yes. If not specified it is set to the default value. Default 8 replicated or erasure Description The pool type can be either replicated to recover from lost OSDs by keeping multiple copies of the objects or erasure to get a kind of generalized RAID5 capability. The replicated pools require more raw storage but implement all Ceph operations. The erasure-coded pools require less raw storage but only implement a subset of the available operations. Type String Required No Default replicated CRUSH_RULE_NAME Description The name of the CRUSH rule for the pool. The rule MUST exist. For replicated pools, the name is the rule specified by the osd_pool_default_crush_rule configuration setting. For erasure-coded pools the name is erasure-code if you specify the default erasure code profile or POOL_NAME otherwise. Ceph creates this rule with the specified name implicitly if the rule does not already exist. Type String Required No Default Uses erasure-code for an erasure-coded pool. For replicated pools, it uses the value of the osd_pool_default_crush_rule variable from the Ceph configuration. EXPECTED_NUMBER_OBJECTS Description The expected number of objects for the pool. Ceph splits the placement groups at pool creation time to avoid the latency impact to perform runtime directory splitting. Type Integer Required No Default 0 , no splitting at the pool creation time. ERASURE_CODE_PROFILE Description For erasure-coded pools only. Use the erasure code profile. It must be an existing profile as defined by the osd erasure-code-profile set variable in the Ceph configuration file. For further information, see the Erasure Code Profiles section. Type String Required No When you create a pool, set the number of placement groups to a reasonable value, for example to 100 . Consider the total number of placement groups per OSD. Placement groups are computationally expensive, so performance degrades when you have many pools with many placement groups, for example, 50 pools with 100 placement groups each. The point of diminishing returns depends upon the power of the OSD host. Additional Resources See the Placement Groups section and Ceph Placement Groups (PGs) per Pool Calculator for details on calculating an appropriate number of placement groups for your pool. 4.4. Setting pool quota You can set pool quotas for the maximum number of bytes and the maximum number of objects per pool. Syntax Example To remove a quota, set its value to 0 . 
Note In-flight write operations might overrun pool quotas for a short time until Ceph propagates the pool usage across the cluster. This is normal behavior. Enforcing pool quotas on in-flight write operations would impose significant performance penalties. 4.5. Deleting a pool Delete a pool: Syntax Important To protect data, storage administrators cannot delete pools by default. Set the mon_allow_pool_delete configuration option before deleting pools. If a pool has its own rule, consider removing it after deleting the pool. If a pool has users strictly for its own use, consider deleting those users after deleting the pool. 4.6. Renaming a pool Rename a pool: Syntax If you rename a pool and you have per-pool capabilities for an authenticated user, you must update the user's capabilities, that is caps, with the new pool name. 4.7. Migrating a pool Sometimes it is necessary to migrate all objects from one pool to another. This is done in cases such as needing to change parameters that cannot be modified on a specific pool. For example, needing to reduce the number of placement groups of a pool. Important When a workload is using only Ceph Block Device images, follow the procedures documented for moving and migrating a pool within the Red Hat Ceph Storage Block Device Guide : Moving images between pools Migrating pools The migration methods described for Ceph Block Device are recommended over those documented here. Using the rados cppool command does not preserve all snapshots and snapshot-related metadata, resulting in an unfaithful copy of the data. For example, copying an RBD pool does not completely copy the image. In this case, snaps are not present and will not work properly. The cppool command also does not preserve the user_version field that some librados users may rely on. If migrating a pool is necessary and your user workloads contain images other than Ceph Block Devices, continue with one of the procedures documented here. Prerequisites If using the rados cppool command: Read-only access to the pool is required. Only use this command if you do not have RBD images, their snaps, or user_version values consumed by librados. If using the local drive RADOS commands, verify that sufficient cluster space is available. Two, three, or more copies of data will be present as per the pool replication factor. Procedure Method one - the recommended direct way Copy all objects with the rados cppool command. Important Read-only access to the pool is required during copy. Syntax Example Method two - using a local drive Use the rados export and rados import commands and a temporary local directory to save all exported data. Syntax Example Required. Stop all I/O to the source pool. Required. Resynchronize all modified objects. Syntax Example 4.8. Viewing pool statistics Show a pool's utilization statistics: Example 4.9. Setting pool values Set a value to a pool: Syntax The Pool Values section lists all key-value pairs that you can set. 4.10. Getting pool values Get a value from a pool: Syntax You can view the list of all key-value pairs that you might get in the Pool Values section. 4.11. Enabling a client application Red Hat Ceph Storage provides additional protection for pools to prevent unauthorized types of clients from writing data to the pool. This means that system administrators must expressly enable pools to receive I/O operations from Ceph Block Device, Ceph Object Gateway, Ceph Filesystem, or a custom application.
Enable a client application to conduct I/O operations on a pool: Syntax Where APP is: cephfs for the Ceph Filesystem. rbd for the Ceph Block Device. rgw for the Ceph Object Gateway. Note Specify a different APP value for a custom application. Important A pool that is not enabled will generate a HEALTH_WARN status. In that scenario, the output for ceph health detail -f json-pretty gives the following output: Note Initialize pools for the Ceph Block Device with rbd pool init POOL_NAME . 4.12. Disabling a client application Disable a client application from conducting I/O operations on a pool: Syntax Where APP is: cephfs for the Ceph Filesystem. rbd for the Ceph Block Device. rgw for the Ceph Object Gateway. Note Specify a different APP value for a custom application. 4.13. Setting application metadata Provides the functionality to set key-value pairs describing attributes of the client application. Set client application metadata on a pool: Syntax Where APP is: cephfs for the Ceph Filesystem. rbd for the Ceph Block Device rgw for the Ceph Object Gateway Note Specify a different APP value for a custom application. 4.14. Removing application metadata Remove client application metadata on a pool: Syntax Where APP is: cephfs for the Ceph Filesystem. rbd for the Ceph Block Device rgw for the Ceph Object Gateway Note Specify a different APP value for a custom application. 4.15. Setting the number of object replicas Set the number of object replicas on a replicated pool: Syntax You can run this command for each pool. Important The NUMBER_OF_REPLICAS parameter includes the object itself. If you want to include the object and two copies of the object for a total of three instances of the object, specify 3 . Example Note An object might accept I/O operations in degraded mode with fewer replicas than specified by the pool size setting. To set a minimum number of required replicas for I/O, use the min_size setting. Example This ensures that no object in the data pool receives an I/O with fewer replicas than specified by the min_size setting. 4.16. Getting the number of object replicas Get the number of object replicas: Example Ceph lists the pools, with the replicated size attribute highlighted. By default, Ceph creates two replicas of an object, that is a total of three copies, or a size of 3 . 4.17. Pool values The following list contains key-values pairs that you can set or get. For further information, see the Set Pool Values and Getting Pool Values sections. size Description Specifies the number of replicas for objects in the pool. See the Setting the Number of Object Replicas section for further details. Applicable for the replicated pools only. Type Integer min_size Description Specifies the minimum number of replicas required for I/O. See the Setting the Number of Object Replicas section for further details. For erasure-coded pools, this should be set to a value greater than k . If I/O is allowed at the value k , then there is no redundancy and data is lost in the event of a permanent OSD failure. For more information, see Erasure code pools overview . Type Integer crash_replay_interval Description Specifies the number of seconds to allow clients to replay acknowledged, but uncommitted requests. Type Integer pg-num Description The total number of placement groups for the pool. See the Pool, placement groups, and CRUSH Configuration Reference section in the Red Hat Ceph Storage Configuration Guide for details on calculating a suitable number. The default value 8 is not suitable for most systems. 
Type Integer Required Yes. Default 8 pgp-num Description The total number of placement groups for placement purposes. This should be equal to the total number of placement groups , except for placement group splitting scenarios. Type Integer Required Yes. Picks up default or Ceph configuration value if not specified. Default 8 Valid Range Equal to or less than what specified by the pg_num variable. crush_rule Description The rule to use for mapping object placement in the cluster. Type String hashpspool Description Enable or disable the HASHPSPOOL flag on a given pool. With this option enabled, pool hashing and placement group mapping are changed to improve the way pools and placement groups overlap. Type Integer Valid Range 1 enables the flag, 0 disables the flag. Important Do not enable this option on production pools of a cluster with a large amount of OSDs and data. All placement groups in the pool would have to be remapped causing too much data movement. fast_read Description On a pool that uses erasure coding, if this flag is enabled, the read request issues subsequent reads to all shards, and waits until it receives enough shards to decode to serve the client. In the case of the jerasure and isa erasure plug-ins, once the first K replies return, the client's request is served immediately using the data decoded from these replies. This helps to allocate some resources for better performance. Currently this flag is only supported for erasure coding pools. Type Boolean Defaults 0 allow_ec_overwrites Description Whether writes to an erasure coded pool can update part of an object, so the Ceph Filesystem and Ceph Block Device can use it. Type Boolean compression_algorithm Description Sets inline compression algorithm to use with the BlueStore storage backend. This setting overrides the bluestore_compression_algorithm configuration setting. Type String Valid Settings lz4 , snappy , zlib , zstd compression_mode Description Sets the policy for the inline compression algorithm for the BlueStore storage backend. This setting overrides the bluestore_compression_mode configuration setting. Type String Valid Settings none , passive , aggressive , force compression_min_blob_size Description BlueStore does not compress chunks smaller than this size. This setting overrides the bluestore_compression_min_blob_size configuration setting. Type Unsigned Integer compression_max_blob_size Description BlueStore breaks chunks larger than this size into smaller blobs of compression_max_blob_size before compressing the data. Type Unsigned Integer nodelete Description Set or unset the NODELETE flag on a given pool. Type Integer Valid Range 1 sets flag. 0 unsets flag. nopgchange Description Set or unset the NOPGCHANGE flag on a given pool. Type Integer Valid Range 1 sets the flag. 0 unsets the flag. nosizechange Description Set or unset the NOSIZECHANGE flag on a given pool. Type Integer Valid Range 1 sets the flag. 0 unsets the flag. write_fadvise_dontneed Description Set or unset the WRITE_FADVISE_DONTNEED flag on a given pool. Type Integer Valid Range 1 sets the flag. 0 unsets the flag. noscrub Description Set or unset the NOSCRUB flag on a given pool. Type Integer Valid Range 1 sets the flag. 0 unsets the flag. nodeep-scrub Description Set or unset the NODEEP_SCRUB flag on a given pool. Type Integer Valid Range 1 sets the flag. 0 unsets the flag. scrub_min_interval Description The minimum interval in seconds for pool scrubbing when load is low. 
If it is 0 , Ceph uses the osd_scrub_min_interval configuration setting. Type Double Default 0 scrub_max_interval Description The maximum interval in seconds for pool scrubbing irrespective of cluster load. If it is 0 , Ceph uses the osd_scrub_max_interval configuration setting. Type Double Default 0 deep_scrub_interval Description The interval in seconds for pool 'deep' scrubbing. If it is 0 , Ceph uses the osd_deep_scrub_interval configuration setting. Type Double Default 0 | [
"ceph osd lspools",
"ceph config set global osd_pool_default_pg_num 250 ceph config set global osd_pool_default_pgp_num 250",
"ceph osd pool create POOL_NAME PG_NUM PGP_NUM [replicated] [ CRUSH_RULE_NAME ] [ EXPECTED_NUMBER_OBJECTS ]",
"ceph osd pool create POOL_NAME PG_NUM PGP_NUM erasure [ ERASURE_CODE_PROFILE ] [ CRUSH_RULE_NAME ] [ EXPECTED_NUMBER_OBJECTS ]",
"ceph osd pool create POOL_NAME [--bulk]",
"ceph osd pool set-quota POOL_NAME [max_objects OBJECT_COUNT ] [max_bytes BYTES ]",
"ceph osd pool set-quota data max_objects 10000",
"ceph osd pool delete POOL_NAME [ POOL_NAME --yes-i-really-really-mean-it]",
"ceph osd pool rename CURRENT_POOL_NAME NEW_POOL_NAME",
"ceph osd pool create NEW_POOL PG_NUM [ <other new pool parameters> ] rados cppool SOURCE_POOL NEW_POOL ceph osd pool rename SOURCE_POOL NEW_SOURCE_POOL_NAME ceph osd pool rename NEW_POOL SOURCE_POOL",
"ceph osd pool create pool1 250 rados cppool pool2 pool1 ceph osd pool rename pool2 pool3 ceph osd pool rename pool1 pool2",
"ceph osd pool create NEW_POOL PG_NUM [ <other new pool parameters> ] rados export --create SOURCE_POOL FILE_PATH rados import FILE_PATH NEW_POOL",
"ceph osd pool create pool1 250 rados export --create pool2 <path of export file> rados import <path of export file> pool1",
"rados export --workers 5 SOURCE_POOL FILE_PATH rados import --workers 5 FILE_PATH NEW_POOL",
"rados export --workers 5 pool2 <path of export file> rados import --workers 5 <path of export file> pool1",
"[ceph: root@host01 /] rados df",
"ceph osd pool set POOL_NAME KEY VALUE",
"ceph osd pool get POOL_NAME KEY",
"ceph osd pool application enable POOL_NAME APP {--yes-i-really-mean-it}",
"{ \"checks\": { \"POOL_APP_NOT_ENABLED\": { \"severity\": \"HEALTH_WARN\", \"summary\": { \"message\": \"application not enabled on 1 pool(s)\" }, \"detail\": [ { \"message\": \"application not enabled on pool '_POOL_NAME_'\" }, { \"message\": \"use 'ceph osd pool application enable _POOL_NAME_ _APP_', where _APP_ is 'cephfs', 'rbd', 'rgw', or freeform for custom applications.\" } ] } }, \"status\": \"HEALTH_WARN\", \"overall_status\": \"HEALTH_WARN\", \"detail\": [ \"'ceph health' JSON format has changed in luminous. If you see this your monitoring system is scraping the wrong fields. Disable this with 'mon health preluminous compat warning = false'\" ] }",
"ceph osd pool application disable POOL_NAME APP {--yes-i-really-mean-it}",
"ceph osd pool application set POOL_NAME APP KEY",
"ceph osd pool application rm POOL_NAME APP KEY",
"ceph osd pool set POOL_NAME size NUMBER_OF_REPLICAS",
"ceph osd pool set data size 3",
"ceph osd pool set data min_size 2",
"ceph osd dump | grep 'replicated size'"
]
| https://docs.redhat.com/en/documentation/red_hat_ceph_storage/7/html/storage_strategies_guide/pools-overview_strategy |
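To tie the syntax in this chapter together, the following is a minimal sketch that creates a replicated pool with an arbitrary name and placement group count, sets the replica count and a quota, enables and initializes the rbd application, and checks utilization. The pool name and pg_num values are placeholders; adjust them for your cluster.

ceph osd pool create example-pool 128 128 replicated
ceph osd pool set example-pool size 3
ceph osd pool set-quota example-pool max_objects 10000
ceph osd pool application enable example-pool rbd
rbd pool init example-pool
rados df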
2.2. Red Hat Cluster Suite | 2.2. Red Hat Cluster Suite Red Hat GFS runs with Red Hat Cluster Suite 4.0 or later. The Red Hat Cluster Suite software must be installed on the cluster nodes before you can install and run Red Hat GFS. Note Red Hat Cluster Suite 4.0 and later provides the infrastructure for application failover in the cluster and network communication among GFS nodes (and other Red Hat Cluster Suite nodes). | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/4/html/global_file_system/s1-sysreq-rhcs |
function::symdata | function::symdata Name function::symdata - Return the kernel symbol and module offset for the address. Synopsis Arguments addr The address to translate. General Syntax symdata:string(addr:long) Description Returns the (function) symbol name associated with the given address if known, the offset from the start and size of the symbol, plus module name (between brackets). If symbol is unknown, but module is known, the offset inside the module, plus the size of the module is added. If any element is not known it will be omitted and if the symbol name is unknown it will return the hex string for the given address. | [
"function symdata:string(addr:long)"
]
| https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/systemtap_tapset_reference/api-symdata |
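As a hedged illustration of typical usage, the following one-liner is an assumed example rather than one taken from the reference: it probes an arbitrary kernel function and prints the symbol data for the probe address returned by addr().

stap -e 'probe kernel.function("vfs_read") { printf("%s\n", symdata(addr())); exit() }'

The output is expected to look something like vfs_read+0x0/0x... [kernel], matching the format described above.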
Chapter 4. Specifics of Individual Software Collections | Chapter 4. Specifics of Individual Software Collections This chapter is focused on the specifics of certain Software Collections and provides additional details concerning these components. 4.1. Red Hat Developer Toolset Red Hat Developer Toolset is designed for developers working on the Red Hat Enterprise Linux platform. Red Hat Developer Toolset provides current versions of the GNU Compiler Collection , GNU Debugger , and other development, debugging, and performance monitoring tools. Similarly to other Software Collections, an additional set of tools is installed into the /opt/ directory. These tools are enabled by the user on demand using the supplied scl utility. Similarly to other Software Collections, these do not replace the Red Hat Enterprise Linux system versions of these tools, nor will they be used in preference to those system versions unless explicitly invoked using the scl utility. For an overview of features, refer to the Main Features section of the Red Hat Developer Toolset Release Notes . For a complete list of components, see the Red Hat Developer Toolset Components table in the Red Hat Developer Toolset User Guide . Note that since Red Hat Developer Toolset 3.1, Red Hat Developer Toolset requires the rh-java-common Software Collection. 4.2. Ruby on Rails 5.0 Red Hat Software Collections 3.2 provides the rh-ruby24 Software Collection together with the rh-ror50 Collection. To install Ruby on Rails 5.0 , type the following command as root : yum install rh-ror50 Installing any package from the rh-ror50 Software Collection automatically pulls in rh-ruby24 and rh-nodejs6 as dependencies. The rh-nodejs6 Collection is used by certain gems in an asset pipeline to post-process web resources, for example, sass or coffee-script source files. Additionally, the Action Cable framework uses rh-nodejs6 for handling WebSockets in Rails. To run the rails s command without requiring rh-nodejs6 , disable the coffee-rails and uglifier gems in the Gemfile . To run Ruby on Rails without Node.js , run the following command, which will automatically enable rh-ruby24 : scl enable rh-ror50 bash To run Ruby on Rails with all features, enable also the rh-nodejs6 Software Collection: scl enable rh-ror50 rh-nodejs6 bash The rh-ror50 Software Collection is supported together with the rh-ruby24 and rh-nodejs6 components. 4.3. MongoDB 3.6 The rh-mongodb36 Software Collection is available only for Red Hat Enterprise Linux 7. See Section 4.4, "MongoDB 3.4" for instructions on how to use MongoDB 3.4 on Red Hat Enterprise Linux 6. To install the rh-mongodb36 collection, type the following command as root : yum install rh-mongodb36 To run the MongoDB shell utility, type the following command: scl enable rh-mongodb36 'mongo' Note The rh-mongodb36-mongo-cxx-driver package has been built with the -std=gnu++14 option using GCC from Red Hat Developer Toolset 6. Binaries using the shared library for the MongoDB C++ Driver that use C++11 (or later) features have to be built also with Red Hat Developer Toolset 6 or later. See C++ compatibility details in the Red Hat Developer Toolset 6 User Guide . 
To start the MongoDB daemon, type the following command as root : systemctl start rh-mongodb36-mongod.service To start the MongoDB daemon on boot, type this command as root : systemctl enable rh-mongodb36-mongod.service To start the MongoDB sharding server, type the following command as root : systemctl start rh-mongodb36-mongos.service To start the MongoDB sharding server on boot, type this command as root : systemctl enable rh-mongodb36-mongos.service Note that the MongoDB sharding server does not work unless the user starts at least one configuration server and specifies it in the mongos.conf file. 4.4. MongoDB 3.4 To install the rh-mongodb34 collection, type the following command as root : yum install rh-mongodb34 To run the MongoDB shell utility, type the following command: scl enable rh-mongodb34 'mongo' Note The rh-mongodb34-mongo-cxx-driver package has been built with the -std=gnu++14 option using GCC from Red Hat Developer Toolset 6. Binaries using the shared library for the MongoDB C++ Driver that use C++11 (or later) features have to be built also with Red Hat Developer Toolset 6. See C++ compatibility details in the Red Hat Developer Toolset 6 User Guide . MongoDB 3.4 on Red Hat Enterprise Linux 6 If you are using Red Hat Enterprise Linux 6, the following instructions apply to your system. To start the MongoDB daemon, type the following command as root : service rh-mongodb34-mongod start To start the MongoDB daemon on boot, type this command as root : chkconfig rh-mongodb34-mongod on To start the MongoDB sharding server, type this command as root : service rh-mongodb34-mongos start To start the MongoDB sharding server on boot, type the following command as root : chkconfig rh-mongodb34-mongos on Note that the MongoDB sharding server does not work unless the user starts at least one configuration server and specifies it in the mongos.conf file. MongoDB 3.4 on Red Hat Enterprise Linux 7 When using Red Hat Enterprise Linux 7, the following commands are applicable. To start the MongoDB daemon, type the following command as root : systemctl start rh-mongodb34-mongod.service To start the MongoDB daemon on boot, type this command as root : systemctl enable rh-mongodb34-mongod.service To start the MongoDB sharding server, type the following command as root : systemctl start rh-mongodb34-mongos.service To start the MongoDB sharding server on boot, type this command as root : systemctl enable rh-mongodb34-mongos.service Note that the MongoDB sharding server does not work unless the user starts at least one configuration server and specifies it in the mongos.conf file. 4.5. Maven The rh-maven35 Software Collection, available only for Red Hat Enterprise Linux 7, provides a software project management and comprehension tool. Based on the concept of a project object model (POM), Maven can manage a project's build, reporting, and documentation from a central piece of information. To install the rh-maven35 Collection, type the following command as root : yum install rh-maven35 To enable this collection, type the following command at a shell prompt: scl enable rh-maven35 bash Global Maven settings, such as remote repositories or mirrors, can be customized by editing the /opt/rh/rh-maven35/root/etc/maven/settings.xml file. For more information about using Maven, refer to the Maven documentation . Usage of plug-ins is described in this section ; to find documentation regarding individual plug-ins, see the index of plug-ins . 4.6. 
Passenger The rh-passenger40 Software Collection provides Phusion Passenger , a web and application server designed to be fast, robust and lightweight. The rh-passenger40 Collection supports multiple versions of Ruby , particularly the ruby193 , ruby200 , and rh-ruby22 Software Collections together with Ruby on Rails using the ror40 or rh-ror41 Collections. Prior to using Passenger with any of the Ruby Software Collections, install the corresponding package from the rh-passenger40 Collection: the rh-passenger-ruby193 , rh-passenger-ruby200 , or rh-passenger-ruby22 package. The rh-passenger40 Software Collection can also be used with Apache httpd from the httpd24 Software Collection. To do so, install the rh-passenger40-mod_passenger package. Refer to the default configuration file /opt/rh/httpd24/root/etc/httpd/conf.d/passenger.conf for an example of Apache httpd configuration, which shows how to use multiple Ruby versions in a single Apache httpd instance. Additionally, the rh-passenger40 Software Collection can be used with the nginx 1.6 web server from the nginx16 Software Collection. To use nginx 1.6 with rh-passenger40 , you can run Passenger in Standalone mode using the following command in the web appplication's directory: scl enable nginx16 rh-passenger40 'passenger start' Alternatively, edit the nginx16 configuration files as described in the upstream Passenger documentation . 4.7. Database Connectors Database connector packages provide the database client functionality, which is necessary for local or remote connection to a database server. Table 4.1, "Interoperability Between Languages and Databases" lists Software Collections with language runtimes that include connectors for certain database servers: yes - the combination is supported no - the combination is not supported Table 4.1. Interoperability Between Languages and Databases Database Language (Software Collection) MariaDB MongoDB MySQL PostgreSQL Redis rh-nodejs4 no no no no no rh-nodejs6 no no no no no rh-nodejs8 no no no no no rh-nodejs10 no no no no no rh-perl520 yes no yes yes no rh-perl524 yes no yes yes no rh-perl526 yes no yes yes no rh-php56 yes yes yes yes no rh-php70 yes no yes yes no rh-php71 yes no yes yes no rh-php72 yes no yes yes no python27 yes yes yes yes no rh-python34 no yes no yes no rh-python35 yes yes yes yes no rh-python36 yes yes yes yes no rh-ror41 yes yes yes yes no rh-ror42 yes yes yes yes no rh-ror50 yes yes yes yes no rh-ruby25 yes yes yes yes no | null | https://docs.redhat.com/en/documentation/red_hat_software_collections/3/html/3.2_release_notes/chap-individual_collections |
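As a brief illustrative check of the combinations described above, assuming the relevant collections are installed and the MongoDB service is running, you can enable several collections in one invocation and query the database:

scl enable rh-ror50 rh-nodejs6 'ruby --version && node --version'
scl enable rh-mongodb36 'mongo --eval "db.version()"'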
8.210. sudo | 8.210. sudo 8.210.1. RHSA-2013:1701 - Low: sudo security, bug fix and enhancement update An updated sudo package that fixes two security issues, several bugs, and adds two enhancements is now available for Red Hat Enterprise Linux 6. The Red Hat Security Response Team has rated this update as having low security impact. Common Vulnerability Scoring System (CVSS) base scores, which give detailed severity ratings, are available for each vulnerability from the CVE links associated with each description below. The sudo (superuser do) utility allows system administrators to give certain users the ability to run commands as root. Security Fixes CVE-2013-1775 A flaw was found in the way sudo handled time stamp files. An attacker able to run code as a local user and with the ability to control the system clock could possibly gain additional privileges by running commands that the victim user was allowed to run via sudo, without knowing the victim's password. CVE-2013-2776 , CVE-2013-2777 It was found that sudo did not properly validate the controlling terminal device when the tty_tickets option was enabled in the /etc/sudoers file. An attacker able to run code as a local user could possibly gain additional privileges by running commands that the victim user was allowed to run via sudo, without knowing the victim's password. Bug Fixes BZ# 880150 Previously, sudo did not support netgroup filtering for sources from the System Security Services Daemon (SSSD). Consequently, SSSD rules were applied to all users even when they did not belong to the specified netgroup. With this update, netgroup filtering for SSSD sources has been implemented. As a result, rules with a netgroup specification are applied only to users that are part of the netgroup. BZ# 947276 When the sudo utility set up the environment in which it ran a command, it reset the value of the RLIMIT_NPROC resource limit to the parent's value of this limit if both the soft (current) and hard (maximum) values of RLIMIT_NPROC were not limited. An upstream patch has been provided to address this bug and RLIMIT_NPROC can now be set to "unlimited". BZ# 973228 Due to the refactoring of the sudo code by upstream, the SUDO_USER variable that stores the name of the user running the sudo command was not logged to the /var/log/secure file as before. Consequently, user name "root" was always recorded instead of the real user name. With this update, the behavior of sudo has been restored. As a result, the expected user name is now written to /var/log/secure. BZ# 994626 Due to an error in a loop condition in sudo's rule listing code, a buffer overflow could have occurred in certain cases. This condition has been fixed and the buffer overflow no longer occurs. Enhancements BZ# 848111 With this update, sudo has been modified to send debug messages about netgroup matching to the debug log. These messages should provide better understanding of how sudo matches netgroup database records with values from the running system and what the values are exactly. BZ# 853542 With this update, sudo has been modified to accept the ipa_hostname value from the /etc/sssd/sssd.conf configuration file when matching netgroups. All sudo users are advised to upgrade to this updated package, which contains backported patches to correct these issues and add these enhancements. | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/6.5_technical_notes/sudo |
Appendix A. Comparison between Ceph Ansible and Cephadm | Appendix A. Comparison between Ceph Ansible and Cephadm Cephadm is used for the containerized deployment of the storage cluster. The tables compare Cephadm with Ceph-Ansible playbooks for managing the containerized deployment of a Ceph cluster for day one and day two operations. Table A.1. Day one operations Description Ceph-Ansible Cephadm Installation of the Red Hat Ceph Storage cluster Run the site-container.yml playbook. Run cephadm bootstrap command to bootstrap the cluster on the admin node. Addition of hosts Use the Ceph Ansible inventory. Run ceph orch add host HOST_NAME to add hosts to the cluster. Addition of monitors Run the add-mon.yml playbook. Run the ceph orch apply mon command. Addition of managers Run the site-container.yml playbook. Run the ceph orch apply mgr command. Addition of OSDs Run the add-osd.yml playbook. Run the ceph orch apply osd command to add OSDs on all available devices or on specific hosts. Addition of OSDs on specific devices Select the devices in the osd.yml file and then run the add-osd.yml playbook. Select the paths filter under the data_devices in the osd.yml file and then run ceph orch apply -i FILE_NAME .yml command. Addition of MDS Run the site-container.yml playbook. Run the ceph orch apply FILESYSTEM_NAME command to add MDS. Addition of Ceph Object Gateway Run the site-container.yml playbook. Run the ceph orch apply rgw commands to add Ceph Object Gateway. Table A.2. Day two operations Description Ceph-Ansible Cephadm Removing hosts Use the Ansible inventory. Run ceph orch host rm HOST_NAME to remove the hosts. Removing monitors Run the shrink-mon.yml playbook. Run ceph orch apply mon to redeploy other monitors. Removing managers Run the shrink-mon.yml playbook. Run ceph orch apply mgr to redeploy other managers. Removing OSDs Run the shrink-osd.yml playbook. Run ceph orch osd rm OSD_ID to remove the OSDs. Removing MDS Run the shrink-mds.yml playbook. Run ceph orch rm SERVICE_NAME to remove the specific service. Exporting Ceph File System over NFS Protocol. Not supported on Red Hat Ceph Storage 4. Run ceph nfs export create command. Deployment of Ceph Object Gateway Run the site-container.yml playbook. Run ceph orch apply rgw SERVICE_NAME to deploy Ceph Object Gateway service. Removing Ceph Object Gateway Run the shrink-rgw.yml playbook. Run ceph orch rm SERVICE_NAME to remove the specific service. Block device mirroring Run the site-container.yml playbook. Run ceph orch apply rbd-mirror command. Minor version upgrade of Red Hat Ceph Storage Run the infrastructure-playbooks/rolling_update.yml playbook. Run ceph orch upgrade start command. Deployment of monitoring stack Edit the all.yml file during installation. Run the ceph orch apply -i FILE .yml after specifying the services. Additional Resources For more details on using the Ceph Orchestrator, see the Red Hat Ceph Storage Operations Guide . | null | https://docs.redhat.com/en/documentation/red_hat_ceph_storage/7/html/installation_guide/comparison-between-ceph-ansible-and-cephadm_install |
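Read together, the day-one rows map to a short command sequence on the Cephadm side. The following sketch is illustrative only; the monitor IP address, host name, and service ID are placeholders, and it uses the host add subcommand ordering of the orchestrator CLI:

cephadm bootstrap --mon-ip 192.0.2.10
ceph orch host add host02 192.0.2.11
ceph orch apply osd --all-available-devices
ceph orch apply rgw example-rgw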
Making open source more inclusive | Making open source more inclusive Red Hat is committed to replacing problematic language in our code, documentation, and web properties. We are beginning with these four terms: master, slave, blacklist, and whitelist. Because of the enormity of this endeavor, these changes will be implemented gradually over several upcoming releases. For more details, see our CTO Chris Wright's message . | null | https://docs.redhat.com/en/documentation/red_hat_build_of_openjdk/11/html/release_notes_for_red_hat_build_of_openjdk_11.0.25/making-open-source-more-inclusive |
28.3. Time Zone Configuration | 28.3. Time Zone Configuration The third tabbed window that appears is for configuring the system time zone. To configure the system time zone, click the Time Zone tab. The time zone can be changed by either using the interactive map or by choosing the desired time zone from the list below the map. To use the map, click on the desired region. The map zooms into the region selected, after which you may choose the city specific to your time zone. A red X appears and the time zone selection changes in the list below the map. Alternatively, you can also use the list below the map. In the same way that the map lets you choose a region before choosing a city, the list of time zones is now a treelist, with cities and countries grouped within their specific continents. Non-geographic time zones have also been added to address needs in the scientific community. Click OK to apply the changes and exit the program. If your system clock is set to use UTC, select the System clock uses UTC option. UTC stands for the Universal Time, Coordinated , also known as Greenwich Mean Time (GMT). Other time zones are determined by adding or subtracting from the UTC time. | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/4/html/system_administration_guide/s1-dateconfig-time-zone |
Chapter 4. Managing namespace buckets | Chapter 4. Managing namespace buckets Namespace buckets let you connect data repositories on different providers together, so that you can interact with all of your data through a single unified view. Add the object bucket associated with each provider to the namespace bucket, and access your data through the namespace bucket to see all of your object buckets at once. This lets you write to your preferred storage provider while reading from multiple other storage providers, greatly reducing the cost of migrating to a new storage provider. Note A namespace bucket can only be used if its write target is available and functional. 4.1. Amazon S3 API endpoints for objects in namespace buckets You can interact with objects in the namespace buckets using the Amazon Simple Storage Service (S3) API. Ensure that the credentials provided for the Multicloud Object Gateway (MCG) enable you to perform the AWS S3 namespace bucket operations. You can use the AWS tool, aws-cli, to verify that all the operations can be performed on the target bucket. Also, listing the buckets with this MCG account shows the target bucket. Red Hat OpenShift Data Foundation supports the following namespace bucket operations: ListBuckets ListObjects ListMultipartUploads ListObjectVersions GetObject HeadObject CopyObject PutObject CreateMultipartUpload UploadPartCopy UploadPart ListParts AbortMultipartUpload PutObjectTagging DeleteObjectTagging GetObjectTagging GetObjectAcl PutObjectAcl DeleteObject DeleteObjects See the Amazon S3 API reference documentation for the most up-to-date information about these operations and how to use them. Additional resources Amazon S3 REST API Reference Amazon S3 CLI Reference 4.2. Adding a namespace bucket using the Multicloud Object Gateway CLI and YAML For more information about namespace buckets, see Managing namespace buckets . Depending on the type of your deployment and whether you want to use YAML or the Multicloud Object Gateway (MCG) CLI, choose one of the following procedures to add a namespace bucket: Adding an AWS S3 namespace bucket using YAML Adding an IBM COS namespace bucket using YAML Adding an AWS S3 namespace bucket using the Multicloud Object Gateway CLI Adding an IBM COS namespace bucket using the Multicloud Object Gateway CLI 4.2.1. Adding an AWS S3 namespace bucket using YAML Prerequisites Openshift Container Platform with OpenShift Data Foundation operator installed. Access to the Multicloud Object Gateway (MCG). For information, see Chapter 2, Accessing the Multicloud Object Gateway with your applications . Procedure Create a secret with the credentials: where <namespacestore-secret-name> is a unique NamespaceStore name. You must provide and encode your own AWS access key ID and secret access key using Base64 , and use the results in place of <AWS ACCESS KEY ID ENCODED IN BASE64> and <AWS SECRET ACCESS KEY ENCODED IN BASE64> . Create a NamespaceStore resource using OpenShift custom resource definitions (CRDs). A NamespaceStore represents underlying storage to be used as a read or write target for the data in the MCG namespace buckets. To create a NamespaceStore resource, apply the following YAML: <resource-name> The name you want to give to the resource. <namespacestore-secret-name> The secret created in the previous step. <namespace-secret> The namespace where the secret can be found. <target-bucket> The target bucket you created for the NamespaceStore.
Create a namespace bucket class that defines a namespace policy for the namespace buckets. The namespace policy requires a type of either single or multi . A namespace policy of type single requires the following configuration: <my-bucket-class> The unique namespace bucket class name. <resource> The name of a single NamespaceStore that defines the read and write target of the namespace bucket. A namespace policy of type multi requires the following configuration: <my-bucket-class> A unique bucket class name. <write-resource> The name of a single NamespaceStore that defines the write target of the namespace bucket. <read-resources> A list of the names of the NamespaceStores that defines the read targets of the namespace bucket. Create a bucket using an Object Bucket Class (OBC) resource that uses the bucket class defined in the earlier step using the following YAML: <resource-name> The name you want to give to the resource. <my-bucket> The name you want to give to the bucket. <my-bucket-class> The bucket class created in the step. After the OBC is provisioned by the operator, a bucket is created in the MCG, and the operator creates a Secret and ConfigMap with the same name and in the same namespace as that of the OBC. 4.2.2. Adding an IBM COS namespace bucket using YAML Prerequisites Openshift Container Platform with OpenShift Data Foundation operator installed. Access to the Multicloud Object Gateway (MCG), see Chapter 2, Accessing the Multicloud Object Gateway with your applications . Procedure Create a secret with the credentials: <namespacestore-secret-name> A unique NamespaceStore name. You must provide and encode your own IBM COS access key ID and secret access key using Base64 , and use the results in place of <IBM COS ACCESS KEY ID ENCODED IN BASE64> and <IBM COS SECRET ACCESS KEY ENCODED IN BASE64> . Create a NamespaceStore resource using OpenShift custom resource definitions (CRDs). A NamespaceStore represents underlying storage to be used as a read or write target for the data in the MCG namespace buckets. To create a NamespaceStore resource, apply the following YAML: <IBM COS ENDPOINT> The appropriate IBM COS endpoint. <namespacestore-secret-name> The secret created in the step. <namespace-secret> The namespace where the secret can be found. <target-bucket> The target bucket you created for the NamespaceStore. Create a namespace bucket class that defines a namespace policy for the namespace buckets. The namespace policy requires a type of either single or multi . The namespace policy of type single requires the following configuration: <my-bucket-class> The unique namespace bucket class name. <resource> The name of a single NamespaceStore that defines the read and write target of the namespace bucket. The namespace policy of type multi requires the following configuration: <my-bucket-class> The unique bucket class name. <write-resource> The name of a single NamespaceStore that defines the write target of the namespace bucket. <read-resources> A list of the NamespaceStores names that defines the read targets of the namespace bucket. To create a bucket using an Object Bucket Class (OBC) resource that uses the bucket class defined in the step, apply the following YAML: <resource-name> The name you want to give to the resource. <my-bucket> The name you want to give to the bucket. <my-bucket-class> The bucket class created in the step. 
After the OBC is provisioned by the operator, a bucket is created in the MCG, and the operator creates a Secret and ConfigMap with the same name and in the same namespace as that of the OBC. 4.2.3. Adding an AWS S3 namespace bucket using the Multicloud Object Gateway CLI Prerequisites Openshift Container Platform with OpenShift Data Foundation operator installed. Access to the Multicloud Object Gateway (MCG), see Chapter 2, Accessing the Multicloud Object Gateway with your applications . Download the Multicloud Object Gateway (MCG) command-line interface binary from the customer portal and make it executable. Note Choose the correct product variant according to your architecture. Available platforms are Linux(x86_64), Windows, and Mac OS Procedure In the MCG command-line interface, create a NamespaceStore resource. A NamespaceStore represents an underlying storage to be used as a read or write target for the data in MCG namespace buckets. <namespacestore> The name of the NamespaceStore. <AWS ACCESS KEY> and <AWS SECRET ACCESS KEY> The AWS access key ID and secret access key you created for this purpose. <bucket-name> The existing AWS bucket name. This argument tells the MCG which bucket to use as a target bucket for its backing store, and subsequently, data storage and administration. Create a namespace bucket class that defines a namespace policy for the namespace buckets. The namespace policy can be either single or multi . To create a namespace bucket class with a namespace policy of type single : <resource-name> The name you want to give the resource. <my-bucket-class> A unique bucket class name. <resource> A single namespace-store that defines the read and write target of the namespace bucket. To create a namespace bucket class with a namespace policy of type multi : <resource-name> The name you want to give the resource. <my-bucket-class> A unique bucket class name. <write-resource> A single namespace-store that defines the write target of the namespace bucket. <read-resources>s A list of namespace-stores separated by commas that defines the read targets of the namespace bucket. Create a bucket using an Object Bucket Class (OBC) resource that uses the bucket class defined in the step. <bucket-name> A bucket name of your choice. <custom-bucket-class> The name of the bucket class created in the step. After the OBC is provisioned by the operator, a bucket is created in the MCG, and the operator creates a Secret and a ConfigMap with the same name and in the same namespace as that of the OBC. 4.2.4. Adding an IBM COS namespace bucket using the Multicloud Object Gateway CLI Prerequisites Openshift Container Platform with OpenShift Data Foundation operator installed. Access to the Multicloud Object Gateway (MCG), see Chapter 2, Accessing the Multicloud Object Gateway with your applications . Download the MCG command-line interface binary from the customer portal and make it executable. Note Choose either Linux(x86_64), Windows, or Mac OS. Procedure In the MCG command-line interface, create a NamespaceStore resource. A NamespaceStore represents an underlying storage to be used as a read or write target for the data in the MCG namespace buckets. <namespacestore> The name of the NamespaceStore. <IBM ACCESS KEY> , <IBM SECRET ACCESS KEY> , <IBM COS ENDPOINT> An IBM access key ID, secret access key, and the appropriate regional endpoint that corresponds to the location of the existing IBM bucket. <bucket-name> An existing IBM bucket name. 
This argument tells the MCG which bucket to use as a target bucket for its backing store, and subsequently, data storage and administration. Create a namespace bucket class that defines a namespace policy for the namespace buckets. The namespace policy requires a type of either single or multi . To create a namespace bucket class with a namespace policy of type single : <resource-name> The name you want to give the resource. <my-bucket-class> A unique bucket class name. <resource> A single NamespaceStore that defines the read and write target of the namespace bucket. To create a namespace bucket class with a namespace policy of type multi : <resource-name> The name you want to give the resource. <my-bucket-class> A unique bucket class name. <write-resource> A single NamespaceStore that defines the write target of the namespace bucket. <read-resources> A comma-separated list of NamespaceStores that defines the read targets of the namespace bucket. Create a bucket using an Object Bucket Class (OBC) resource that uses the bucket class defined in the earlier step. <bucket-name> A bucket name of your choice. <custom-bucket-class> The name of the bucket class created in the previous step. After the OBC is provisioned by the operator, a bucket is created in the MCG, and the operator creates a Secret and ConfigMap with the same name and in the same namespace as that of the OBC. 4.3. Adding a namespace bucket using the OpenShift Container Platform user interface You can add namespace buckets using the OpenShift Container Platform user interface. For information about namespace buckets, see Managing namespace buckets . Prerequisites Ensure that Openshift Container Platform with OpenShift Data Foundation operator is already installed. Access to the Multicloud Object Gateway (MCG). Procedure On the OpenShift Web Console, navigate to Storage Object Storage Namespace Store tab. Click Create namespace store to create a namespacestore resource to be used in the namespace bucket. Enter a namespacestore name. Choose a provider and region. Either select an existing secret, or click Switch to credentials to create a secret by entering a secret key and secret access key. Enter a target bucket. Click Create . On the Namespace Store tab, verify that the newly created namespacestore is in the Ready state. Repeat steps 2 and 3 until you have created all of the desired resources. Navigate to Bucket Class tab and click Create Bucket Class . Choose the Namespace BucketClass type radio button. Enter a BucketClass name and click Next . Choose a Namespace Policy Type for your namespace bucket, and then click Next . If your namespace policy type is Single , you need to choose a read resource. If your namespace policy type is Multi , you need to choose read resources and a write resource. If your namespace policy type is Cache , you need to choose a Hub namespace store that defines the read and write target of the namespace bucket. Select one Read and Write NamespaceStore which defines the read and write targets of the namespace bucket and click Next . Review your new bucket class details, and then click Create Bucket Class . Navigate to Bucket Class tab and verify that your newly created resource is in the Ready phase. Navigate to Object Bucket Claims tab and click Create Object Bucket Claim . Enter ObjectBucketClaim Name for the namespace bucket. Select StorageClass as openshift-storage.noobaa.io . Select the BucketClass that you created earlier for your namespacestore from the list. By default, noobaa-default-bucket-class gets selected.
Click Create . The namespace bucket is created along with an Object Bucket Claim for your namespace. Navigate to the Object Bucket Claims tab and verify that the Object Bucket Claim you created is in the Bound state. Navigate to the Object Buckets tab and verify that your namespace bucket is present in the list and is in the Bound state. 4.4. Sharing legacy application data with cloud native application using S3 protocol Many legacy applications use file systems to share data sets. You can access and share the legacy data in the file system by using S3 operations. To share data, you need to do the following: Export the pre-existing file system datasets, that is, an RWX volume such as Ceph FileSystem (CephFS), or create new file system datasets by using the S3 protocol. Access the file system datasets from both the file system and the S3 protocol. Configure S3 accounts and map them to existing or new file system unique identifiers (UIDs) and group identifiers (GIDs). 4.4.1. Creating a NamespaceStore to use a file system Prerequisites OpenShift Container Platform with OpenShift Data Foundation operator installed. Access to the Multicloud Object Gateway (MCG). Procedure Log in to the OpenShift Web Console. Click Storage Object Storage . Click the NamespaceStore tab to create NamespaceStore resources to be used in the namespace bucket. Click Create namespacestore . Enter a name for the NamespaceStore. Choose Filesystem as the provider. Choose the Persistent volume claim. Enter a folder name. If the folder name exists, that folder is used to create the NamespaceStore; otherwise, a folder with that name is created. Click Create . Verify that the NamespaceStore is in the Ready state. 4.4.2. Creating accounts with NamespaceStore filesystem configuration You can either create a new account with NamespaceStore filesystem configuration or convert an existing normal account into a NamespaceStore filesystem account by editing the YAML. Note You cannot remove a NamespaceStore filesystem configuration from an account. Prerequisites Download the Multicloud Object Gateway (MCG) command-line interface binary from the customer portal and make it executable. Note Choose the correct product variant according to your architecture. Available platforms are Linux(x86_64), Windows, and Mac OS. Procedure Create a new account with NamespaceStore filesystem configuration using the MCG command-line interface. For example: allow_bucket_create Indicates whether the account is allowed to create new buckets. Supported values are true or false . Default value is true . allowed_buckets A comma-separated list of bucket names to which the user is allowed to have access and management rights. default_resource The NamespaceStore resource on which the new buckets will be created when using the S3 CreateBucket operation. The NamespaceStore must be backed by an RWX (ReadWriteMany) persistent volume claim (PVC). full_permission Indicates whether the account is allowed full permission. Supported values are true or false . Default value is false . new_buckets_path The filesystem path where directories corresponding to new buckets will be created. The path is inside the filesystem of NamespaceStore filesystem PVCs, where new directories are created to act as the filesystem mapping of newly created object bucket classes. nsfs_account_config A mandatory field that indicates whether the account is used for NamespaceStore filesystem. nsfs_only Indicates whether the account is used only for NamespaceStore filesystem. Supported values are true or false .
Default value is false . If it is set to 'true', it prevents you from accessing other types of buckets. uid The user ID of the filesystem to which the MCG account will be mapped. It is used to access and manage data on the filesystem. gid The group ID of the filesystem to which the MCG account will be mapped. It is used to access and manage data on the filesystem. The MCG system sends a response with the account configuration and its S3 credentials: You can list all the custom resource definition (CRD) based accounts by using the following command: If you are interested in a particular account, you can read its custom resource definition (CRD) directly by the account name: 4.4.3. Accessing legacy application data from the openshift-storage namespace When using the Multicloud Object Gateway (MCG) NamespaceStore filesystem (NSFS) feature, you need to have the Persistent Volume Claim (PVC) where the data resides in the openshift-storage namespace. In almost all cases, the data you need to access is not in the openshift-storage namespace, but in the namespace that the legacy application uses. To access data stored in another namespace, you need to create a PVC in the openshift-storage namespace that points to the same CephFS volume that the legacy application uses. Procedure Display the application namespace with scc : <application_namespace> Specify the name of the application namespace. For example: Navigate into the application namespace: For example: Ensure that a ReadWriteMany (RWX) PVC is mounted on the pod that you want to consume from the noobaa S3 endpoint using the MCG NSFS feature: Check the mount point of the Persistent Volume (PV) inside your pod. Get the volume name of the PV from the pod: <pod_name> Specify the name of the pod. For example: In this example, the name of the volume for the PVC is cephfs-write-workload-generator-no-cache-pv-claim . List all the mounts in the pod, and check for the mount point of the volume that you identified in the previous step: For example: Confirm the mount point of the RWX PV in your pod: <mount_path> Specify the path to the mount point that you identified in the previous step. For example: Ensure that the UID and SELinux labels are the same as the ones that the legacy namespace uses: For example: Get the information of the legacy application RWX PV that you want to make accessible from the openshift-storage namespace: <pv_name> Specify the name of the PV. For example: Ensure that the PVC from the legacy application is accessible from the openshift-storage namespace so that one or more noobaa-endpoint pods can access the PVC. Find the values of the subvolumePath and volumeHandle from the volumeAttributes . You can get these values from the YAML description of the legacy application PV: For example: Use the subvolumePath and volumeHandle values that you identified in the previous step to create a new PV and PVC object in the openshift-storage namespace that points to the same CephFS volume as the legacy application PV: Example YAML file : 1 The storage capacity of the PV that you are creating in the openshift-storage namespace must be the same as the original PV. 2 The volume handle for the target PV that you create in openshift-storage must be different from the volume handle of the original application PV; for example, add -clone at the end of the volume handle. 3 The storage capacity of the PVC that you are creating in the openshift-storage namespace must be the same as the original PVC.
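After you create these objects in the next step, it can help to confirm that the static PV and PVC bind to each other before you continue. The following is a minimal check sketch only; the names cephfs-pv-legacy-openshift-storage and cephfs-pvc-legacy are the example names used in the YAML above, so substitute your own names if they differ:
oc get pv cephfs-pv-legacy-openshift-storage
oc get pvc cephfs-pvc-legacy -n openshift-storage
oc describe pvc cephfs-pvc-legacy -n openshift-storage
Both objects should report the Bound status, and the PVC description should reference the PV by name.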
Create the PV and PVC in the openshift-storage namespace using the YAML file specified in the step: <YAML_file> Specify the name of the YAML file. For example: Ensure that the PVC is available in the openshift-storage namespace: Navigate into the openshift-storage project: Create the NSFS namespacestore: <nsfs_namespacestore> Specify the name of the NSFS namespacestore. <cephfs_pvc_name> Specify the name of the CephFS PVC in the openshift-storage namespace. For example: Ensure that the noobaa-endpoint pod restarts and that it successfully mounts the PVC at the NSFS namespacestore, for example, /nsfs/legacy-namespace mountpoint: <noobaa_endpoint_pod_name> Specify the name of the noobaa-endpoint pod. For example: Create a MCG user account: <user_account> Specify the name of the MCG user account. <gid_number> Specify the GID number. <uid_number> Specify the UID number. Important Use the same UID and GID as that of the legacy application. You can find it from the output. For example: Create a MCG bucket. Create a dedicated folder for S3 inside the NSFS share on the CephFS PV and PVC of the legacy application pod: For example: Create the MCG bucket using the nsfs/ path: For example: Check the SELinux labels of the folders residing in the PVCs in the legacy application and openshift-storage namespaces: For example: For example: In these examples, you can see that the SELinux labels are not the same which results in permission denied or access issues. Ensure that the legacy application and openshift-storage pods use the same SELinux labels on the files. You can do this in one of the following ways: Section 4.4.3.1, "Changing the default SELinux label on the legacy application project to match the one in the openshift-storage project" . Section 4.4.3.2, "Modifying the SELinux label only for the deployment config that has the pod which mounts the legacy application PVC" . Delete the NSFS namespacestore: Delete the MCG bucket: For example: Delete the MCG user account: For example: Delete the NSFS namespacestore: For example: Delete the PV and PVC: Important Before you delete the PV and PVC, ensure that the PV has a retain policy configured. <cephfs_pv_name> Specify the CephFS PV name of the legacy application. <cephfs_pvc_name> Specify the CephFS PVC name of the legacy application. For example: 4.4.3.1. Changing the default SELinux label on the legacy application project to match the one in the openshift-storage project Display the current openshift-storage namespace with sa.scc.mcs : Edit the legacy application namespace, and modify the sa.scc.mcs with the value from the sa.scc.mcs of the openshift-storage namespace: For example: For example: Restart the legacy application pod. A relabel of all the files take place and now the SELinux labels match with the openshift-storage deployment. 4.4.3.2. Modifying the SELinux label only for the deployment config that has the pod which mounts the legacy application PVC Create a new scc with the MustRunAs and seLinuxOptions options, with the Multi Category Security (MCS) that the openshift-storage project uses. Example YAML file: Create a service account for the deployment and add it to the newly created scc . Create a service account: <service_account_name>` Specify the name of the service account. For example: Add the service account to the newly created scc : For example: Patch the legacy application deployment so that it uses the newly created service account. 
This allows you to specify the SELinux label in the deployment: For example: Edit the deployment to specify the security context to be used at the SELinux label in the deployment configuration: Add the following lines: <security_context_value> You can find this value when you execute the command to create a dedicated folder for S3 inside the NSFS share, on the CephFS PV and PVC of the legacy application pod. For example: Ensure that the security context to be used at the SELinux label in the deployment configuration is specified correctly: For example: The legacy application is restarted and begins using the same SELinux labels as the openshift-storage namespace. | [
"apiVersion: v1 kind: Secret metadata: name: <namespacestore-secret-name> type: Opaque data: AWS_ACCESS_KEY_ID: <AWS ACCESS KEY ID ENCODED IN BASE64> AWS_SECRET_ACCESS_KEY: <AWS SECRET ACCESS KEY ENCODED IN BASE64>",
"apiVersion: noobaa.io/v1alpha1 kind: NamespaceStore metadata: finalizers: - noobaa.io/finalizer labels: app: noobaa name: <resource-name> namespace: openshift-storage spec: awsS3: secret: name: <namespacestore-secret-name> namespace: <namespace-secret> targetBucket: <target-bucket> type: aws-s3",
"apiVersion: noobaa.io/v1alpha1 kind: BucketClass metadata: labels: app: noobaa name: <my-bucket-class> namespace: openshift-storage spec: namespacePolicy: type: single: resource: <resource>",
"apiVersion: noobaa.io/v1alpha1 kind: BucketClass metadata: labels: app: noobaa name: <my-bucket-class> namespace: openshift-storage spec: namespacePolicy: type: Multi multi: writeResource: <write-resource> readResources: - <read-resources> - <read-resources>",
"apiVersion: objectbucket.io/v1alpha1 kind: ObjectBucketClaim metadata: name: <resource-name> namespace: openshift-storage spec: generateBucketName: <my-bucket> storageClassName: openshift-storage.noobaa.io additionalConfig: bucketclass: <my-bucket-class>",
"apiVersion: v1 kind: Secret metadata: name: <namespacestore-secret-name> type: Opaque data: IBM_COS_ACCESS_KEY_ID: <IBM COS ACCESS KEY ID ENCODED IN BASE64> IBM_COS_SECRET_ACCESS_KEY: <IBM COS SECRET ACCESS KEY ENCODED IN BASE64>",
"apiVersion: noobaa.io/v1alpha1 kind: NamespaceStore metadata: finalizers: - noobaa.io/finalizer labels: app: noobaa name: bs namespace: openshift-storage spec: s3Compatible: endpoint: <IBM COS ENDPOINT> secret: name: <namespacestore-secret-name> namespace: <namespace-secret> signatureVersion: v2 targetBucket: <target-bucket> type: ibm-cos",
"apiVersion: noobaa.io/v1alpha1 kind: BucketClass metadata: labels: app: noobaa name: <my-bucket-class> namespace: openshift-storage spec: namespacePolicy: type: single: resource: <resource>",
"apiVersion: noobaa.io/v1alpha1 kind: BucketClass metadata: labels: app: noobaa name: <my-bucket-class> namespace: openshift-storage spec: namespacePolicy: type: Multi multi: writeResource: <write-resource> readResources: - <read-resources> - <read-resources>",
"apiVersion: objectbucket.io/v1alpha1 kind: ObjectBucketClaim metadata: name: <resource-name> namespace: openshift-storage spec: generateBucketName: <my-bucket> storageClassName: openshift-storage.noobaa.io additionalConfig: bucketclass: <my-bucket-class>",
"noobaa namespacestore create aws-s3 <namespacestore> --access-key <AWS ACCESS KEY> --secret-key <AWS SECRET ACCESS KEY> --target-bucket <bucket-name> -n openshift-storage",
"noobaa bucketclass create namespace-bucketclass single <my-bucket-class> --resource <resource> -n openshift-storage",
"noobaa bucketclass create namespace-bucketclass multi <my-bucket-class> --write-resource <write-resource> --read-resources <read-resources> -n openshift-storage",
"noobaa obc create my-bucket-claim -n openshift-storage --app-namespace my-app --bucketclass <custom-bucket-class>",
"noobaa namespacestore create ibm-cos <namespacestore> --endpoint <IBM COS ENDPOINT> --access-key <IBM ACCESS KEY> --secret-key <IBM SECRET ACCESS KEY> --target-bucket <bucket-name> -n openshift-storage",
"noobaa bucketclass create namespace-bucketclass single <my-bucket-class> --resource <resource> -n openshift-storage",
"noobaa bucketclass create namespace-bucketclass multi <my-bucket-class> --write-resource <write-resource> --read-resources <read-resources> -n openshift-storage",
"noobaa obc create my-bucket-claim -n openshift-storage --app-namespace my-app --bucketclass <custom-bucket-class>",
"noobaa account create <noobaa-account-name> [flags]",
"noobaa account create testaccount --full_permission --nsfs_account_config --gid 10001 --uid 10001 -default_resource fs_namespacestore",
"NooBaaAccount spec: allow_bucket_creation: true Allowed_buckets: full_permission: true permission_list: [] default_resource: noobaa-default-namespace-store Nsfs_account_config: gid: 10001 new_buckets_path: / nsfs_only: true uid: 10001 INFO[0006] ✅ Exists: Secret \"noobaa-account-testaccount\" Connection info: AWS_ACCESS_KEY_ID : <aws-access-key-id> AWS_SECRET_ACCESS_KEY : <aws-secret-access-key>",
"noobaa account list NAME ALLOWED_BUCKETS DEFAULT_RESOURCE PHASE AGE testaccount [*] noobaa-default-backing-store Ready 1m17s",
"oc get noobaaaccount/testaccount -o yaml spec: allow_bucket_creation: true allowed_buckets: full_permission: true permission_list: [] default_resource: noobaa-default-namespace-store nsfs_account_config: gid: 10001 new_buckets_path: / nsfs_only: true uid: 10001",
"oc get ns <application_namespace> -o yaml | grep scc",
"oc get ns testnamespace -o yaml | grep scc openshift.io/sa.scc.mcs: s0:c26,c5 openshift.io/sa.scc.supplemental-groups: 1000660000/10000 openshift.io/sa.scc.uid-range: 1000660000/10000",
"oc project <application_namespace>",
"oc project testnamespace",
"oc get pvc NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE cephfs-write-workload-generator-no-cache-pv-claim Bound pvc-aa58fb91-c3d2-475b-bbee-68452a613e1a 10Gi RWX ocs-storagecluster-cephfs 12s",
"oc get pod NAME READY STATUS RESTARTS AGE cephfs-write-workload-generator-no-cache-1-cv892 1/1 Running 0 11s",
"oc get pods <pod_name> -o jsonpath='{.spec.volumes[]}'",
"oc get pods cephfs-write-workload-generator-no-cache-1-cv892 -o jsonpath='{.spec.volumes[]}' {\"name\":\"app-persistent-storage\",\"persistentVolumeClaim\":{\"claimName\":\"cephfs-write-workload-generator-no-cache-pv-claim\"}}",
"oc get pods <pod_name> -o jsonpath='{.spec.containers[].volumeMounts}'",
"oc get pods cephfs-write-workload-generator-no-cache-1-cv892 -o jsonpath='{.spec.containers[].volumeMounts}' [{\"mountPath\":\"/mnt/pv\",\"name\":\"app-persistent-storage\"},{\"mountPath\":\"/var/run/secrets/kubernetes.io/serviceaccount\",\"name\":\"kube-api-access-8tnc5\",\"readOnly\":true}]",
"oc exec -it <pod_name> -- df <mount_path>",
"oc exec -it cephfs-write-workload-generator-no-cache-1-cv892 -- df /mnt/pv main Filesystem 1K-blocks Used Available Use% Mounted on 172.30.202.87:6789,172.30.120.254:6789,172.30.77.247:6789:/volumes/csi/csi-vol-cc416d9e-dbf3-11ec-b286-0a580a810213/edcfe4d5-bdcb-4b8e-8824-8a03ad94d67c 10485760 0 10485760 0% /mnt/pv",
"oc exec -it <pod_name> -- ls -latrZ <mount_path>",
"oc exec -it cephfs-write-workload-generator-no-cache-1-cv892 -- ls -latrZ /mnt/pv/ total 567 drwxrwxrwx. 3 root root system_u:object_r:container_file_t:s0:c26,c5 2 May 25 06:35 . -rw-r--r--. 1 1000660000 root system_u:object_r:container_file_t:s0:c26,c5 580138 May 25 06:35 fs_write_cephfs-write-workload-generator-no-cache-1-cv892-data.log drwxrwxrwx. 3 root root system_u:object_r:container_file_t:s0:c26,c5 30 May 25 06:35 ..",
"oc get pv | grep <pv_name>",
"oc get pv | grep pvc-aa58fb91-c3d2-475b-bbee-68452a613e1a pvc-aa58fb91-c3d2-475b-bbee-68452a613e1a 10Gi RWX Delete Bound testnamespace/cephfs-write-workload-generator-no-cache-pv-claim ocs-storagecluster-cephfs 47s",
"oc get pv <pv_name> -o yaml",
"oc get pv pvc-aa58fb91-c3d2-475b-bbee-68452a613e1a -o yaml apiVersion: v1 kind: PersistentVolume metadata: annotations: pv.kubernetes.io/provisioned-by: openshift-storage.cephfs.csi.ceph.com creationTimestamp: \"2022-05-25T06:27:49Z\" finalizers: - kubernetes.io/pv-protection name: pvc-aa58fb91-c3d2-475b-bbee-68452a613e1a resourceVersion: \"177458\" uid: 683fa87b-5192-4ccf-af2f-68c6bcf8f500 spec: accessModes: - ReadWriteMany capacity: storage: 10Gi claimRef: apiVersion: v1 kind: PersistentVolumeClaim name: cephfs-write-workload-generator-no-cache-pv-claim namespace: testnamespace resourceVersion: \"177453\" uid: aa58fb91-c3d2-475b-bbee-68452a613e1a csi: controllerExpandSecretRef: name: rook-csi-cephfs-provisioner namespace: openshift-storage driver: openshift-storage.cephfs.csi.ceph.com nodeStageSecretRef: name: rook-csi-cephfs-node namespace: openshift-storage volumeAttributes: clusterID: openshift-storage fsName: ocs-storagecluster-cephfilesystem storage.kubernetes.io/csiProvisionerIdentity: 1653458225664-8081-openshift-storage.cephfs.csi.ceph.com subvolumeName: csi-vol-cc416d9e-dbf3-11ec-b286-0a580a810213 subvolumePath: /volumes/csi/csi-vol-cc416d9e-dbf3-11ec-b286-0a580a810213/edcfe4d5-bdcb-4b8e-8824-8a03ad94d67c volumeHandle: 0001-0011-openshift-storage-0000000000000001-cc416d9e-dbf3-11ec-b286-0a580a810213 persistentVolumeReclaimPolicy: Delete storageClassName: ocs-storagecluster-cephfs volumeMode: Filesystem status: phase: Bound",
"cat << EOF >> pv-openshift-storage.yaml apiVersion: v1 kind: PersistentVolume metadata: name: cephfs-pv-legacy-openshift-storage spec: storageClassName: \"\" accessModes: - ReadWriteMany capacity: storage: 10Gi 1 csi: driver: openshift-storage.cephfs.csi.ceph.com nodeStageSecretRef: name: rook-csi-cephfs-node namespace: openshift-storage volumeAttributes: # Volume Attributes can be copied from the Source testnamespace PV \"clusterID\": \"openshift-storage\" \"fsName\": \"ocs-storagecluster-cephfilesystem\" \"staticVolume\": \"true\" # rootpath is the subvolumePath: you copied from the Source testnamespace PV \"rootPath\": /volumes/csi/csi-vol-cc416d9e-dbf3-11ec-b286-0a580a810213/edcfe4d5-bdcb-4b8e-8824-8a03ad94d67c volumeHandle: 0001-0011-openshift-storage-0000000000000001-cc416d9e-dbf3-11ec-b286-0a580a810213-clone 2 persistentVolumeReclaimPolicy: Retain volumeMode: Filesystem --- apiVersion: v1 kind: PersistentVolumeClaim metadata: name: cephfs-pvc-legacy namespace: openshift-storage spec: storageClassName: \"\" accessModes: - ReadWriteMany resources: requests: storage: 10Gi 3 volumeMode: Filesystem # volumeName should be same as PV name volumeName: cephfs-pv-legacy-openshift-storage EOF",
"oc create -f <YAML_file>",
"oc create -f pv-openshift-storage.yaml persistentvolume/cephfs-pv-legacy-openshift-storage created persistentvolumeclaim/cephfs-pvc-legacy created",
"oc get pvc -n openshift-storage NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE cephfs-pvc-legacy Bound cephfs-pv-legacy-openshift-storage 10Gi RWX 14s",
"oc project openshift-storage Now using project \"openshift-storage\" on server \"https://api.cluster-5f6ng.5f6ng.sandbox65.opentlc.com:6443\".",
"noobaa namespacestore create nsfs <nsfs_namespacestore> --pvc-name=' <cephfs_pvc_name> ' --fs-backend='CEPH_FS'",
"noobaa namespacestore create nsfs legacy-namespace --pvc-name='cephfs-pvc-legacy' --fs-backend='CEPH_FS'",
"oc exec -it <noobaa_endpoint_pod_name> -- df -h /nsfs/ <nsfs_namespacestore>",
"oc exec -it noobaa-endpoint-5875f467f5-546c6 -- df -h /nsfs/legacy-namespace Filesystem Size Used Avail Use% Mounted on 172.30.202.87:6789,172.30.120.254:6789,172.30.77.247:6789:/volumes/csi/csi-vol-cc416d9e-dbf3-11ec-b286-0a580a810213/edcfe4d5-bdcb-4b8e-8824-8a03ad94d67c 10G 0 10G 0% /nsfs/legacy-namespace",
"noobaa account create <user_account> --full_permission --allow_bucket_create=true --new_buckets_path='/' --nsfs_only=true --nsfs_account_config=true --gid <gid_number> --uid <uid_number> --default_resource='legacy-namespace'",
"noobaa account create leguser --full_permission --allow_bucket_create=true --new_buckets_path='/' --nsfs_only=true --nsfs_account_config=true --gid 0 --uid 1000660000 --default_resource='legacy-namespace'",
"oc exec -it <pod_name> -- mkdir <mount_path> /nsfs",
"oc exec -it cephfs-write-workload-generator-no-cache-1-cv892 -- mkdir /mnt/pv/nsfs",
"noobaa api bucket_api create_bucket '{ \"name\": \" <bucket_name> \", \"namespace\":{ \"write_resource\": { \"resource\": \" <nsfs_namespacestore> \", \"path\": \"nsfs/\" }, \"read_resources\": [ { \"resource\": \" <nsfs_namespacestore> \", \"path\": \"nsfs/\" }] } }'",
"noobaa api bucket_api create_bucket '{ \"name\": \"legacy-bucket\", \"namespace\":{ \"write_resource\": { \"resource\": \"legacy-namespace\", \"path\": \"nsfs/\" }, \"read_resources\": [ { \"resource\": \"legacy-namespace\", \"path\": \"nsfs/\" }] } }'",
"oc exec -it <noobaa_endpoint_pod_name> -n openshift-storage -- ls -ltraZ /nsfs/ <nsfs_namespacstore>",
"oc exec -it noobaa-endpoint-5875f467f5-546c6 -n openshift-storage -- ls -ltraZ /nsfs/legacy-namespace total 567 drwxrwxrwx. 3 root root system_u:object_r:container_file_t:s0:c0,c26 2 May 25 06:35 . -rw-r--r--. 1 1000660000 root system_u:object_r:container_file_t:s0:c0,c26 580138 May 25 06:35 fs_write_cephfs-write-workload-generator-no-cache-1-cv892-data.log drwxrwxrwx. 3 root root system_u:object_r:container_file_t:s0:c0,c26 30 May 25 06:35 ..",
"oc exec -it <pod_name> -- ls -latrZ <mount_path>",
"oc exec -it cephfs-write-workload-generator-no-cache-1-cv892 -- ls -latrZ /mnt/pv/ total 567 drwxrwxrwx. 3 root root system_u:object_r:container_file_t:s0:c26,c5 2 May 25 06:35 . -rw-r--r--. 1 1000660000 root system_u:object_r:container_file_t:s0:c26,c5 580138 May 25 06:35 fs_write_cephfs-write-workload-generator-no-cache-1-cv892-data.log drwxrwxrwx. 3 root root system_u:object_r:container_file_t:s0:c26,c5 30 May 25 06:35 ..",
"noobaa bucket delete <bucket_name>",
"noobaa bucket delete legacy-bucket",
"noobaa account delete <user_account>",
"noobaa account delete leguser",
"noobaa namespacestore delete <nsfs_namespacestore>",
"noobaa namespacestore delete legacy-namespace",
"oc delete pv <cephfs_pv_name>",
"oc delete pvc <cephfs_pvc_name>",
"oc delete pv cephfs-pv-legacy-openshift-storage",
"oc delete pvc cephfs-pvc-legacy",
"oc get ns openshift-storage -o yaml | grep sa.scc.mcs openshift.io/sa.scc.mcs: s0:c26,c0",
"oc edit ns <appplication_namespace>",
"oc edit ns testnamespace",
"oc get ns <application_namespace> -o yaml | grep sa.scc.mcs",
"oc get ns testnamespace -o yaml | grep sa.scc.mcs openshift.io/sa.scc.mcs: s0:c26,c0",
"cat << EOF >> scc.yaml allowHostDirVolumePlugin: false allowHostIPC: false allowHostNetwork: false allowHostPID: false allowHostPorts: false allowPrivilegeEscalation: true allowPrivilegedContainer: false allowedCapabilities: null apiVersion: security.openshift.io/v1 defaultAddCapabilities: null fsGroup: type: MustRunAs groups: - system:authenticated kind: SecurityContextConstraints metadata: annotations: name: restricted-pvselinux priority: null readOnlyRootFilesystem: false requiredDropCapabilities: - KILL - MKNOD - SETUID - SETGID runAsUser: type: MustRunAsRange seLinuxContext: seLinuxOptions: level: s0:c26,c0 type: MustRunAs supplementalGroups: type: RunAsAny users: [] volumes: - configMap - downwardAPI - emptyDir - persistentVolumeClaim - projected - secret EOF",
"oc create -f scc.yaml",
"oc create serviceaccount <service_account_name>",
"oc create serviceaccount testnamespacesa",
"oc adm policy add-scc-to-user restricted-pvselinux -z <service_account_name>",
"oc adm policy add-scc-to-user restricted-pvselinux -z testnamespacesa",
"oc patch dc/ <pod_name> '{\"spec\":{\"template\":{\"spec\":{\"serviceAccountName\": \" <service_account_name> \"}}}}'",
"oc patch dc/cephfs-write-workload-generator-no-cache --patch '{\"spec\":{\"template\":{\"spec\":{\"serviceAccountName\": \"testnamespacesa\"}}}}'",
"oc edit dc <pod_name> -n <application_namespace>",
"spec: template: metadata: securityContext: seLinuxOptions: Level: <security_context_value>",
"oc edit dc cephfs-write-workload-generator-no-cache -n testnamespace",
"spec: template: metadata: securityContext: seLinuxOptions: level: s0:c26,c0",
"oc get dc <pod_name> -n <application_namespace> -o yaml | grep -A 2 securityContext",
"oc get dc cephfs-write-workload-generator-no-cache -n testnamespace -o yaml | grep -A 2 securityContext securityContext: seLinuxOptions: level: s0:c26,c0"
]
| https://docs.redhat.com/en/documentation/red_hat_openshift_data_foundation/4.16/html/managing_hybrid_and_multicloud_resources/Managing-namespace-buckets_rhodf |
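As a closing illustration for the namespace bucket sections above, the following is a hedged sketch of how an S3 client could exercise a bucket such as legacy-bucket with the credentials returned by the noobaa account create command. The endpoint URL is a placeholder for the S3 route of your MCG deployment, and the file name is arbitrary:
export AWS_ACCESS_KEY_ID=<aws-access-key-id-from-account-create>
export AWS_SECRET_ACCESS_KEY=<aws-secret-access-key-from-account-create>
aws --endpoint-url https://<mcg-s3-route> s3 cp ./sample-data.log s3://legacy-bucket/sample-data.log
aws --endpoint-url https://<mcg-s3-route> s3 ls s3://legacy-bucket/
Because the bucket is backed by the NSFS namespacestore, an object uploaded this way also appears as a file under the nsfs/ path on the CephFS volume that the namespacestore mounts.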
Chapter 14. Converting a connected cluster to a disconnected cluster | Chapter 14. Converting a connected cluster to a disconnected cluster There might be some scenarios where you need to convert your OpenShift Container Platform cluster from a connected cluster to a disconnected cluster. A disconnected cluster, also known as a restricted cluster, does not have an active connection to the internet. As such, you must mirror the contents of your registries and installation media. You can create this mirror registry on a host that can access both the internet and your closed network, or copy images to a device that you can move across network boundaries. This topic describes the general process for converting an existing, connected cluster into a disconnected cluster. 14.1. About the mirror registry You can mirror the images that are required for OpenShift Container Platform installation and subsequent product updates to a container mirror registry such as Red Hat Quay, JFrog Artifactory, Sonatype Nexus Repository, or Harbor. If you do not have access to a large-scale container registry, you can use the mirror registry for Red Hat OpenShift , a small-scale container registry included with OpenShift Container Platform subscriptions. You can use any container registry that supports Docker v2-2 , such as Red Hat Quay, the mirror registry for Red Hat OpenShift , Artifactory, Sonatype Nexus Repository, or Harbor. Regardless of your chosen registry, the procedure to mirror content from Red Hat hosted sites on the internet to an isolated image registry is the same. After you mirror the content, you configure each cluster to retrieve this content from your mirror registry. Important The OpenShift image registry cannot be used as the target registry because it does not support pushing without a tag, which is required during the mirroring process. If choosing a container registry that is not the mirror registry for Red Hat OpenShift , it must be reachable by every machine in the clusters that you provision. If the registry is unreachable, installation, updating, or normal operations such as workload relocation might fail. For that reason, you must run mirror registries in a highly available way, and the mirror registries must at least match the production availability of your OpenShift Container Platform clusters. When you populate your mirror registry with OpenShift Container Platform images, you can follow two scenarios. If you have a host that can access both the internet and your mirror registry, but not your cluster nodes, you can directly mirror the content from that machine. This process is referred to as connected mirroring . If you have no such host, you must mirror the images to a file system and then bring that host or removable media into your restricted environment. This process is referred to as disconnected mirroring . For mirrored registries, to view the source of pulled images, you must review the Trying to access log entry in the CRI-O logs. Other methods to view the image pull source, such as using the crictl images command on a node, show the non-mirrored image name, even though the image is pulled from the mirrored location. Note Red Hat does not test third party registries with OpenShift Container Platform. 14.2. Prerequisites The oc client is installed. A running cluster. 
An installed mirror registry, which is a container image registry that supports Docker v2-2 in the location that will host the OpenShift Container Platform cluster, such as one of the following registries: Red Hat Quay JFrog Artifactory Sonatype Nexus Repository Harbor If you have an subscription to Red Hat Quay, see the documentation on deploying Red Hat Quay for proof-of-concept purposes or by using the Quay Operator . The mirror repository must be configured to share images. For example, a Red Hat Quay repository requires Organizations in order to share images. Access to the internet to obtain the necessary container images. 14.3. Preparing the cluster for mirroring Before disconnecting your cluster, you must mirror, or copy, the images to a mirror registry that is reachable by every node in your disconnected cluster. In order to mirror the images, you must prepare your cluster by: Adding the mirror registry certificates to the list of trusted CAs on your host. Creating a .dockerconfigjson file that contains your image pull secret, which is from the cloud.openshift.com token. Procedure Configuring credentials that allow image mirroring: Add the CA certificate for the mirror registry, in the simple PEM or DER file formats, to the list of trusted CAs. For example: USD cp </path/to/cert.crt> /usr/share/pki/ca-trust-source/anchors/ where, </path/to/cert.crt> Specifies the path to the certificate on your local file system. Update the CA trust. For example, in Linux: USD update-ca-trust Extract the .dockerconfigjson file from the global pull secret: USD oc extract secret/pull-secret -n openshift-config --confirm --to=. Example output .dockerconfigjson Edit the .dockerconfigjson file to add your mirror registry and authentication credentials and save it as a new file: {"auths":{"<local_registry>": {"auth": "<credentials>","email": "[email protected]"}}},"<registry>:<port>/<namespace>/":{"auth":"<token>"}}} where: <local_registry> Specifies the registry domain name, and optionally the port, that your mirror registry uses to serve content. auth Specifies the base64-encoded user name and password for your mirror registry. <registry>:<port>/<namespace> Specifies the mirror registry details. <token> Specifies the base64-encoded username:password for your mirror registry. For example: USD {"auths":{"cloud.openshift.com":{"auth":"b3BlbnNoaWZ0Y3UjhGOVZPT0lOMEFaUjdPUzRGTA==","email":"[email protected]"}, "quay.io":{"auth":"b3BlbnNoaWZ0LXJlbGVhc2UtZGOVZPT0lOMEFaUGSTd4VGVGVUjdPUzRGTA==","email":"[email protected]"}, "registry.connect.redhat.com"{"auth":"NTE3MTMwNDB8dWhjLTFEZlN3VHkxOSTd4VGVGVU1MdTpleUpoYkdjaUailA==","email":"[email protected]"}, "registry.redhat.io":{"auth":"NTE3MTMwNDB8dWhjLTFEZlN3VH3BGSTd4VGVGVU1MdTpleUpoYkdjaU9fZw==","email":"[email protected]"}, "registry.svc.ci.openshift.org":{"auth":"dXNlcjpyWjAwWVFjSEJiT2RKVW1pSmg4dW92dGp1SXRxQ3RGN1pwajJhN1ZXeTRV"},"my-registry:5000/my-namespace/":{"auth":"dXNlcm5hbWU6cGFzc3dvcmQ="}}} 14.4. Mirroring the images After the cluster is properly configured, you can mirror the images from your external repositories to the mirror repository. Procedure Mirror the Operator Lifecycle Manager (OLM) images: USD oc adm catalog mirror registry.redhat.io/redhat/redhat-operator-index:v{product-version} <mirror_registry>:<port>/olm -a <reg_creds> where: product-version Specifies the tag that corresponds to the version of OpenShift Container Platform to install, such as 4.8 . 
mirror_registry Specifies the fully qualified domain name (FQDN) for the target registry and namespace to mirror the Operator content to, where <namespace> is any existing namespace on the registry. reg_creds Specifies the location of your modified .dockerconfigjson file. For example: USD oc adm catalog mirror registry.redhat.io/redhat/redhat-operator-index:v4.8 mirror.registry.com:443/olm -a ./.dockerconfigjson --index-filter-by-os='.*' Mirror the content for any other Red Hat-provided Operator: USD oc adm catalog mirror <index_image> <mirror_registry>:<port>/<namespace> -a <reg_creds> where: index_image Specifies the index image for the catalog that you want to mirror. mirror_registry Specifies the FQDN for the target registry and namespace to mirror the Operator content to, where <namespace> is any existing namespace on the registry. reg_creds Optional: Specifies the location of your registry credentials file, if required. For example: USD oc adm catalog mirror registry.redhat.io/redhat/community-operator-index:v4.8 mirror.registry.com:443/olm -a ./.dockerconfigjson --index-filter-by-os='.*' Mirror the OpenShift Container Platform image repository: USD oc adm release mirror -a .dockerconfigjson --from=quay.io/openshift-release-dev/ocp-release:v<product-version>-<architecture> --to=<local_registry>/<local_repository> --to-release-image=<local_registry>/<local_repository>:v<product-version>-<architecture> where: product-version Specifies the tag that corresponds to the version of OpenShift Container Platform to install, such as 4.8.15-x86_64 . architecture Specifies the type of architecture for your server, such as x86_64 . local_registry Specifies the registry domain name for your mirror repository. local_repository Specifies the name of the repository to create in your registry, such as ocp4/openshift4 . For example: USD oc adm release mirror -a .dockerconfigjson --from=quay.io/openshift-release-dev/ocp-release:4.8.15-x86_64 --to=mirror.registry.com:443/ocp/release --to-release-image=mirror.registry.com:443/ocp/release:4.8.15-x86_64 Example output info: Mirroring 109 images to mirror.registry.com/ocp/release ... mirror.registry.com:443/ ocp/release manifests: sha256:086224cadce475029065a0efc5244923f43fb9bb3bb47637e0aaf1f32b9cad47 -> 4.8.15-x86_64-thanos sha256:0a214f12737cb1cfbec473cc301aa2c289d4837224c9603e99d1e90fc00328db -> 4.8.15-x86_64-kuryr-controller sha256:0cf5fd36ac4b95f9de506623b902118a90ff17a07b663aad5d57c425ca44038c -> 4.8.15-x86_64-pod sha256:0d1c356c26d6e5945a488ab2b050b75a8b838fc948a75c0fa13a9084974680cb -> 4.8.15-x86_64-kube-client-agent ..... sha256:66e37d2532607e6c91eedf23b9600b4db904ce68e92b43c43d5b417ca6c8e63c mirror.registry.com:443/ocp/release:4.5.41-multus-admission-controller sha256:d36efdbf8d5b2cbc4dcdbd64297107d88a31ef6b0ec4a39695915c10db4973f1 mirror.registry.com:443/ocp/release:4.5.41-cluster-kube-scheduler-operator sha256:bd1baa5c8239b23ecdf76819ddb63cd1cd6091119fecdbf1a0db1fb3760321a2 mirror.registry.com:443/ocp/release:4.5.41-aws-machine-controllers info: Mirroring completed in 2.02s (0B/s) Success Update image: mirror.registry.com:443/ocp/release:4.5.41-x86_64 Mirror prefix: mirror.registry.com:443/ocp/release Mirror any other registries, as needed: USD oc image mirror <online_registry>/my/image:latest <mirror_registry> Additional information For more information about mirroring Operator catalogs, see Mirroring an Operator catalog . For more information about the oc adm catalog mirror command, see the OpenShift CLI administrator command reference . 14.5. 
Configuring the cluster for the mirror registry After creating and mirroring the images to the mirror registry, you must modify your cluster so that pods can pull images from the mirror registry. You must: Add the mirror registry credentials to the global pull secret. Add the mirror registry server certificate to the cluster. Create an ImageContentSourcePolicy custom resource (ICSP), which associates the mirror registry with the source registry. Add mirror registry credential to the cluster global pull-secret: USD oc set data secret/pull-secret -n openshift-config --from-file=.dockerconfigjson=<pull_secret_location> 1 1 Provide the path to the new pull secret file. For example: USD oc set data secret/pull-secret -n openshift-config --from-file=.dockerconfigjson=.mirrorsecretconfigjson Add the CA-signed mirror registry server certificate to the nodes in the cluster: Create a config map that includes the server certificate for the mirror registry USD oc create configmap <config_map_name> --from-file=<mirror_address_host>..<port>=USDpath/ca.crt -n openshift-config For example: S oc create configmap registry-config --from-file=mirror.registry.com..443=/root/certs/ca-chain.cert.pem -n openshift-config Use the config map to update the image.config.openshift.io/cluster custom resource (CR). OpenShift Container Platform applies the changes to this CR to all nodes in the cluster: USD oc patch image.config.openshift.io/cluster --patch '{"spec":{"additionalTrustedCA":{"name":"<config_map_name>"}}}' --type=merge For example: USD oc patch image.config.openshift.io/cluster --patch '{"spec":{"additionalTrustedCA":{"name":"registry-config"}}}' --type=merge Create an ICSP to redirect container pull requests from the online registries to the mirror registry: Create the ImageContentSourcePolicy custom resource: apiVersion: operator.openshift.io/v1alpha1 kind: ImageContentSourcePolicy metadata: name: mirror-ocp spec: repositoryDigestMirrors: - mirrors: - mirror.registry.com:443/ocp/release 1 source: quay.io/openshift-release-dev/ocp-release 2 - mirrors: - mirror.registry.com:443/ocp/release source: quay.io/openshift-release-dev/ocp-v4.0-art-dev 1 Specifies the name of the mirror image registry and repository. 2 Specifies the online registry and repository containing the content that is mirrored. Create the ICSP object: USD oc create -f registryrepomirror.yaml Example output imagecontentsourcepolicy.operator.openshift.io/mirror-ocp created OpenShift Container Platform applies the changes to this CR to all nodes in the cluster. Verify that the credentials, CA, and ICSP for mirror registry were added: Log into a node: USD oc debug node/<node_name> Set /host as the root directory within the debug shell: sh-4.4# chroot /host Check the config.json file for the credentials: sh-4.4# cat /var/lib/kubelet/config.json Example output {"auths":{"brew.registry.redhat.io":{"xx=="},"brewregistry.stage.redhat.io":{"auth":"xxx=="},"mirror.registry.com:443":{"auth":"xx="}}} 1 1 Ensure that the mirror registry and credentials are present. Change to the certs.d directory sh-4.4# cd /etc/docker/certs.d/ List the certificates in the certs.d directory: sh-4.4# ls Example output 1 Ensure that the mirror registry is in the list. 
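If you prefer not to open an interactive debug shell, the same checks, including the registries.conf check in the next step, can be run non-interactively. This is only a convenience sketch; <node_name> is a placeholder for one of your cluster nodes:
oc debug node/<node_name> -- chroot /host cat /var/lib/kubelet/config.json
oc debug node/<node_name> -- chroot /host ls /etc/docker/certs.d/
oc debug node/<node_name> -- chroot /host cat /etc/containers/registries.conf
Each command starts a short-lived debug pod, runs the given command against the host file system, and then exits.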
Check that the ICSP added the mirror registry to the registries.conf file: sh-4.4# cat /etc/containers/registries.conf Example output unqualified-search-registries = ["registry.access.redhat.com", "docker.io"] [[registry]] prefix = "" location = "quay.io/openshift-release-dev/ocp-release" mirror-by-digest-only = true [[registry.mirror]] location = "mirror.registry.com:443/ocp/release" [[registry]] prefix = "" location = "quay.io/openshift-release-dev/ocp-v4.0-art-dev" mirror-by-digest-only = true [[registry.mirror]] location = "mirror.registry.com:443/ocp/release" The registry.mirror parameters indicate that the mirror registry is searched before the original registry. Exit the node. sh-4.4# exit 14.6. Ensure applications continue to work Before disconnecting the cluster from the network, ensure that your cluster is working as expected and all of your applications are working as expected. Procedure Use the following commands to check the status of your cluster: Ensure your pods are running: USD oc get pods --all-namespaces Example output NAMESPACE NAME READY STATUS RESTARTS AGE kube-system apiserver-watcher-ci-ln-47ltxtb-f76d1-mrffg-master-0 1/1 Running 0 39m kube-system apiserver-watcher-ci-ln-47ltxtb-f76d1-mrffg-master-1 1/1 Running 0 39m kube-system apiserver-watcher-ci-ln-47ltxtb-f76d1-mrffg-master-2 1/1 Running 0 39m openshift-apiserver-operator openshift-apiserver-operator-79c7c646fd-5rvr5 1/1 Running 3 45m openshift-apiserver apiserver-b944c4645-q694g 2/2 Running 0 29m openshift-apiserver apiserver-b944c4645-shdxb 2/2 Running 0 31m openshift-apiserver apiserver-b944c4645-x7rf2 2/2 Running 0 33m ... Ensure your nodes are in the READY status: USD oc get nodes Example output NAME STATUS ROLES AGE VERSION ci-ln-47ltxtb-f76d1-mrffg-master-0 Ready master 42m v1.27.3 ci-ln-47ltxtb-f76d1-mrffg-master-1 Ready master 42m v1.27.3 ci-ln-47ltxtb-f76d1-mrffg-master-2 Ready master 42m v1.27.3 ci-ln-47ltxtb-f76d1-mrffg-worker-a-gsxbz Ready worker 35m v1.27.3 ci-ln-47ltxtb-f76d1-mrffg-worker-b-5qqdx Ready worker 35m v1.27.3 ci-ln-47ltxtb-f76d1-mrffg-worker-c-rjkpq Ready worker 34m v1.27.3 14.7. Disconnect the cluster from the network After mirroring all the required repositories and configuring your cluster to work as a disconnected cluster, you can disconnect the cluster from the network. Note The Insights Operator is degraded when the cluster loses its Internet connection. You can avoid this problem by temporarily disabling the Insights Operator until you can restore it. 14.8. Restoring a degraded Insights Operator Disconnecting the cluster from the network necessarily causes the cluster to lose the Internet connection. The Insights Operator becomes degraded because it requires access to Red Hat Insights . This topic describes how to recover from a degraded Insights Operator. Procedure Edit your .dockerconfigjson file to remove the cloud.openshift.com entry, for example: "cloud.openshift.com":{"auth":"<hash>","email":"[email protected]"} Save the file. Update the cluster secret with the edited .dockerconfigjson file: USD oc set data secret/pull-secret -n openshift-config --from-file=.dockerconfigjson=./.dockerconfigjson Verify that the Insights Operator is no longer degraded: USD oc get co insights Example output NAME VERSION AVAILABLE PROGRESSING DEGRADED SINCE insights 4.5.41 True False False 3d 14.9. Restoring the network If you want to reconnect a disconnected cluster and pull images from online registries, delete the cluster's ImageContentSourcePolicy (ICSP) objects. 
Without the ICSP, pull requests to external registries are no longer redirected to the mirror registry. Procedure View the ICSP objects in your cluster: USD oc get imagecontentsourcepolicy Example output NAME AGE mirror-ocp 6d20h ocp4-index-0 6d18h qe45-index-0 6d15h Delete all the ICSP objects you created when disconnecting your cluster: USD oc delete imagecontentsourcepolicy <icsp_name> <icsp_name> <icsp_name> For example: USD oc delete imagecontentsourcepolicy mirror-ocp ocp4-index-0 qe45-index-0 Example output imagecontentsourcepolicy.operator.openshift.io "mirror-ocp" deleted imagecontentsourcepolicy.operator.openshift.io "ocp4-index-0" deleted imagecontentsourcepolicy.operator.openshift.io "qe45-index-0" deleted Wait for all the nodes to restart and return to the READY status and verify that the registries.conf file is pointing to the original registries and not the mirror registries: Log into a node: USD oc debug node/<node_name> Set /host as the root directory within the debug shell: sh-4.4# chroot /host Examine the registries.conf file: sh-4.4# cat /etc/containers/registries.conf Example output unqualified-search-registries = ["registry.access.redhat.com", "docker.io"] 1 1 The registry and registry.mirror entries created by the ICSPs you deleted are removed. | [
"cp </path/to/cert.crt> /usr/share/pki/ca-trust-source/anchors/",
"update-ca-trust",
"oc extract secret/pull-secret -n openshift-config --confirm --to=.",
".dockerconfigjson",
"{\"auths\":{\"<local_registry>\": {\"auth\": \"<credentials>\",\"email\": \"[email protected]\"}}},\"<registry>:<port>/<namespace>/\":{\"auth\":\"<token>\"}}}",
"{\"auths\":{\"cloud.openshift.com\":{\"auth\":\"b3BlbnNoaWZ0Y3UjhGOVZPT0lOMEFaUjdPUzRGTA==\",\"email\":\"[email protected]\"}, \"quay.io\":{\"auth\":\"b3BlbnNoaWZ0LXJlbGVhc2UtZGOVZPT0lOMEFaUGSTd4VGVGVUjdPUzRGTA==\",\"email\":\"[email protected]\"}, \"registry.connect.redhat.com\"{\"auth\":\"NTE3MTMwNDB8dWhjLTFEZlN3VHkxOSTd4VGVGVU1MdTpleUpoYkdjaUailA==\",\"email\":\"[email protected]\"}, \"registry.redhat.io\":{\"auth\":\"NTE3MTMwNDB8dWhjLTFEZlN3VH3BGSTd4VGVGVU1MdTpleUpoYkdjaU9fZw==\",\"email\":\"[email protected]\"}, \"registry.svc.ci.openshift.org\":{\"auth\":\"dXNlcjpyWjAwWVFjSEJiT2RKVW1pSmg4dW92dGp1SXRxQ3RGN1pwajJhN1ZXeTRV\"},\"my-registry:5000/my-namespace/\":{\"auth\":\"dXNlcm5hbWU6cGFzc3dvcmQ=\"}}}",
"oc adm catalog mirror registry.redhat.io/redhat/redhat-operator-index:v{product-version} <mirror_registry>:<port>/olm -a <reg_creds>",
"oc adm catalog mirror registry.redhat.io/redhat/redhat-operator-index:v4.8 mirror.registry.com:443/olm -a ./.dockerconfigjson --index-filter-by-os='.*'",
"oc adm catalog mirror <index_image> <mirror_registry>:<port>/<namespace> -a <reg_creds>",
"oc adm catalog mirror registry.redhat.io/redhat/community-operator-index:v4.8 mirror.registry.com:443/olm -a ./.dockerconfigjson --index-filter-by-os='.*'",
"oc adm release mirror -a .dockerconfigjson --from=quay.io/openshift-release-dev/ocp-release:v<product-version>-<architecture> --to=<local_registry>/<local_repository> --to-release-image=<local_registry>/<local_repository>:v<product-version>-<architecture>",
"oc adm release mirror -a .dockerconfigjson --from=quay.io/openshift-release-dev/ocp-release:4.8.15-x86_64 --to=mirror.registry.com:443/ocp/release --to-release-image=mirror.registry.com:443/ocp/release:4.8.15-x86_64",
"info: Mirroring 109 images to mirror.registry.com/ocp/release mirror.registry.com:443/ ocp/release manifests: sha256:086224cadce475029065a0efc5244923f43fb9bb3bb47637e0aaf1f32b9cad47 -> 4.8.15-x86_64-thanos sha256:0a214f12737cb1cfbec473cc301aa2c289d4837224c9603e99d1e90fc00328db -> 4.8.15-x86_64-kuryr-controller sha256:0cf5fd36ac4b95f9de506623b902118a90ff17a07b663aad5d57c425ca44038c -> 4.8.15-x86_64-pod sha256:0d1c356c26d6e5945a488ab2b050b75a8b838fc948a75c0fa13a9084974680cb -> 4.8.15-x86_64-kube-client-agent ..... sha256:66e37d2532607e6c91eedf23b9600b4db904ce68e92b43c43d5b417ca6c8e63c mirror.registry.com:443/ocp/release:4.5.41-multus-admission-controller sha256:d36efdbf8d5b2cbc4dcdbd64297107d88a31ef6b0ec4a39695915c10db4973f1 mirror.registry.com:443/ocp/release:4.5.41-cluster-kube-scheduler-operator sha256:bd1baa5c8239b23ecdf76819ddb63cd1cd6091119fecdbf1a0db1fb3760321a2 mirror.registry.com:443/ocp/release:4.5.41-aws-machine-controllers info: Mirroring completed in 2.02s (0B/s) Success Update image: mirror.registry.com:443/ocp/release:4.5.41-x86_64 Mirror prefix: mirror.registry.com:443/ocp/release",
"oc image mirror <online_registry>/my/image:latest <mirror_registry>",
"oc set data secret/pull-secret -n openshift-config --from-file=.dockerconfigjson=<pull_secret_location> 1",
"oc set data secret/pull-secret -n openshift-config --from-file=.dockerconfigjson=.mirrorsecretconfigjson",
"oc create configmap <config_map_name> --from-file=<mirror_address_host>..<port>=USDpath/ca.crt -n openshift-config",
"S oc create configmap registry-config --from-file=mirror.registry.com..443=/root/certs/ca-chain.cert.pem -n openshift-config",
"oc patch image.config.openshift.io/cluster --patch '{\"spec\":{\"additionalTrustedCA\":{\"name\":\"<config_map_name>\"}}}' --type=merge",
"oc patch image.config.openshift.io/cluster --patch '{\"spec\":{\"additionalTrustedCA\":{\"name\":\"registry-config\"}}}' --type=merge",
"apiVersion: operator.openshift.io/v1alpha1 kind: ImageContentSourcePolicy metadata: name: mirror-ocp spec: repositoryDigestMirrors: - mirrors: - mirror.registry.com:443/ocp/release 1 source: quay.io/openshift-release-dev/ocp-release 2 - mirrors: - mirror.registry.com:443/ocp/release source: quay.io/openshift-release-dev/ocp-v4.0-art-dev",
"oc create -f registryrepomirror.yaml",
"imagecontentsourcepolicy.operator.openshift.io/mirror-ocp created",
"oc debug node/<node_name>",
"sh-4.4# chroot /host",
"sh-4.4# cat /var/lib/kubelet/config.json",
"{\"auths\":{\"brew.registry.redhat.io\":{\"xx==\"},\"brewregistry.stage.redhat.io\":{\"auth\":\"xxx==\"},\"mirror.registry.com:443\":{\"auth\":\"xx=\"}}} 1",
"sh-4.4# cd /etc/docker/certs.d/",
"sh-4.4# ls",
"image-registry.openshift-image-registry.svc.cluster.local:5000 image-registry.openshift-image-registry.svc:5000 mirror.registry.com:443 1",
"sh-4.4# cat /etc/containers/registries.conf",
"unqualified-search-registries = [\"registry.access.redhat.com\", \"docker.io\"] [[registry]] prefix = \"\" location = \"quay.io/openshift-release-dev/ocp-release\" mirror-by-digest-only = true [[registry.mirror]] location = \"mirror.registry.com:443/ocp/release\" [[registry]] prefix = \"\" location = \"quay.io/openshift-release-dev/ocp-v4.0-art-dev\" mirror-by-digest-only = true [[registry.mirror]] location = \"mirror.registry.com:443/ocp/release\"",
"sh-4.4# exit",
"oc get pods --all-namespaces",
"NAMESPACE NAME READY STATUS RESTARTS AGE kube-system apiserver-watcher-ci-ln-47ltxtb-f76d1-mrffg-master-0 1/1 Running 0 39m kube-system apiserver-watcher-ci-ln-47ltxtb-f76d1-mrffg-master-1 1/1 Running 0 39m kube-system apiserver-watcher-ci-ln-47ltxtb-f76d1-mrffg-master-2 1/1 Running 0 39m openshift-apiserver-operator openshift-apiserver-operator-79c7c646fd-5rvr5 1/1 Running 3 45m openshift-apiserver apiserver-b944c4645-q694g 2/2 Running 0 29m openshift-apiserver apiserver-b944c4645-shdxb 2/2 Running 0 31m openshift-apiserver apiserver-b944c4645-x7rf2 2/2 Running 0 33m",
"oc get nodes",
"NAME STATUS ROLES AGE VERSION ci-ln-47ltxtb-f76d1-mrffg-master-0 Ready master 42m v1.27.3 ci-ln-47ltxtb-f76d1-mrffg-master-1 Ready master 42m v1.27.3 ci-ln-47ltxtb-f76d1-mrffg-master-2 Ready master 42m v1.27.3 ci-ln-47ltxtb-f76d1-mrffg-worker-a-gsxbz Ready worker 35m v1.27.3 ci-ln-47ltxtb-f76d1-mrffg-worker-b-5qqdx Ready worker 35m v1.27.3 ci-ln-47ltxtb-f76d1-mrffg-worker-c-rjkpq Ready worker 34m v1.27.3",
"\"cloud.openshift.com\":{\"auth\":\"<hash>\",\"email\":\"[email protected]\"}",
"oc set data secret/pull-secret -n openshift-config --from-file=.dockerconfigjson=./.dockerconfigjson",
"oc get co insights",
"NAME VERSION AVAILABLE PROGRESSING DEGRADED SINCE insights 4.5.41 True False False 3d",
"oc get imagecontentsourcepolicy",
"NAME AGE mirror-ocp 6d20h ocp4-index-0 6d18h qe45-index-0 6d15h",
"oc delete imagecontentsourcepolicy <icsp_name> <icsp_name> <icsp_name>",
"oc delete imagecontentsourcepolicy mirror-ocp ocp4-index-0 qe45-index-0",
"imagecontentsourcepolicy.operator.openshift.io \"mirror-ocp\" deleted imagecontentsourcepolicy.operator.openshift.io \"ocp4-index-0\" deleted imagecontentsourcepolicy.operator.openshift.io \"qe45-index-0\" deleted",
"oc debug node/<node_name>",
"sh-4.4# chroot /host",
"sh-4.4# cat /etc/containers/registries.conf",
"unqualified-search-registries = [\"registry.access.redhat.com\", \"docker.io\"] 1"
]
| https://docs.redhat.com/en/documentation/openshift_container_platform/4.14/html/postinstallation_configuration/connected-to-disconnected |
Uploading content to Red Hat Automation Hub | Uploading content to Red Hat Automation Hub Red Hat Ansible Automation Platform 2.3 Uploading your collections to Automation Hub Red Hat Customer Content Services | null | https://docs.redhat.com/en/documentation/red_hat_ansible_automation_platform/2.3/html/uploading_content_to_red_hat_automation_hub/index |
2.11. Thin Provisioning and Storage Over-Commitment | 2.11. Thin Provisioning and Storage Over-Commitment The Red Hat Virtualization Manager provides provisioning policies to optimize storage usage within the virtualization environment. A thin provisioning policy allows you to over-commit storage resources, provisioning storage based on the actual storage usage of your virtualization environment. Storage over-commitment is the allocation of more storage to virtual machines than is physically available in the storage pool. Generally, virtual machines use less storage than what has been allocated to them. Thin provisioning allows a virtual machine to operate as if the storage defined for it has been completely allocated, when in fact only a fraction of the storage has been allocated. Note While the Red Hat Virtualization Manager provides its own thin provisioning function, you should use the thin provisioning functionality of your storage back-end if it provides one. To support storage over-commitment, VDSM defines a threshold which compares logical storage allocation with actual storage usage. This threshold is used to make sure that the data written to a disk image is smaller than the logical volume that backs the disk image. QEMU identifies the highest offset written to in a logical volume, which indicates the point of greatest storage use. VDSM monitors the highest offset marked by QEMU to ensure that the usage does not cross the defined threshold. So long as VDSM continues to indicate that the highest offset remains below the threshold, the Red Hat Virtualization Manager knows that the logical volume in question has sufficient storage to continue operations. When QEMU indicates that usage has risen to exceed the threshold limit, VDSM communicates to the Manager that the disk image will soon reach the size of it's logical volume. The Red Hat Virtualization Manager requests that the SPM host extend the logical volume. This process can be repeated as long as the data storage domain for the data center has available space. When the data storage domain runs out of available free space, you must manually add storage capacity to expand it. | null | https://docs.redhat.com/en/documentation/red_hat_virtualization/4.4/html/technical_reference/over-commitment |
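To make the over-commitment idea in the section above concrete, consider a purely illustrative example with hypothetical numbers: a data storage domain with 1 TB of free physical space could back ten virtual machines that each have a 200 GB thin-provisioned disk, which is 2 TB of logical allocation against 1 TB of physical storage. Each disk initially occupies only a small logical volume; as QEMU reports the highest written offset approaching the VDSM threshold for a given disk, the Manager asks the SPM host to extend that disk's backing logical volume. The over-commitment remains safe only while the combined actual usage of all the disks stays below the 1 TB of physical space; beyond that point, the administrator must add capacity to the storage domain.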
Providing feedback on Red Hat documentation | Providing feedback on Red Hat documentation We appreciate your input on our documentation. Do let us know how we can make it better. To give feedback, create a Bugzilla ticket: Go to the Bugzilla website. In the Component section, choose documentation . Fill in the Description field with your suggestion for improvement. Include a link to the relevant part(s) of documentation. Click Submit Bug . | null | https://docs.redhat.com/en/documentation/red_hat_openshift_data_foundation/4.16/html/deploying_and_managing_openshift_data_foundation_using_red_hat_openstack_platform/providing-feedback-on-red-hat-documentation_osp |
18.4. Actions | 18.4. Actions 18.4.1. Allocate Virtual Machine Action The allocate virtual machine action allocates a virtual machine in the virtual machine pool. Example 18.5. Action to allocate a virtual machine from a virtual machine pool | [
"POST /ovirt-engine/api/vmpools/2d2d5e26-1b6e-11e1-8cda-001320f76e8e/allocatevm HTTP/1.1 Accept: application/xml Content-type: application/xml <action/>"
]
| https://docs.redhat.com/en/documentation/red_hat_virtualization/4.3/html/version_3_rest_api_guide/sect-actions7 |
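The raw HTTP request in Example 18.5 can also be issued with curl. The following is a sketch only; the Manager host name, the CA certificate path, and the credentials are placeholders, and the pool ID is the one shown in the example:
curl -X POST \
  -H "Accept: application/xml" \
  -H "Content-Type: application/xml" \
  -u "<user>@<domain>:<password>" \
  --cacert <path_to_manager_ca.pem> \
  -d "<action/>" \
  "https://<manager_fqdn>/ovirt-engine/api/vmpools/2d2d5e26-1b6e-11e1-8cda-001320f76e8e/allocatevm"
On success, the Manager allocates a virtual machine from the pool, as described above, and returns an action element in the response body.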
Chapter 14. Installing Red Hat Process Automation Manager from ZIP files | Chapter 14. Installing Red Hat Process Automation Manager from ZIP files You can use the Red Hat Process Automation Manager ZIP files (one for Business Central and one for KIE Server) to install Red Hat Process Automation Manager without using the installer. Note You should install Business Central and KIE Server on different servers in production environments. For information about installing the headless Process Automation Manager controller, see Chapter 19, Installing and running the headless Process Automation Manager controller . 14.1. Installing Business Central from the ZIP file Business Central is the graphical user interface where you create and manage business rules that KIE Server executes. You can use a deployable ZIP file to install and configure Business Central. Prerequisites A backed-up Red Hat JBoss EAP installation version 7.4 is available. The base directory of the Red Hat JBoss EAP installation is referred to as EAP_HOME . Sufficient user permissions to complete the installation are granted. The following file is downloaded as described in Chapter 12, Downloading the Red Hat Process Automation Manager installation files : rhpam-7.13.5-business-central-eap7-deployable.zip Procedure Extract the rhpam-7.13.5-business-central-eap7-deployable.zip file to a temporary directory. In the following examples this directory is called TEMP_DIR . Copy the contents of the TEMP_DIR /rhpam-7.13.5-business-central-eap7-deployable/jboss-eap-7.4 directory to EAP_HOME . When prompted, merge or replace files. Warning Ensure that the names of the Red Hat Process Automation Manager deployments that you copy do not conflict with your existing deployments in the Red Hat JBoss EAP instance. 14.2. Installing KIE Server from the ZIP file KIE Server provides the runtime environment for business assets and accesses the data stored in the assets repository (knowledge store). You can use a deployable ZIP file to install and configure KIE Server. Prerequisites A backed-up Red Hat JBoss EAP installation version 7.4 is available. The base directory of the Red Hat JBoss EAP installation is referred to as EAP_HOME . Sufficient user permissions to complete the installation are granted. The following file is downloaded as described in Chapter 12, Downloading the Red Hat Process Automation Manager installation files : rhpam-7.13.5-kie-server-ee8.zip Procedure Extract the rhpam-7.13.5-kie-server-ee8.zip archive to a temporary directory. In the following examples this directory is called TEMP_DIR . Copy the TEMP_DIR /rhpam-7.13.5-kie-server-ee8/kie-server.war directory to EAP_HOME /standalone/deployments/ . Warning Ensure the names of the Red Hat Decision Manager deployments you copy do not conflict with your existing deployments in the Red Hat JBoss EAP instance. Copy the contents of the TEMP_DIR /rhpam-7.13.5-kie-server-ee8/rhpam-7.13.5-kie-server-ee8/SecurityPolicy/ to EAP_HOME /bin . When prompted to overwrite files, click Replace . In the EAP_HOME /standalone/deployments/ directory, create an empty file named kie-server.war.dodeploy . This file ensures that KIE Server is automatically deployed when the server starts. 14.3. Creating users If you used the deployable ZIP files to install Red Hat Process Automation Manager, before you can log in to Business Central or KIE Server, you must create users. 
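Before moving on to user creation, the KIE Server ZIP steps from Section 14.2 can be summarized as a short shell sketch. This is illustrative only; TEMP_DIR and EAP_HOME stand for your own temporary directory and Red Hat JBoss EAP installation directory:
unzip rhpam-7.13.5-kie-server-ee8.zip -d TEMP_DIR
cp -r TEMP_DIR/rhpam-7.13.5-kie-server-ee8/kie-server.war EAP_HOME/standalone/deployments/
cp -r TEMP_DIR/rhpam-7.13.5-kie-server-ee8/rhpam-7.13.5-kie-server-ee8/SecurityPolicy/. EAP_HOME/bin/
touch EAP_HOME/standalone/deployments/kie-server.war.dodeploy
The paths inside the archive follow the names given in the procedure above; verify them against the extracted contents before copying.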
This section shows you how to create a Business Central user with the admin , rest-all , and kie-server roles and a KIE Server user that has the kie-server role. For information about roles, see Chapter 11, Red Hat Process Automation Manager roles and users . Note Red Hat Process Automation Manager stores user data as a set of properties or as a set of files. File-based storage provides several extra features, such as SSH login and a user maintenance UI. The user script examples in this documentation use the file-based user script, jboss-cli.sh , instead of the property-based user script, add-users.sh . Prerequisites Red Hat Process Automation Manager is installed in the base directory of the Red Hat JBoss EAP installation ( EAP_HOME ). Procedure Optional: To change Red Hat Process Automation Manager from using property-based user storage to file-based user storage, complete the following steps: Run the following command to apply the kie-fs-realm patch: Open each kie-fs-realm-users/*/<USER>.xml file where <USER> is a Red Hat Process Automation Manager user. Replace <attribute name="roles" value= with <attribute name="role" value= . In a terminal application, navigate to the EAP_HOME /bin directory. Create a user with the admin , rest-all , and kie-server roles. Note Users with the admin role are Business Central administrators. Users with the rest-all role can access Business Central REST capabilities. Users with the kie-server role can access KIE Server REST capabilities. In the following command, replace <USERNAME> and <PASSWORD> with the user name and password of your choice: USD ./bin/jboss-cli.sh --commands="embed-server --std-out=echo,/subsystem=elytron/filesystem-realm=ApplicationRealm:add-identity(identity=<USERNAME>),/subsystem=elytron/filesystem-realm=ApplicationRealm:set-password(identity=<USERNAME>, clear={password='<PASSWORD>'}),/subsystem=elytron/filesystem-realm=ApplicationRealm:add-identity-attribute(identity=<USERNAME>, name=role, value=[admin,rest-all,kie-server])" Note Make sure that the specified user name is not the same as an existing user, role, or group. For example, do not create a user with the user name admin . The password must have at least eight characters and must contain at least one number and one non-alphanumeric character, but not & (ampersand). Create a user with the kie-server role that you will use to log in to KIE Server. USD ./bin/jboss-cli.sh --commands="embed-server --std-out=echo,/subsystem=elytron/filesystem-realm=ApplicationRealm:add-identity(identity=<USERNAME>),/subsystem=elytron/filesystem-realm=ApplicationRealm:set-password(identity=<USERNAME>, clear={password='<PASSWORD>'}),/subsystem=elytron/filesystem-realm=ApplicationRealm:add-identity-attribute(identity=<USERNAME>, name=role, value=[kie-server])" Make a note of your user names and passwords. Optional: If you installed Business Central and KIE Server in the same server instance, you can create a single user that has both of these roles: USD ./bin/jboss-cli.sh --commands="embed-server --std-out=echo,/subsystem=elytron/filesystem-realm=ApplicationRealm:add-identity(identity=<USERNAME>),/subsystem=elytron/filesystem-realm=ApplicationRealm:set-password(identity=<USERNAME>, clear={password='<PASSWORD>'}),/subsystem=elytron/filesystem-realm=ApplicationRealm:add-identity-attribute(identity=<USERNAME>, name=role, value=[admin,rest-all,kie-server])" Note You should install Business Central and KIE Server on different servers in production environments.
Optional: To create several users at one time, create a file that contains the user data and run the following command, where <USER_DATA>.cli is the file that contains the user data: USD ./bin/jboss-cli.sh --file=<USER_DATA>.cli The <USER_DATA>.cli file should contain data similar to the following example: embed-server --std-out=echo # first user /subsystem=elytron/filesystem-realm=ApplicationRealm:add-identity(identity=<USERNAME>) /subsystem=elytron/filesystem-realm=ApplicationRealm:set-password(identity=<USERNAME>, clear={password='<PASSWORD>'}) /subsystem=elytron/filesystem-realm=ApplicationRealm:add-identity-attribute(identity=<USERNAME>, name=role, value=[admin,role,group]) # second user ... 14.4. Configuring KIE Server to connect to Business Central Warning This section provides a sample setup that you can use for testing purposes. Some of the values are unsuitable for a production environment, and are marked as such. If a KIE Server is not configured in your Red Hat Process Automation Manager environment, or if you require additional KIE Servers in your Red Hat Process Automation Manager environment, you must configure a KIE Server to connect to Business Central. Note If you are deploying KIE Server on Red Hat OpenShift Container Platform, see the Deploying an Red Hat Decision Manager environment on Red Hat OpenShift Container Platform 4 using Operators document for instructions about configuring it to connect to Business Central. Prerequisites Business Central and KIE Server are installed in the base directory of the Red Hat JBoss EAP installation ( EAP_HOME ) as described in the following sections: Section 14.1, "Installing Business Central from the ZIP file" Section 14.2, "Installing KIE Server from the ZIP file" Users with the following roles exist: In Business Central, a user with the role rest-all On KIE Server, a user with the role kie-server For more information, see Section 14.3, "Creating users" . Procedure In your Red Hat Process Automation Manager installation directory, navigate to the standalone-full.xml file. For example, if you use a Red Hat JBoss EAP installation for Red Hat Process Automation Manager, go to USDEAP_HOME/standalone/configuration/standalone-full.xml . Open the standalone-full.xml file and under the <system-properties> tag, set the following JVM properties: Table 14.1. JVM Properties for the managed KIE Server instance Property Value Note org.kie.server.id default-kie-server The KIE Server ID. org.kie.server.controller http://localhost:8080/business-central/rest/controller The location of Business Central. The URL for connecting to the API of Business Central. org.kie.server.controller.user controllerUser The user name with the role rest-all who can log in to the Business Central. org.kie.server.controller.pwd controllerUser1234; The password of the user who can log in to the Business Central. org.kie.server.location http://localhost:8080/kie-server/services/rest/server The location of KIE Server. The URL for connecting to the API of KIE Server. Table 14.2. JVM Properties for the Business Central instance Property Value Note org.kie.server.user controllerUser The user name with the role kie-server . org.kie.server.pwd controllerUser1234; The password of the user. 
The following example shows how to configure a KIE Server instance: <property name="org.kie.server.id" value="default-kie-server"/> <property name="org.kie.server.controller" value="http://localhost:8080/business-central/rest/controller"/> <property name="org.kie.server.controller.user" value="controllerUser"/> <property name="org.kie.server.controller.pwd" value="controllerUser1234;"/> <property name="org.kie.server.location" value="http://localhost:8080/kie-server/services/rest/server"/> The following example shows how to configure the Business Central instance: <property name="org.kie.server.user" value="controllerUser"/> <property name="org.kie.server.pwd" value="controllerUser1234;"/> To verify that KIE Server starts successfully, send a GET request to http:// SERVER:PORT /kie-server/services/rest/server/ when KIE Server is running. For more information about running Red Hat Process Automation Manager on KIE Server, see Running Red Hat Process Automation Manager . After successful authentication, you receive an XML response similar to the following example: <response type="SUCCESS" msg="Kie Server info"> <kie-server-info> <capabilities>KieServer</capabilities> <capabilities>BRM</capabilities> <capabilities>BPM</capabilities> <capabilities>CaseMgmt</capabilities> <capabilities>BPM-UI</capabilities> <capabilities>BRP</capabilities> <capabilities>DMN</capabilities> <capabilities>Swagger</capabilities> <location>http://localhost:8230/kie-server/services/rest/server</location> <messages> <content>Server KieServerInfo{serverId='first-kie-server', version='7.5.1.Final-redhat-1', location='http://localhost:8230/kie-server/services/rest/server', capabilities=[KieServer, BRM, BPM, CaseMgmt, BPM-UI, BRP, DMN, Swagger]}started successfully at Mon Feb 05 15:44:35 AEST 2018</content> <severity>INFO</severity> <timestamp>2018-02-05T15:44:35.355+10:00</timestamp> </messages> <name>first-kie-server</name> <id>first-kie-server</id> <version>7.5.1.Final-redhat-1</version> </kie-server-info> </response> Verify successful registration: Log in to Business Central. Click Menu Deploy Execution Servers . If registration is successful, you will see the registered server ID. 14.5. Thread efficiency To ensure that the optimal number of threads is used, set the value of the threading system properties to the sum of the number of CPUs plus one. In your Red Hat Process Automation Manager installation directory, navigate to the standalone-full.xml file. For example, if you use a Red Hat JBoss EAP installation for Red Hat Process Automation Manager, go to USDEAP_HOME/standalone/configuration/standalone-full.xml . Open the standalone-full.xml file. Under the <system-properties> tag, set the value of the following JVM properties to the number of CPUs plus one: org.appformer.concurrent.managed.thread.limit org.appformer.concurrent.unmanaged.thread.limit org.appformer.concurrent.indexing.thread.limit org.appformer.concurrent.rest.api.thread.limit Note The number of CPUs plus one is a valid baseline value for all properties. You might have to fine-tune further based on additional testing. | [
"./bin/elytron-tool.sh filesystem-realm --users-file application-users.properties --roles-file application-roles.properties --output-location kie-fs-realm-users",
"./bin/jboss-cli.sh --commands=\"embed-server --std-out=echo,/subsystem=elytron/filesystem-realm=ApplicationRealm:add-identity(identity=<USERNAME>),/subsystem=elytron/filesystem-realm=ApplicationRealm:set-password(identity=<USERNAME>, clear={password='<PASSWORD>'}),/subsystem=elytron/filesystem-realm=ApplicationRealm:add-identity-attribute(identity=<USERNAME>, name=role, value=[admin,rest-all,kie-server])\"",
"./bin/jboss-cli.sh --commands=\"embed-server --std-out=echo,/subsystem=elytron/filesystem-realm=ApplicationRealm:add-identity(identity=<USERNAME>),/subsystem=elytron/filesystem-realm=ApplicationRealm:set-password(identity=<USERNAME>, clear={password='<PASSWORD>'}),/subsystem=elytron/filesystem-realm=ApplicationRealm:add-identity-attribute(identity=<USERNAME>, name=role, value=[kie-server])\"",
"./bin/jboss-cli.sh --commands=\"embed-server --std-out=echo,/subsystem=elytron/filesystem-realm=ApplicationRealm:add-identity(identity=<USERNAME>),/subsystem=elytron/filesystem-realm=ApplicationRealm:set-password(identity=<USERNAME>, clear={password='<PASSWORD>'}),/subsystem=elytron/filesystem-realm=ApplicationRealm:add-identity-attribute(identity=<USERNAME>, name=role, value=[admin,rest-all,kie-server])\"",
"./bin/jboss-cli.sh --file=<USER_DATA>.cli",
"embed-server --std-out=echo first user /subsystem=elytron/filesystem-realm=ApplicationRealm:add-identity(identity=<USERNAME>) /subsystem=elytron/filesystem-realm=ApplicationRealm:set-password(identity=<USERNAME>, clear={password='<PASSWORD>'}) /subsystem=elytron/filesystem-realm=ApplicationRealm:add-identity-attribute(identity=<USERNAME>, name=role, value=[admin,role,group]) second user",
"<property name=\"org.kie.server.id\" value=\"default-kie-server\"/> <property name=\"org.kie.server.controller\" value=\"http://localhost:8080/business-central/rest/controller\"/> <property name=\"org.kie.server.controller.user\" value=\"controllerUser\"/> <property name=\"org.kie.server.controller.pwd\" value=\"controllerUser1234;\"/> <property name=\"org.kie.server.location\" value=\"http://localhost:8080/kie-server/services/rest/server\"/>",
"<property name=\"org.kie.server.user\" value=\"controllerUser\"/> <property name=\"org.kie.server.pwd\" value=\"controllerUser1234;\"/>",
"<response type=\"SUCCESS\" msg=\"Kie Server info\"> <kie-server-info> <capabilities>KieServer</capabilities> <capabilities>BRM</capabilities> <capabilities>BPM</capabilities> <capabilities>CaseMgmt</capabilities> <capabilities>BPM-UI</capabilities> <capabilities>BRP</capabilities> <capabilities>DMN</capabilities> <capabilities>Swagger</capabilities> <location>http://localhost:8230/kie-server/services/rest/server</location> <messages> <content>Server KieServerInfo{serverId='first-kie-server', version='7.5.1.Final-redhat-1', location='http://localhost:8230/kie-server/services/rest/server', capabilities=[KieServer, BRM, BPM, CaseMgmt, BPM-UI, BRP, DMN, Swagger]}started successfully at Mon Feb 05 15:44:35 AEST 2018</content> <severity>INFO</severity> <timestamp>2018-02-05T15:44:35.355+10:00</timestamp> </messages> <name>first-kie-server</name> <id>first-kie-server</id> <version>7.5.1.Final-redhat-1</version> </kie-server-info> </response>",
"org.appformer.concurrent.managed.thread.limit org.appformer.concurrent.unmanaged.thread.limit org.appformer.concurrent.indexing.thread.limit org.appformer.concurrent.rest.api.thread.limit"
]
| https://docs.redhat.com/en/documentation/red_hat_decision_manager/7.13/html/installing_and_configuring_red_hat_decision_manager/assembly_installing-on-eap-deployable_install-on-eap |
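Note The ZIP installation steps in Sections 14.1 and 14.2 above can be scripted. The following shell sketch is illustrative only: it assumes that both archives are in the current directory and that EAP_HOME points to a backed-up Red Hat JBoss EAP 7.4 installation; the EAP_HOME value shown is a placeholder.
# Sketch of the Business Central and KIE Server ZIP installation; adjust EAP_HOME to your environment.
EAP_HOME=/opt/jboss-eap-7.4
TEMP_DIR=$(mktemp -d)
# Business Central (Section 14.1)
unzip -q rhpam-7.13.5-business-central-eap7-deployable.zip -d "$TEMP_DIR"
cp -r "$TEMP_DIR"/rhpam-7.13.5-business-central-eap7-deployable/jboss-eap-7.4/* "$EAP_HOME"/
# KIE Server (Section 14.2)
unzip -q rhpam-7.13.5-kie-server-ee8.zip -d "$TEMP_DIR"
cp -r "$TEMP_DIR"/rhpam-7.13.5-kie-server-ee8/kie-server.war "$EAP_HOME"/standalone/deployments/
cp -r "$TEMP_DIR"/rhpam-7.13.5-kie-server-ee8/rhpam-7.13.5-kie-server-ee8/SecurityPolicy/* "$EAP_HOME"/bin/
# Marker file so that Red Hat JBoss EAP deploys KIE Server on startup
touch "$EAP_HOME"/standalone/deployments/kie-server.war.dodeploy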
7.65. gnutls | 7.65. gnutls 7.65.1. RHSA-2015:1457 - Moderate: gnutls security and bug fix update Updated gnutls packages that fix three security issues and one bug are now available for Red Hat Enterprise Linux 6. Red Hat Product Security has rated this update as having Moderate security impact. Common Vulnerability Scoring System (CVSS) base scores, which give detailed severity ratings, are available for each vulnerability from the CVE links in the References section. The GnuTLS library provides support for cryptographic algorithms and for protocols such as Transport Layer Security (TLS). Security Fixes CVE-2014-8155 It was found that GnuTLS did not check activation and expiration dates of CA certificates. This could cause an application using GnuTLS to incorrectly accept a certificate as valid when its issuing CA is already expired. CVE-2015-0282 It was found that GnuTLS did not verify whether a hashing algorithm listed in a signature matched the hashing algorithm listed in the certificate. An attacker could create a certificate that used a different hashing algorithm than it claimed, possibly causing GnuTLS to use an insecure, disallowed hashing algorithm during certificate verification. CVE-2015-0294 It was discovered that GnuTLS did not check if all sections of X.509 certificates indicate the same signature algorithm. This flaw, in combination with a different flaw, could possibly lead to a bypass of the certificate signature check. The CVE-2014-8155 issue was discovered by Marcel Kolaja of Red Hat. The CVE-2015-0282 and CVE-2015-0294 issues were discovered by Nikos Mavrogiannopoulos of the Red Hat Security Technologies Team. Bug Fix BZ# 1036385 Previously, under certain circumstances, the certtool utility could generate X.509 certificates which contained a negative modulus. Consequently, such certificates could have interoperation problems with the software using them. The bug has been fixed, and certtool no longer generates X.509 certificates containing a negative modulus. Users of gnutls are advised to upgrade to these updated packages, which contain backported patches to correct these issues. | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/6.7_technical_notes/package-gnutls |
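Note To apply the erratum and spot-check the certtool fix described above, a sequence along the following lines can be used. This is a hedged sketch: the self-signed certificate generation prompts interactively for certificate fields, the file names are placeholders, and the exact certtool output format can vary between GnuTLS releases.
# Update the gnutls packages (run as root)
yum update gnutls gnutls-utils
# Generate a test key and a self-signed certificate, then inspect the certificate details, including the modulus
certtool --generate-privkey --outfile test-key.pem
certtool --generate-self-signed --load-privkey test-key.pem --outfile test-cert.pem
certtool --certificate-info --infile test-cert.pem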
Chapter 5. Removed functionality | Chapter 5. Removed functionality An overview of removed functionality in all supported releases up to this release of Red Hat Trusted Profile Analyzer. Removed the Dependency Analytics report from Red Hat Trusted Profile Analyzer With this release, we removed the Dependency Analytics report functionality from Red Hat's Trusted Profile Analyzer product. | null | https://docs.redhat.com/en/documentation/red_hat_trusted_profile_analyzer/1.3/html/release_notes/removed-functionality
Chapter 9. Management of MDS service using the Ceph Orchestrator | Chapter 9. Management of MDS service using the Ceph Orchestrator As a storage administrator, you can use Ceph Orchestrator with Cephadm in the backend to deploy the MDS service. By default, a Ceph File System (CephFS) uses only one active MDS daemon. However, systems with many clients benefit from multiple active MDS daemons. This section covers the following administrative tasks: Deploying the MDS service using the command line interface . Deploying the MDS service using the service specification . Removing the MDS service using the Ceph Orchestrator . Prerequisites A running Red Hat Ceph Storage cluster. Root-level access to all the nodes. Hosts are added to the cluster. All manager, monitor, and OSD daemons are deployed. 9.1. Deploying the MDS service using the command line interface Using the Ceph Orchestrator, you can deploy the Metadata Server (MDS) service using the placement specification in the command line interface. Ceph File System (CephFS) requires one or more MDS daemons. Note Ensure you have at least two pools, one for Ceph File System (CephFS) data and one for CephFS metadata. Prerequisites A running Red Hat Ceph Storage cluster. Hosts are added to the cluster. All manager, monitor, and OSD daemons are deployed. Procedure Log into the Cephadm shell: Example There are two ways of deploying MDS daemons using a placement specification: Method 1 Use ceph fs volume to create the MDS daemons. This creates the CephFS volume and pools associated with the CephFS, and also starts the MDS service on the hosts. Syntax Note By default, replicated pools are created for this command. Example Method 2 Create the pools, CephFS, and then deploy the MDS service using a placement specification: Create the pools for CephFS: Syntax Example Typically, the metadata pool can start with a conservative number of Placement Groups (PGs), as it generally has far fewer objects than the data pool. It is possible to increase the number of PGs if needed. The pool sizes range from 64 PGs to 512 PGs. The size of the data pool is proportional to the number and sizes of the files you expect in the file system. Important For the metadata pool, consider using: A higher replication level, because any data loss to this pool can make the whole file system inaccessible. Storage with lower latency, such as Solid-State Drive (SSD) disks, because this directly affects the observed latency of file system operations on clients. Create the file system for the data pools and metadata pools: Syntax Example Deploy the MDS service using the ceph orch apply command: Syntax Example Verification List the service: Example Check the CephFS status: Example List the hosts, daemons, and processes: Syntax Example Additional Resources See the Red Hat Ceph Storage File System Guide for more information about creating the Ceph File System (CephFS). For information on setting the pool values, see Setting number of placement groups in a pool . 9.2. Deploying the MDS service using the service specification Using the Ceph Orchestrator, you can deploy the MDS service using the service specification. Note Ensure you have at least two pools, one for the Ceph File System (CephFS) data and one for the CephFS metadata. Prerequisites A running Red Hat Ceph Storage cluster. Hosts are added to the cluster. All manager, monitor, and OSD daemons are deployed.
Procedure Create the mds.yaml file: Example Edit the mds.yaml file to include the following details: Syntax Example Mount the YAML file under a directory in the container: Example Navigate to the directory: Example Log into the Cephadm shell: Example Navigate to the following directory: Example Deploy the MDS service using the service specification: Syntax Example Once the MDS service is deployed and functional, create the CephFS: Syntax Example Verification List the service: Example List the hosts, daemons, and processes: Syntax Example Additional Resources See the Red Hat Ceph Storage File System Guide for more information about creating the Ceph File System (CephFS). 9.3. Removing the MDS service using the Ceph Orchestrator You can remove the service using the ceph orch rm command. Alternatively, you can remove the file system and the associated pools. Prerequisites A running Red Hat Ceph Storage cluster. Root-level access to all the nodes. Hosts are added to the cluster. At least one MDS daemon is deployed on the hosts. Procedure There are two ways of removing MDS daemons from the cluster: Method 1 Remove the CephFS volume, associated pools, and the services: Log into the Cephadm shell: Example Set the configuration parameter mon_allow_pool_delete to true : Example Remove the file system: Syntax Example This command removes the file system and its data and metadata pools. It also tries to remove the MDS using the enabled ceph-mgr Orchestrator module. Method 2 Use the ceph orch rm command to remove the MDS service from the entire cluster: List the service: Example Remove the service: Syntax Example Verification List the hosts, daemons, and processes: Syntax Example Additional Resources See the Deploying the MDS service using the command line interface section in the Red Hat Ceph Storage Operations Guide for more information. See the Deploying the MDS service using the service specification section in the Red Hat Ceph Storage Operations Guide for more information. | [
"cephadm shell",
"ceph fs volume create FILESYSTEM_NAME --placement=\" NUMBER_OF_DAEMONS HOST_NAME_1 HOST_NAME_2 HOST_NAME_3 \"",
"ceph fs volume create test --placement=\"2 host01 host02\"",
"ceph osd pool create DATA_POOL [ PG_NUM ] ceph osd pool create METADATA_POOL [ PG_NUM ]",
"ceph osd pool create cephfs_data 64 ceph osd pool create cephfs_metadata 64",
"ceph fs new FILESYSTEM_NAME METADATA_POOL DATA_POOL",
"ceph fs new test cephfs_metadata cephfs_data",
"ceph orch apply mds FILESYSTEM_NAME --placement=\" NUMBER_OF_DAEMONS HOST_NAME_1 HOST_NAME_2 HOST_NAME_3 \"",
"ceph orch apply mds test --placement=\"2 host01 host02\"",
"ceph orch ls",
"ceph fs ls ceph fs status",
"ceph orch ps --daemon_type= DAEMON_NAME",
"ceph orch ps --daemon_type=mds",
"touch mds.yaml",
"service_type: mds service_id: FILESYSTEM_NAME placement: hosts: - HOST_NAME_1 - HOST_NAME_2 - HOST_NAME_3",
"service_type: mds service_id: fs_name placement: hosts: - host01 - host02",
"cephadm shell --mount mds.yaml:/var/lib/ceph/mds/mds.yaml",
"cd /var/lib/ceph/mds/",
"cephadm shell",
"cd /var/lib/ceph/mds/",
"ceph orch apply -i FILE_NAME .yaml",
"ceph orch apply -i mds.yaml",
"ceph fs new CEPHFS_NAME METADATA_POOL DATA_POOL",
"ceph fs new test metadata_pool data_pool",
"ceph orch ls",
"ceph orch ps --daemon_type= DAEMON_NAME",
"ceph orch ps --daemon_type=mds",
"cephadm shell",
"ceph config set mon mon_allow_pool_delete true",
"ceph fs volume rm FILESYSTEM_NAME --yes-i-really-mean-it",
"ceph fs volume rm cephfs-new --yes-i-really-mean-it",
"ceph orch ls",
"ceph orch rm SERVICE_NAME",
"ceph orch rm mds.test",
"ceph orch ps",
"ceph orch ps"
]
| https://docs.redhat.com/en/documentation/red_hat_ceph_storage/7/html/operations_guide/management-of-mds-service-using-the-ceph-orchestrator |
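Note Taken together, the Method 2 commands from Section 9.1 form a short end-to-end sequence. The sketch below is illustrative: it assumes a file system named test and the placement hosts host01 and host02 used in the examples above, and that the commands are run from inside the Cephadm shell.
# Method 2 from Section 9.1, end to end (run inside `cephadm shell`)
ceph osd pool create cephfs_data 64
ceph osd pool create cephfs_metadata 64
ceph fs new test cephfs_metadata cephfs_data
ceph orch apply mds test --placement="2 host01 host02"
# Verification
ceph orch ls
ceph fs status
ceph orch ps --daemon_type=mds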
Chapter 21. Ensuring the presence of host-based access control rules in IdM using Ansible playbooks | Chapter 21. Ensuring the presence of host-based access control rules in IdM using Ansible playbooks Ansible is an automation tool used to configure systems, deploy software, and perform rolling updates. It includes support for Identity Management (IdM). Learn more about Identity Management (IdM) host-based access policies and how to define them using Ansible . 21.1. Host-based access control rules in IdM Host-based access control (HBAC) rules define which users or user groups can access which hosts or host groups by using which services or services in a service group. As a system administrator, you can use HBAC rules to achieve the following goals: Limit access to a specified system in your domain to members of a specific user group. Allow only a specific service to be used to access systems in your domain. By default, IdM is configured with a default HBAC rule named allow_all , which means universal access to every host for every user via every relevant service in the entire IdM domain. You can fine-tune access to different hosts by replacing the default allow_all rule with your own set of HBAC rules. For centralized and simplified access control management, you can apply HBAC rules to user groups, host groups, or service groups instead of individual users, hosts, or services. 21.2. Ensuring the presence of an HBAC rule in IdM using an Ansible playbook Follow this procedure to ensure the presence of a host-based access control (HBAC) rule in Identity Management (IdM) using an Ansible playbook. Prerequisites You have configured your Ansible control node to meet the following requirements: You are using Ansible version 2.13 or later. You have installed the ansible-freeipa package. The example assumes that in the ~/ MyPlaybooks / directory, you have created an Ansible inventory file with the fully-qualified domain name (FQDN) of the IdM server. The example assumes that the secret.yml Ansible vault stores your ipaadmin_password . The target node, that is the node on which the ansible-freeipa module is executed, is part of the IdM domain as an IdM client, server or replica. The users and user groups you want to use for your HBAC rule exist in IdM. See Managing user accounts using Ansible playbooks and Ensuring the presence of IdM groups and group members using Ansible playbooks for details. The hosts and host groups to which you want to apply your HBAC rule exist in IdM. See Managing hosts using Ansible playbooks and Managing host groups using Ansible playbooks for details. Procedure Create an inventory file, for example inventory.file , and define ipaserver in it: Create your Ansible playbook file that defines the HBAC policy whose presence you want to ensure. To simplify this step, you can copy and modify the example in the /usr/share/doc/ansible-freeipa/playbooks/hbacrule/ensure-hbacrule-allhosts-present.yml file: Run the playbook: Verification Log in to the IdM Web UI as administrator. Navigate to Policy Host-Based-Access-Control HBAC Test . In the Who tab, select idm_user. In the Accessing tab, select client.idm.example.com . In the Via service tab, select sshd . In the Rules tab, select login . In the Run test tab, click the Run test button. If you see ACCESS GRANTED, the HBAC rule is implemented successfully. Additional resources See the README-hbacsvc.md , README-hbacsvcgroup.md , and README-hbacrule.md files in the /usr/share/doc/ansible-freeipa directory. 
See the playbooks in the subdirectories of the /usr/share/doc/ansible-freeipa/playbooks directory. | [
"[ipaserver] server.idm.example.com",
"--- - name: Playbook to handle hbacrules hosts: ipaserver vars_files: - /home/user_name/MyPlaybooks/secret.yml tasks: # Ensure idm_user can access client.idm.example.com via the sshd service - ipahbacrule: ipaadmin_password: \"{{ ipaadmin_password }}\" name: login user: idm_user host: client.idm.example.com hbacsvc: - sshd state: present",
"ansible-playbook --vault-password-file=password_file -v -i path_to_inventory_directory/inventory.file path_to_playbooks_directory/ensure-new-hbacrule-present.yml"
]
| https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/8/html/using_ansible_to_install_and_manage_identity_management/ensuring-the-presence-of-host-based-access-control-rules-in-idm-using-Ansible-playbooks_using-ansible-to-install-and-manage-idm |
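Note The Web UI verification above can also be performed from the command line with the ipa hbactest utility on an enrolled IdM client or server. The following is a minimal sketch that assumes a valid Kerberos ticket and reuses the rule, user, host, and service names from the playbook example; expect output such as "Access granted: True" when the rule applies.
kinit admin
ipa hbactest --user=idm_user --host=client.idm.example.com --service=sshd --rules=login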
Chapter 9. Configuring KIE Server to connect to Business Central | Chapter 9. Configuring KIE Server to connect to Business Central Warning This section provides a sample setup that you can use for testing purposes. Some of the values are unsuitable for a production environment, and are marked as such. If a KIE Server is not configured in your Red Hat Process Automation Manager environment, or if you require additional KIE Servers in your Red Hat Process Automation Manager environment, you must configure a KIE Server to connect to Business Central. Note If you are deploying KIE Server on Red Hat OpenShift Container Platform, see the Deploying an Red Hat Process Automation Manager environment on Red Hat OpenShift Container Platform 4 using Operators document for instructions about configuring it to connect to Business Central. KIE Server can be managed or unmanaged. If KIE Server is unmanaged, you must manually create and maintain KIE containers (deployment units). If KIE Server is managed, the Process Automation Manager controller manages the KIE Server configuration and you interact with the Process Automation Manager controller to create and maintain the KIE containers. Note Make the changes described in this section if KIE Server is managed by Business Central and you have installed Red Hat Process Automation Manager from the ZIP files. If you have installed Business Central, you can use the headless Process Automation Manager controller to manage KIE Server, as described in Chapter 10, Installing and running the headless Process Automation Manager controller . Prerequisites Business Central and KIE Server are installed in the base directory of the Red Hat JBoss EAP installation ( EAP_HOME ). Note You must install Business Central and KIE Server on different servers in production environments. In this sample situation, we use only one user named controllerUser , containing both rest-all and the kie-server roles. However, if you install KIE Server and Business Central on the same server, for example in a development environment, make the changes in the shared standalone-full.xml file as described in this section. Users with the following roles exist: In Business Central, a user with the role rest-all On KIE Server, a user with the role kie-server Procedure In your Red Hat Process Automation Manager installation directory, navigate to the standalone-full.xml file. For example, if you use a Red Hat JBoss EAP installation for Red Hat Process Automation Manager, go to USDEAP_HOME/standalone/configuration/standalone-full.xml . Open the standalone-full.xml file and under the <system-properties> tag, set the following JVM properties: Table 9.1. JVM Properties for the KIE Server instance Property Value Note org.kie.server.id default-kie-server The KIE Server ID. org.kie.server.controller http://localhost:8080/business-central/rest/controller The location of Business Central. The URL for connecting to the API of Business Central. org.kie.server.controller.user controllerUser The user name with the role rest-all who can log in to the Business Central. org.kie.server.controller.pwd controllerUser1234; The password of the user who can log in to the Business Central. org.kie.server.location http://localhost:8080/kie-server/services/rest/server The location of KIE Server. The URL for connecting to the API of KIE Server. Table 9.2. JVM Properties for the Business Central instance Property Value Note org.kie.server.user controllerUser The user name with the role kie-server . 
org.kie.server.pwd controllerUser1234; The password of the user. The following example shows how to configure a KIE Server instance: <property name="org.kie.server.id" value="default-kie-server"/> <property name="org.kie.server.controller" value="http://localhost:8080/business-central/rest/controller"/> <property name="org.kie.server.controller.user" value="controllerUser"/> <property name="org.kie.server.controller.pwd" value="controllerUser1234;"/> <property name="org.kie.server.location" value="http://localhost:8080/kie-server/services/rest/server"/> The following example shows how to configure the Business Central instance: <property name="org.kie.server.user" value="controllerUser"/> <property name="org.kie.server.pwd" value="controllerUser1234;"/> To verify that KIE Server starts successfully, send a GET request to http:// SERVER:PORT /kie-server/services/rest/server/ when KIE Server is running. For more information about running Red Hat Process Automation Manager on KIE Server, see Running Red Hat Process Automation Manager . After successful authentication, you receive an XML response similar to the following example: <response type="SUCCESS" msg="Kie Server info"> <kie-server-info> <capabilities>KieServer</capabilities> <capabilities>BRM</capabilities> <capabilities>BPM</capabilities> <capabilities>CaseMgmt</capabilities> <capabilities>BPM-UI</capabilities> <capabilities>BRP</capabilities> <capabilities>DMN</capabilities> <capabilities>Swagger</capabilities> <location>http://localhost:8230/kie-server/services/rest/server</location> <messages> <content>Server KieServerInfo{serverId='first-kie-server', version='7.5.1.Final-redhat-1', location='http://localhost:8230/kie-server/services/rest/server', capabilities=[KieServer, BRM, BPM, CaseMgmt, BPM-UI, BRP, DMN, Swagger]}started successfully at Mon Feb 05 15:44:35 AEST 2018</content> <severity>INFO</severity> <timestamp>2018-02-05T15:44:35.355+10:00</timestamp> </messages> <name>first-kie-server</name> <id>first-kie-server</id> <version>7.5.1.Final-redhat-1</version> </kie-server-info> </response> Verify successful registration: Log in to Business Central. Click Menu Deploy Execution Servers . If registration is successful, you will see the registered server ID. | [
"<property name=\"org.kie.server.id\" value=\"default-kie-server\"/> <property name=\"org.kie.server.controller\" value=\"http://localhost:8080/business-central/rest/controller\"/> <property name=\"org.kie.server.controller.user\" value=\"controllerUser\"/> <property name=\"org.kie.server.controller.pwd\" value=\"controllerUser1234;\"/> <property name=\"org.kie.server.location\" value=\"http://localhost:8080/kie-server/services/rest/server\"/>",
"<property name=\"org.kie.server.user\" value=\"controllerUser\"/> <property name=\"org.kie.server.pwd\" value=\"controllerUser1234;\"/>",
"<response type=\"SUCCESS\" msg=\"Kie Server info\"> <kie-server-info> <capabilities>KieServer</capabilities> <capabilities>BRM</capabilities> <capabilities>BPM</capabilities> <capabilities>CaseMgmt</capabilities> <capabilities>BPM-UI</capabilities> <capabilities>BRP</capabilities> <capabilities>DMN</capabilities> <capabilities>Swagger</capabilities> <location>http://localhost:8230/kie-server/services/rest/server</location> <messages> <content>Server KieServerInfo{serverId='first-kie-server', version='7.5.1.Final-redhat-1', location='http://localhost:8230/kie-server/services/rest/server', capabilities=[KieServer, BRM, BPM, CaseMgmt, BPM-UI, BRP, DMN, Swagger]}started successfully at Mon Feb 05 15:44:35 AEST 2018</content> <severity>INFO</severity> <timestamp>2018-02-05T15:44:35.355+10:00</timestamp> </messages> <name>first-kie-server</name> <id>first-kie-server</id> <version>7.5.1.Final-redhat-1</version> </kie-server-info> </response>"
]
| https://docs.redhat.com/en/documentation/red_hat_process_automation_manager/7.13/html/managing_red_hat_process_automation_manager_and_kie_server_settings/kie-server-configure-central-proc_execution-server |
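Note The verification GET request described above can be issued with curl. This sketch assumes KIE Server is listening on localhost:8080 and reuses the sample controllerUser credentials from the tables above; in a production environment, use your own credentials and HTTPS.
curl --user 'controllerUser:controllerUser1234;' --header 'Accept: application/xml' http://localhost:8080/kie-server/services/rest/server/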
Chapter 11. Preparing for users | Chapter 11. Preparing for users After installing OpenShift Container Platform, you can further expand and customize your cluster to your requirements, including taking steps to prepare for users. 11.1. Understanding identity provider configuration The OpenShift Container Platform control plane includes a built-in OAuth server. Developers and administrators obtain OAuth access tokens to authenticate themselves to the API. As an administrator, you can configure OAuth to specify an identity provider after you install your cluster. 11.1.1. About identity providers in OpenShift Container Platform By default, only a kubeadmin user exists on your cluster. To specify an identity provider, you must create a custom resource (CR) that describes that identity provider and add it to the cluster. Note OpenShift Container Platform user names containing / , : , and % are not supported. 11.1.2. Supported identity providers You can configure the following types of identity providers: Identity provider Description htpasswd Configure the htpasswd identity provider to validate user names and passwords against a flat file generated using htpasswd . Keystone Configure the keystone identity provider to integrate your OpenShift Container Platform cluster with Keystone to enable shared authentication with an OpenStack Keystone v3 server configured to store users in an internal database. LDAP Configure the ldap identity provider to validate user names and passwords against an LDAPv3 server, using simple bind authentication. Basic authentication Configure a basic-authentication identity provider for users to log in to OpenShift Container Platform with credentials validated against a remote identity provider. Basic authentication is a generic backend integration mechanism. Request header Configure a request-header identity provider to identify users from request header values, such as X-Remote-User . It is typically used in combination with an authenticating proxy, which sets the request header value. GitHub or GitHub Enterprise Configure a github identity provider to validate user names and passwords against GitHub or GitHub Enterprise's OAuth authentication server. GitLab Configure a gitlab identity provider to use GitLab.com or any other GitLab instance as an identity provider. Google Configure a google identity provider using Google's OpenID Connect integration . OpenID Connect Configure an oidc identity provider to integrate with an OpenID Connect identity provider using an Authorization Code Flow . After you define an identity provider, you can use RBAC to define and apply permissions . 11.1.3. Identity provider parameters The following parameters are common to all identity providers: Parameter Description name The provider name is prefixed to provider user names to form an identity name. mappingMethod Defines how new identities are mapped to users when they log in. Enter one of the following values: claim The default value. Provisions a user with the identity's preferred user name. Fails if a user with that user name is already mapped to another identity. lookup Looks up an existing identity, user identity mapping, and user, but does not automatically provision users or identities. This allows cluster administrators to set up identities and users manually, or using an external process. Using this method requires you to manually provision users. add Provisions a user with the identity's preferred user name. 
If a user with that user name already exists, the identity is mapped to the existing user, adding to any existing identity mappings for the user. Required when multiple identity providers are configured that identify the same set of users and map to the same user names. Note When adding or changing identity providers, you can map identities from the new provider to existing users by setting the mappingMethod parameter to add . 11.1.4. Sample identity provider CR The following custom resource (CR) shows the parameters and default values that you use to configure an identity provider. This example uses the htpasswd identity provider. Sample identity provider CR apiVersion: config.openshift.io/v1 kind: OAuth metadata: name: cluster spec: identityProviders: - name: my_identity_provider 1 mappingMethod: claim 2 type: HTPasswd htpasswd: fileData: name: htpass-secret 3 1 This provider name is prefixed to provider user names to form an identity name. 2 Controls how mappings are established between this provider's identities and User objects. 3 An existing secret containing a file generated using htpasswd . 11.2. Using RBAC to define and apply permissions Understand and apply role-based access control. 11.2.1. RBAC overview Role-based access control (RBAC) objects determine whether a user is allowed to perform a given action within a project. Cluster administrators can use the cluster roles and bindings to control who has various access levels to the OpenShift Container Platform platform itself and all projects. Developers can use local roles and bindings to control who has access to their projects. Note that authorization is a separate step from authentication, which is more about determining the identity of who is taking the action. Authorization is managed using: Authorization object Description Rules Sets of permitted verbs on a set of objects. For example, whether a user or service account can create pods. Roles Collections of rules. You can associate, or bind, users and groups to multiple roles. Bindings Associations between users and/or groups with a role. There are two levels of RBAC roles and bindings that control authorization: RBAC level Description Cluster RBAC Roles and bindings that are applicable across all projects. Cluster roles exist cluster-wide, and cluster role bindings can reference only cluster roles. Local RBAC Roles and bindings that are scoped to a given project. While local roles exist only in a single project, local role bindings can reference both cluster and local roles. A cluster role binding is a binding that exists at the cluster level. A role binding exists at the project level. The cluster role view must be bound to a user using a local role binding for that user to view the project. Create local roles only if a cluster role does not provide the set of permissions needed for a particular situation. This two-level hierarchy allows reuse across multiple projects through the cluster roles while allowing customization inside of individual projects through local roles. During evaluation, both the cluster role bindings and the local role bindings are used. For example: Cluster-wide "allow" rules are checked. Locally-bound "allow" rules are checked. Deny by default. 11.2.1.1. Default cluster roles OpenShift Container Platform includes a set of default cluster roles that you can bind to users and groups cluster-wide or locally. Important It is not recommended to manually modify the default cluster roles. 
Modifications to these system roles can prevent a cluster from functioning properly. Default cluster role Description admin A project manager. If used in a local binding, an admin has rights to view any resource in the project and modify any resource in the project except for quota. basic-user A user that can get basic information about projects and users. cluster-admin A super-user that can perform any action in any project. When bound to a user with a local binding, they have full control over quota and every action on every resource in the project. cluster-status A user that can get basic cluster status information. cluster-reader A user that can get or view most of the objects but cannot modify them. edit A user that can modify most objects in a project but does not have the power to view or modify roles or bindings. self-provisioner A user that can create their own projects. view A user who cannot make any modifications, but can see most objects in a project. They cannot view or modify roles or bindings. Be mindful of the difference between local and cluster bindings. For example, if you bind the cluster-admin role to a user by using a local role binding, it might appear that this user has the privileges of a cluster administrator. This is not the case. Binding the cluster-admin to a user in a project grants super administrator privileges for only that project to the user. That user has the permissions of the cluster role admin , plus a few additional permissions like the ability to edit rate limits, for that project. This binding can be confusing via the web console UI, which does not list cluster role bindings that are bound to true cluster administrators. However, it does list local role bindings that you can use to locally bind cluster-admin . The relationships between cluster roles, local roles, cluster role bindings, local role bindings, users, groups and service accounts are illustrated below. Warning The get pods/exec , get pods/* , and get * rules grant execution privileges when they are applied to a role. Apply the principle of least privilege and assign only the minimal RBAC rights required for users and agents. For more information, see RBAC rules allow execution privileges . 11.2.1.2. Evaluating authorization OpenShift Container Platform evaluates authorization by using: Identity The user name and list of groups that the user belongs to. Action The action you perform. In most cases, this consists of: Project : The project you access. A project is a Kubernetes namespace with additional annotations that allows a community of users to organize and manage their content in isolation from other communities. Verb : The action itself: get , list , create , update , delete , deletecollection , or watch . Resource name : The API endpoint that you access. Bindings The full list of bindings, the associations between users or groups with a role. OpenShift Container Platform evaluates authorization by using the following steps: The identity and the project-scoped action is used to find all bindings that apply to the user or their groups. Bindings are used to locate all the roles that apply. Roles are used to find all the rules that apply. The action is checked against each rule to find a match. If no matching rule is found, the action is then denied by default. Tip Remember that users and groups can be associated with, or bound to, multiple roles at the same time. 
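Note As a brief illustration of how bindings feed into this evaluation, the following oc commands grant a local role and then query the authorization layer. The user alice and the project my-project are placeholder names, and you need a role that permits managing role bindings in that project.
# Bind the edit cluster role to a user locally, in a single project (placeholder names)
oc adm policy add-role-to-user edit alice -n my-project
# Ask who can perform a given action in that project
oc adm policy who-can create pods -n my-project
# Check a single permission as the current user
oc auth can-i create pods -n my-project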
Project administrators can use the CLI to view local roles and bindings, including a matrix of the verbs and resources each is associated with. Important The cluster role bound to the project administrator is limited in a project through a local binding. It is not bound cluster-wide like the cluster roles granted to the cluster-admin or system:admin . Cluster roles are roles defined at the cluster level but can be bound either at the cluster level or at the project level. 11.2.1.2.1. Cluster role aggregation The default admin, edit, view, and cluster-reader cluster roles support cluster role aggregation , where the cluster rules for each role are dynamically updated as new rules are created. This feature is relevant only if you extend the Kubernetes API by creating custom resources. 11.2.2. Projects and namespaces A Kubernetes namespace provides a mechanism to scope resources in a cluster. The Kubernetes documentation has more information on namespaces. Namespaces provide a unique scope for: Named resources to avoid basic naming collisions. Delegated management authority to trusted users. The ability to limit community resource consumption. Most objects in the system are scoped by namespace, but some are excepted and have no namespace, including nodes and users. A project is a Kubernetes namespace with additional annotations and is the central vehicle by which access to resources for regular users is managed. A project allows a community of users to organize and manage their content in isolation from other communities. Users must be given access to projects by administrators, or, if they are allowed to create projects, they automatically have access to their own projects. Projects can have a separate name , displayName , and description . The mandatory name is a unique identifier for the project and is most visible when using the CLI tools or API. The maximum name length is 63 characters. The optional displayName is how the project is displayed in the web console (defaults to name ). The optional description can be a more detailed description of the project and is also visible in the web console. Each project scopes its own set of: Object Description Objects Pods, services, replication controllers, etc. Policies Rules for which users can or cannot perform actions on objects. Constraints Quotas for each kind of object that can be limited. Service accounts Service accounts act automatically with designated access to objects in the project. Cluster administrators can create projects and delegate administrative rights for the project to any member of the user community. Cluster administrators can also allow developers to create their own projects. Developers and administrators can interact with projects by using the CLI or the web console. 11.2.3. Default projects OpenShift Container Platform comes with a number of default projects, and projects starting with openshift- are the most essential to users. These projects host master components that run as pods and other infrastructure components. The pods created in these namespaces that have a critical pod annotation are considered critical, and they have guaranteed admission by kubelet. Pods created for master components in these namespaces are already marked as critical. Note You cannot assign an SCC to pods created in one of the default namespaces: default , kube-system , kube-public , openshift-node , openshift-infra , and openshift . You cannot use these namespaces for running pods or services. 11.2.4.
Viewing cluster roles and bindings You can use the oc CLI to view cluster roles and bindings by using the oc describe command. Prerequisites Install the oc CLI. Obtain permission to view the cluster roles and bindings. Users with the cluster-admin default cluster role bound cluster-wide can perform any action on any resource, including viewing cluster roles and bindings. Procedure To view the cluster roles and their associated rule sets: USD oc describe clusterrole.rbac Example output Name: admin Labels: kubernetes.io/bootstrapping=rbac-defaults Annotations: rbac.authorization.kubernetes.io/autoupdate: true PolicyRule: Resources Non-Resource URLs Resource Names Verbs --------- ----------------- -------------- ----- .packages.apps.redhat.com [] [] [* create update patch delete get list watch] imagestreams [] [] [create delete deletecollection get list patch update watch create get list watch] imagestreams.image.openshift.io [] [] [create delete deletecollection get list patch update watch create get list watch] secrets [] [] [create delete deletecollection get list patch update watch get list watch create delete deletecollection patch update] buildconfigs/webhooks [] [] [create delete deletecollection get list patch update watch get list watch] buildconfigs [] [] [create delete deletecollection get list patch update watch get list watch] buildlogs [] [] [create delete deletecollection get list patch update watch get list watch] deploymentconfigs/scale [] [] [create delete deletecollection get list patch update watch get list watch] deploymentconfigs [] [] [create delete deletecollection get list patch update watch get list watch] imagestreamimages [] [] [create delete deletecollection get list patch update watch get list watch] imagestreammappings [] [] [create delete deletecollection get list patch update watch get list watch] imagestreamtags [] [] [create delete deletecollection get list patch update watch get list watch] processedtemplates [] [] [create delete deletecollection get list patch update watch get list watch] routes [] [] [create delete deletecollection get list patch update watch get list watch] templateconfigs [] [] [create delete deletecollection get list patch update watch get list watch] templateinstances [] [] [create delete deletecollection get list patch update watch get list watch] templates [] [] [create delete deletecollection get list patch update watch get list watch] deploymentconfigs.apps.openshift.io/scale [] [] [create delete deletecollection get list patch update watch get list watch] deploymentconfigs.apps.openshift.io [] [] [create delete deletecollection get list patch update watch get list watch] buildconfigs.build.openshift.io/webhooks [] [] [create delete deletecollection get list patch update watch get list watch] buildconfigs.build.openshift.io [] [] [create delete deletecollection get list patch update watch get list watch] buildlogs.build.openshift.io [] [] [create delete deletecollection get list patch update watch get list watch] imagestreamimages.image.openshift.io [] [] [create delete deletecollection get list patch update watch get list watch] imagestreammappings.image.openshift.io [] [] [create delete deletecollection get list patch update watch get list watch] imagestreamtags.image.openshift.io [] [] [create delete deletecollection get list patch update watch get list watch] routes.route.openshift.io [] [] [create delete deletecollection get list patch update watch get list watch] processedtemplates.template.openshift.io [] [] [create delete 
deletecollection get list patch update watch get list watch] templateconfigs.template.openshift.io [] [] [create delete deletecollection get list patch update watch get list watch] templateinstances.template.openshift.io [] [] [create delete deletecollection get list patch update watch get list watch] templates.template.openshift.io [] [] [create delete deletecollection get list patch update watch get list watch] serviceaccounts [] [] [create delete deletecollection get list patch update watch impersonate create delete deletecollection patch update get list watch] imagestreams/secrets [] [] [create delete deletecollection get list patch update watch] rolebindings [] [] [create delete deletecollection get list patch update watch] roles [] [] [create delete deletecollection get list patch update watch] rolebindings.authorization.openshift.io [] [] [create delete deletecollection get list patch update watch] roles.authorization.openshift.io [] [] [create delete deletecollection get list patch update watch] imagestreams.image.openshift.io/secrets [] [] [create delete deletecollection get list patch update watch] rolebindings.rbac.authorization.k8s.io [] [] [create delete deletecollection get list patch update watch] roles.rbac.authorization.k8s.io [] [] [create delete deletecollection get list patch update watch] networkpolicies.extensions [] [] [create delete deletecollection patch update create delete deletecollection get list patch update watch get list watch] networkpolicies.networking.k8s.io [] [] [create delete deletecollection patch update create delete deletecollection get list patch update watch get list watch] configmaps [] [] [create delete deletecollection patch update get list watch] endpoints [] [] [create delete deletecollection patch update get list watch] persistentvolumeclaims [] [] [create delete deletecollection patch update get list watch] pods [] [] [create delete deletecollection patch update get list watch] replicationcontrollers/scale [] [] [create delete deletecollection patch update get list watch] replicationcontrollers [] [] [create delete deletecollection patch update get list watch] services [] [] [create delete deletecollection patch update get list watch] daemonsets.apps [] [] [create delete deletecollection patch update get list watch] deployments.apps/scale [] [] [create delete deletecollection patch update get list watch] deployments.apps [] [] [create delete deletecollection patch update get list watch] replicasets.apps/scale [] [] [create delete deletecollection patch update get list watch] replicasets.apps [] [] [create delete deletecollection patch update get list watch] statefulsets.apps/scale [] [] [create delete deletecollection patch update get list watch] statefulsets.apps [] [] [create delete deletecollection patch update get list watch] horizontalpodautoscalers.autoscaling [] [] [create delete deletecollection patch update get list watch] cronjobs.batch [] [] [create delete deletecollection patch update get list watch] jobs.batch [] [] [create delete deletecollection patch update get list watch] daemonsets.extensions [] [] [create delete deletecollection patch update get list watch] deployments.extensions/scale [] [] [create delete deletecollection patch update get list watch] deployments.extensions [] [] [create delete deletecollection patch update get list watch] ingresses.extensions [] [] [create delete deletecollection patch update get list watch] replicasets.extensions/scale [] [] [create delete deletecollection patch update get list watch] 
replicasets.extensions [] [] [create delete deletecollection patch update get list watch] replicationcontrollers.extensions/scale [] [] [create delete deletecollection patch update get list watch] poddisruptionbudgets.policy [] [] [create delete deletecollection patch update get list watch] deployments.apps/rollback [] [] [create delete deletecollection patch update] deployments.extensions/rollback [] [] [create delete deletecollection patch update] catalogsources.operators.coreos.com [] [] [create update patch delete get list watch] clusterserviceversions.operators.coreos.com [] [] [create update patch delete get list watch] installplans.operators.coreos.com [] [] [create update patch delete get list watch] packagemanifests.operators.coreos.com [] [] [create update patch delete get list watch] subscriptions.operators.coreos.com [] [] [create update patch delete get list watch] buildconfigs/instantiate [] [] [create] buildconfigs/instantiatebinary [] [] [create] builds/clone [] [] [create] deploymentconfigrollbacks [] [] [create] deploymentconfigs/instantiate [] [] [create] deploymentconfigs/rollback [] [] [create] imagestreamimports [] [] [create] localresourceaccessreviews [] [] [create] localsubjectaccessreviews [] [] [create] podsecuritypolicyreviews [] [] [create] podsecuritypolicyselfsubjectreviews [] [] [create] podsecuritypolicysubjectreviews [] [] [create] resourceaccessreviews [] [] [create] routes/custom-host [] [] [create] subjectaccessreviews [] [] [create] subjectrulesreviews [] [] [create] deploymentconfigrollbacks.apps.openshift.io [] [] [create] deploymentconfigs.apps.openshift.io/instantiate [] [] [create] deploymentconfigs.apps.openshift.io/rollback [] [] [create] localsubjectaccessreviews.authorization.k8s.io [] [] [create] localresourceaccessreviews.authorization.openshift.io [] [] [create] localsubjectaccessreviews.authorization.openshift.io [] [] [create] resourceaccessreviews.authorization.openshift.io [] [] [create] subjectaccessreviews.authorization.openshift.io [] [] [create] subjectrulesreviews.authorization.openshift.io [] [] [create] buildconfigs.build.openshift.io/instantiate [] [] [create] buildconfigs.build.openshift.io/instantiatebinary [] [] [create] builds.build.openshift.io/clone [] [] [create] imagestreamimports.image.openshift.io [] [] [create] routes.route.openshift.io/custom-host [] [] [create] podsecuritypolicyreviews.security.openshift.io [] [] [create] podsecuritypolicyselfsubjectreviews.security.openshift.io [] [] [create] podsecuritypolicysubjectreviews.security.openshift.io [] [] [create] jenkins.build.openshift.io [] [] [edit view view admin edit view] builds [] [] [get create delete deletecollection get list patch update watch get list watch] builds.build.openshift.io [] [] [get create delete deletecollection get list patch update watch get list watch] projects [] [] [get delete get delete get patch update] projects.project.openshift.io [] [] [get delete get delete get patch update] namespaces [] [] [get get list watch] pods/attach [] [] [get list watch create delete deletecollection patch update] pods/exec [] [] [get list watch create delete deletecollection patch update] pods/portforward [] [] [get list watch create delete deletecollection patch update] pods/proxy [] [] [get list watch create delete deletecollection patch update] services/proxy [] [] [get list watch create delete deletecollection patch update] routes/status [] [] [get list watch update] routes.route.openshift.io/status [] [] [get list watch update] 
appliedclusterresourcequotas [] [] [get list watch] bindings [] [] [get list watch] builds/log [] [] [get list watch] deploymentconfigs/log [] [] [get list watch] deploymentconfigs/status [] [] [get list watch] events [] [] [get list watch] imagestreams/status [] [] [get list watch] limitranges [] [] [get list watch] namespaces/status [] [] [get list watch] pods/log [] [] [get list watch] pods/status [] [] [get list watch] replicationcontrollers/status [] [] [get list watch] resourcequotas/status [] [] [get list watch] resourcequotas [] [] [get list watch] resourcequotausages [] [] [get list watch] rolebindingrestrictions [] [] [get list watch] deploymentconfigs.apps.openshift.io/log [] [] [get list watch] deploymentconfigs.apps.openshift.io/status [] [] [get list watch] controllerrevisions.apps [] [] [get list watch] rolebindingrestrictions.authorization.openshift.io [] [] [get list watch] builds.build.openshift.io/log [] [] [get list watch] imagestreams.image.openshift.io/status [] [] [get list watch] appliedclusterresourcequotas.quota.openshift.io [] [] [get list watch] imagestreams/layers [] [] [get update get] imagestreams.image.openshift.io/layers [] [] [get update get] builds/details [] [] [update] builds.build.openshift.io/details [] [] [update] Name: basic-user Labels: <none> Annotations: openshift.io/description: A user that can get basic information about projects. rbac.authorization.kubernetes.io/autoupdate: true PolicyRule: Resources Non-Resource URLs Resource Names Verbs --------- ----------------- -------------- ----- selfsubjectrulesreviews [] [] [create] selfsubjectaccessreviews.authorization.k8s.io [] [] [create] selfsubjectrulesreviews.authorization.openshift.io [] [] [create] clusterroles.rbac.authorization.k8s.io [] [] [get list watch] clusterroles [] [] [get list] clusterroles.authorization.openshift.io [] [] [get list] storageclasses.storage.k8s.io [] [] [get list] users [] [~] [get] users.user.openshift.io [] [~] [get] projects [] [] [list watch] projects.project.openshift.io [] [] [list watch] projectrequests [] [] [list] projectrequests.project.openshift.io [] [] [list] Name: cluster-admin Labels: kubernetes.io/bootstrapping=rbac-defaults Annotations: rbac.authorization.kubernetes.io/autoupdate: true PolicyRule: Resources Non-Resource URLs Resource Names Verbs --------- ----------------- -------------- ----- *.* [] [] [*] [*] [] [*] ... 
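Tip The full listing is long. To limit the output to a single cluster role, pass its name to the same command. For example, to view only the rules in the admin default cluster role: USD oc describe clusterrole.rbac admin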
To view the current set of cluster role bindings, which shows the users and groups that are bound to various roles: USD oc describe clusterrolebinding.rbac Example output Name: alertmanager-main Labels: <none> Annotations: <none> Role: Kind: ClusterRole Name: alertmanager-main Subjects: Kind Name Namespace ---- ---- --------- ServiceAccount alertmanager-main openshift-monitoring Name: basic-users Labels: <none> Annotations: rbac.authorization.kubernetes.io/autoupdate: true Role: Kind: ClusterRole Name: basic-user Subjects: Kind Name Namespace ---- ---- --------- Group system:authenticated Name: cloud-credential-operator-rolebinding Labels: <none> Annotations: <none> Role: Kind: ClusterRole Name: cloud-credential-operator-role Subjects: Kind Name Namespace ---- ---- --------- ServiceAccount default openshift-cloud-credential-operator Name: cluster-admin Labels: kubernetes.io/bootstrapping=rbac-defaults Annotations: rbac.authorization.kubernetes.io/autoupdate: true Role: Kind: ClusterRole Name: cluster-admin Subjects: Kind Name Namespace ---- ---- --------- Group system:masters Name: cluster-admins Labels: <none> Annotations: rbac.authorization.kubernetes.io/autoupdate: true Role: Kind: ClusterRole Name: cluster-admin Subjects: Kind Name Namespace ---- ---- --------- Group system:cluster-admins User system:admin Name: cluster-api-manager-rolebinding Labels: <none> Annotations: <none> Role: Kind: ClusterRole Name: cluster-api-manager-role Subjects: Kind Name Namespace ---- ---- --------- ServiceAccount default openshift-machine-api ... 11.2.5. Viewing local roles and bindings You can use the oc CLI to view local roles and bindings by using the oc describe command. Prerequisites Install the oc CLI. Obtain permission to view the local roles and bindings: Users with the cluster-admin default cluster role bound cluster-wide can perform any action on any resource, including viewing local roles and bindings. Users with the admin default cluster role bound locally can view and manage roles and bindings in that project. Procedure To view the current set of local role bindings, which show the users and groups that are bound to various roles for the current project: USD oc describe rolebinding.rbac To view the local role bindings for a different project, add the -n flag to the command: USD oc describe rolebinding.rbac -n joe-project Example output Name: admin Labels: <none> Annotations: <none> Role: Kind: ClusterRole Name: admin Subjects: Kind Name Namespace ---- ---- --------- User kube:admin Name: system:deployers Labels: <none> Annotations: openshift.io/description: Allows deploymentconfigs in this namespace to rollout pods in this namespace. It is auto-managed by a controller; remove subjects to disa... Role: Kind: ClusterRole Name: system:deployer Subjects: Kind Name Namespace ---- ---- --------- ServiceAccount deployer joe-project Name: system:image-builders Labels: <none> Annotations: openshift.io/description: Allows builds in this namespace to push images to this namespace. It is auto-managed by a controller; remove subjects to disable. Role: Kind: ClusterRole Name: system:image-builder Subjects: Kind Name Namespace ---- ---- --------- ServiceAccount builder joe-project Name: system:image-pullers Labels: <none> Annotations: openshift.io/description: Allows all pods in this namespace to pull images from this namespace. It is auto-managed by a controller; remove subjects to disable. 
Role: Kind: ClusterRole Name: system:image-puller Subjects: Kind Name Namespace ---- ---- --------- Group system:serviceaccounts:joe-project 11.2.6. Adding roles to users You can use the oc adm administrator CLI to manage the roles and bindings. Binding, or adding, a role to users or groups gives the user or group the access that is granted by the role. You can add and remove roles to and from users and groups using oc adm policy commands. You can bind any of the default cluster roles to local users or groups in your project. Procedure Add a role to a user in a specific project: USD oc adm policy add-role-to-user <role> <user> -n <project> For example, you can add the admin role to the alice user in the joe project by running: USD oc adm policy add-role-to-user admin alice -n joe Tip You can alternatively apply the following YAML to add the role to the user: apiVersion: rbac.authorization.k8s.io/v1 kind: RoleBinding metadata: name: admin-0 namespace: joe roleRef: apiGroup: rbac.authorization.k8s.io kind: ClusterRole name: admin subjects: - apiGroup: rbac.authorization.k8s.io kind: User name: alice View the local role bindings and verify the addition in the output: USD oc describe rolebinding.rbac -n <project> For example, to view the local role bindings for the joe project: USD oc describe rolebinding.rbac -n joe Example output Name: admin Labels: <none> Annotations: <none> Role: Kind: ClusterRole Name: admin Subjects: Kind Name Namespace ---- ---- --------- User kube:admin Name: admin-0 Labels: <none> Annotations: <none> Role: Kind: ClusterRole Name: admin Subjects: Kind Name Namespace ---- ---- --------- User alice 1 Name: system:deployers Labels: <none> Annotations: openshift.io/description: Allows deploymentconfigs in this namespace to rollout pods in this namespace. It is auto-managed by a controller; remove subjects to disa... Role: Kind: ClusterRole Name: system:deployer Subjects: Kind Name Namespace ---- ---- --------- ServiceAccount deployer joe Name: system:image-builders Labels: <none> Annotations: openshift.io/description: Allows builds in this namespace to push images to this namespace. It is auto-managed by a controller; remove subjects to disable. Role: Kind: ClusterRole Name: system:image-builder Subjects: Kind Name Namespace ---- ---- --------- ServiceAccount builder joe Name: system:image-pullers Labels: <none> Annotations: openshift.io/description: Allows all pods in this namespace to pull images from this namespace. It is auto-managed by a controller; remove subjects to disable. Role: Kind: ClusterRole Name: system:image-puller Subjects: Kind Name Namespace ---- ---- --------- Group system:serviceaccounts:joe 1 The alice user has been added to the admin-0 RoleBinding. 11.2.7. Creating a local role You can create a local role for a project and then bind it to a user. Procedure To create a local role for a project, run the following command: USD oc create role <name> --verb=<verb> --resource=<resource> -n <project> In this command, specify: <name> , the local role's name <verb> , a comma-separated list of the verbs to apply to the role <resource> , the resources that the role applies to <project> , the project name For example, to create a local role that allows a user to view pods in the blue project, run the following command: USD oc create role podview --verb=get --resource=pod -n blue To bind the new role to a user, run the following command: USD oc adm policy add-role-to-user podview user2 --role-namespace=blue -n blue 11.2.8.
Creating a cluster role You can create a cluster role. Procedure To create a cluster role, run the following command: USD oc create clusterrole <name> --verb=<verb> --resource=<resource> In this command, specify: <name> , the cluster role's name <verb> , a comma-separated list of the verbs to apply to the role <resource> , the resources that the role applies to For example, to create a cluster role that allows a user to view pods, run the following command: USD oc create clusterrole podviewonly --verb=get --resource=pod 11.2.9. Local role binding commands When you manage the roles associated with a user or group for local role bindings by using the following operations, you can specify a project with the -n flag. If you do not specify a project, the current project is used. You can use the following commands for local RBAC management. Table 11.1. Local role binding operations Command Description USD oc adm policy who-can <verb> <resource> Indicates which users can perform an action on a resource. USD oc adm policy add-role-to-user <role> <username> Binds a specified role to specified users in the current project. USD oc adm policy remove-role-from-user <role> <username> Removes a given role from specified users in the current project. USD oc adm policy remove-user <username> Removes specified users and all of their roles in the current project. USD oc adm policy add-role-to-group <role> <groupname> Binds a given role to specified groups in the current project. USD oc adm policy remove-role-from-group <role> <groupname> Removes a given role from specified groups in the current project. USD oc adm policy remove-group <groupname> Removes specified groups and all of their roles in the current project. 11.2.10. Cluster role binding commands You can also manage cluster role bindings using the following operations. The -n flag is not used for these operations because cluster role bindings use non-namespaced resources. Table 11.2. Cluster role binding operations Command Description USD oc adm policy add-cluster-role-to-user <role> <username> Binds a given role to specified users for all projects in the cluster. USD oc adm policy remove-cluster-role-from-user <role> <username> Removes a given role from specified users for all projects in the cluster. USD oc adm policy add-cluster-role-to-group <role> <groupname> Binds a given role to specified groups for all projects in the cluster. USD oc adm policy remove-cluster-role-from-group <role> <groupname> Removes a given role from specified groups for all projects in the cluster. 11.2.11. Creating a cluster admin The cluster-admin role is required to perform administrator-level tasks on the OpenShift Container Platform cluster, such as modifying cluster resources. Prerequisites You must have created a user to define as the cluster admin. Procedure Define the user as a cluster admin: USD oc adm policy add-cluster-role-to-user cluster-admin <user> 11.3. The kubeadmin user OpenShift Container Platform creates a cluster administrator, kubeadmin , after the installation process completes. This user has the cluster-admin role automatically applied and is treated as the root user for the cluster. The password is dynamically generated and unique to your OpenShift Container Platform environment. After installation completes, the password is provided in the installation program's output. For example: INFO Install complete! INFO Run 'export KUBECONFIG=<your working directory>/auth/kubeconfig' to manage the cluster with 'oc', the OpenShift CLI.
INFO The cluster is ready when 'oc login -u kubeadmin -p <provided>' succeeds (wait a few minutes). INFO Access the OpenShift web-console here: https://console-openshift-console.apps.demo1.openshift4-beta-abcorp.com INFO Login to the console with user: kubeadmin, password: <provided> 11.3.1. Removing the kubeadmin user After you define an identity provider and create a new cluster-admin user, you can remove the kubeadmin to improve cluster security. Warning If you follow this procedure before another user is a cluster-admin , then OpenShift Container Platform must be reinstalled. It is not possible to undo this command. Prerequisites You must have configured at least one identity provider. You must have added the cluster-admin role to a user. You must be logged in as an administrator. Procedure Remove the kubeadmin secrets: USD oc delete secrets kubeadmin -n kube-system 11.4. Populating OperatorHub from mirrored Operator catalogs If you mirrored Operator catalogs for use with disconnected clusters, you can populate OperatorHub with the Operators from your mirrored catalogs. You can use the generated manifests from the mirroring process to create the required ImageContentSourcePolicy and CatalogSource objects. 11.4.1. Prerequisites Mirroring Operator catalogs for use with disconnected clusters 11.4.1.1. Creating the ImageContentSourcePolicy object After mirroring Operator catalog content to your mirror registry, create the required ImageContentSourcePolicy (ICSP) object. The ICSP object configures nodes to translate between the image references stored in Operator manifests and the mirrored registry. Procedure On a host with access to the disconnected cluster, create the ICSP by running the following command to specify the imageContentSourcePolicy.yaml file in your manifests directory: USD oc create -f <path/to/manifests/dir>/imageContentSourcePolicy.yaml where <path/to/manifests/dir> is the path to the manifests directory for your mirrored content. You can now create a CatalogSource object to reference your mirrored index image and Operator content. 11.4.1.2. Adding a catalog source to a cluster Adding a catalog source to an OpenShift Container Platform cluster enables the discovery and installation of Operators for users. Cluster administrators can create a CatalogSource object that references an index image. OperatorHub uses catalog sources to populate the user interface. Tip Alternatively, you can use the web console to manage catalog sources. From the Administration Cluster Settings Configuration OperatorHub page, click the Sources tab, where you can create, update, delete, disable, and enable individual sources. Prerequisites You built and pushed an index image to a registry. You have access to the cluster as a user with the cluster-admin role. Procedure Create a CatalogSource object that references your index image. If you used the oc adm catalog mirror command to mirror your catalog to a target registry, you can use the generated catalogSource.yaml file in your manifests directory as a starting point. 
Modify the following to your specifications and save it as a catalogSource.yaml file: apiVersion: operators.coreos.com/v1alpha1 kind: CatalogSource metadata: name: my-operator-catalog 1 namespace: openshift-marketplace 2 spec: sourceType: grpc grpcPodConfig: securityContextConfig: <security_mode> 3 image: <registry>/<namespace>/redhat-operator-index:v4.13 4 displayName: My Operator Catalog publisher: <publisher_name> 5 updateStrategy: registryPoll: 6 interval: 30m 1 If you mirrored content to local files before uploading to a registry, remove any backslash ( / ) characters from the metadata.name field to avoid an "invalid resource name" error when you create the object. 2 If you want the catalog source to be available globally to users in all namespaces, specify the openshift-marketplace namespace. Otherwise, you can specify a different namespace for the catalog to be scoped and available only for that namespace. 3 Specify the value of legacy or restricted . If the field is not set, the default value is legacy . In a future OpenShift Container Platform release, it is planned that the default value will be restricted . If your catalog cannot run with restricted permissions, it is recommended that you manually set this field to legacy . 4 Specify your index image. If you specify a tag after the image name, for example :v4.13 , the catalog source pod uses an image pull policy of Always , meaning the pod always pulls the image prior to starting the container. If you specify a digest, for example @sha256:<id> , the image pull policy is IfNotPresent , meaning the pod pulls the image only if it does not already exist on the node. 5 Specify your name or an organization name publishing the catalog. 6 Catalog sources can automatically check for new versions to keep up to date. Use the file to create the CatalogSource object: USD oc apply -f catalogSource.yaml Verify the following resources are created successfully. Check the pods: USD oc get pods -n openshift-marketplace Example output NAME READY STATUS RESTARTS AGE my-operator-catalog-6njx6 1/1 Running 0 28s marketplace-operator-d9f549946-96sgr 1/1 Running 0 26h Check the catalog source: USD oc get catalogsource -n openshift-marketplace Example output NAME DISPLAY TYPE PUBLISHER AGE my-operator-catalog My Operator Catalog grpc 5s Check the package manifest: USD oc get packagemanifest -n openshift-marketplace Example output NAME CATALOG AGE jaeger-product My Operator Catalog 93s You can now install the Operators from the OperatorHub page on your OpenShift Container Platform web console. Additional resources Accessing images for Operators from private registries Image template for custom catalog sources Image pull policy 11.5. About Operator installation with OperatorHub OperatorHub is a user interface for discovering Operators; it works in conjunction with Operator Lifecycle Manager (OLM), which installs and manages Operators on a cluster. As a cluster administrator, you can install an Operator from OperatorHub by using the OpenShift Container Platform web console or CLI. Subscribing an Operator to one or more namespaces makes the Operator available to developers on your cluster. During installation, you must determine the following initial settings for the Operator: Installation Mode Choose All namespaces on the cluster (default) to have the Operator installed on all namespaces or choose individual namespaces, if available, to only install the Operator on selected namespaces. This example chooses All namespaces... 
to make the Operator available to all users and projects. Update Channel If an Operator is available through multiple channels, you can choose which channel you want to subscribe to. For example, to deploy from the stable channel, if available, select it from the list. Approval Strategy You can choose automatic or manual updates. If you choose automatic updates for an installed Operator, when a new version of that Operator is available in the selected channel, Operator Lifecycle Manager (OLM) automatically upgrades the running instance of your Operator without human intervention. If you select manual updates, when a newer version of an Operator is available, OLM creates an update request. As a cluster administrator, you must then manually approve that update request to have the Operator updated to the new version. 11.5.1. Installing from OperatorHub using the web console You can install and subscribe to an Operator from OperatorHub by using the OpenShift Container Platform web console. Prerequisites Access to an OpenShift Container Platform cluster using an account with cluster-admin permissions. Procedure Navigate in the web console to the Operators OperatorHub page. Scroll or type a keyword into the Filter by keyword box to find the Operator you want. For example, type jaeger to find the Jaeger Operator. You can also filter options by Infrastructure Features . For example, select Disconnected if you want to see Operators that work in disconnected environments, also known as restricted network environments. Select the Operator to display additional information. Note Choosing a Community Operator warns that Red Hat does not certify Community Operators; you must acknowledge the warning before continuing. Read the information about the Operator and click Install . On the Install Operator page: Select one of the following: All namespaces on the cluster (default) installs the Operator in the default openshift-operators namespace to watch and be made available to all namespaces in the cluster. This option is not always available. A specific namespace on the cluster allows you to choose a specific, single namespace in which to install the Operator. The Operator will only watch and be made available for use in this single namespace. Select an Update channel (if more than one is available). Select Automatic or Manual approval strategy, as described earlier. Click Install to make the Operator available to the selected namespaces on this OpenShift Container Platform cluster. If you selected a Manual approval strategy, the upgrade status of the subscription remains Upgrading until you review and approve the install plan. After approving on the Install Plan page, the subscription upgrade status moves to Up to date . If you selected an Automatic approval strategy, the upgrade status should resolve to Up to date without intervention. After the upgrade status of the subscription is Up to date , select Operators Installed Operators to verify that the cluster service version (CSV) of the installed Operator eventually shows up. The Status should ultimately resolve to InstallSucceeded in the relevant namespace. Note For the All namespaces... installation mode, the status resolves to InstallSucceeded in the openshift-operators namespace, but the status is Copied if you check in other namespaces. If it does not: Check the logs in any pods in the openshift-operators project (or other relevant namespace if A specific namespace... 
installation mode was selected) on the Workloads Pods page that are reporting issues to troubleshoot further. 11.5.2. Installing from OperatorHub using the CLI Instead of using the OpenShift Container Platform web console, you can install an Operator from OperatorHub by using the CLI. Use the oc command to create or update a Subscription object. Prerequisites Access to an OpenShift Container Platform cluster using an account with cluster-admin permissions. You have installed the OpenShift CLI ( oc ). Procedure View the list of Operators available to the cluster from OperatorHub: USD oc get packagemanifests -n openshift-marketplace Example output NAME CATALOG AGE 3scale-operator Red Hat Operators 91m advanced-cluster-management Red Hat Operators 91m amq7-cert-manager Red Hat Operators 91m ... couchbase-enterprise-certified Certified Operators 91m crunchy-postgres-operator Certified Operators 91m mongodb-enterprise Certified Operators 91m ... etcd Community Operators 91m jaeger Community Operators 91m kubefed Community Operators 91m ... Note the catalog for your desired Operator. Inspect your desired Operator to verify its supported install modes and available channels: USD oc describe packagemanifests <operator_name> -n openshift-marketplace An Operator group, defined by an OperatorGroup object, selects target namespaces in which to generate required RBAC access for all Operators in the same namespace as the Operator group. The namespace to which you subscribe the Operator must have an Operator group that matches the install mode of the Operator, either the AllNamespaces or SingleNamespace mode. If the Operator you intend to install uses the AllNamespaces mode, the openshift-operators namespace already has the appropriate global-operators Operator group in place. However, if the Operator uses the SingleNamespace mode and you do not already have an appropriate Operator group in place, you must create one. Note The web console version of this procedure handles the creation of the OperatorGroup and Subscription objects automatically behind the scenes for you when choosing SingleNamespace mode. You can only have one Operator group per namespace. For more information, see "Operator groups". Create an OperatorGroup object YAML file, for example operatorgroup.yaml : Example OperatorGroup object apiVersion: operators.coreos.com/v1 kind: OperatorGroup metadata: name: <operatorgroup_name> namespace: <namespace> spec: targetNamespaces: - <namespace> Warning Operator Lifecycle Manager (OLM) creates the following cluster roles for each Operator group: <operatorgroup_name>-admin <operatorgroup_name>-edit <operatorgroup_name>-view When you manually create an Operator group, you must specify a unique name that does not conflict with the existing cluster roles or other Operator groups on the cluster. 
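Note As an optional check before you create the object, you can list any Operator groups that already exist in the target namespace, for example: USD oc get operatorgroups -n <namespace>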
Create the OperatorGroup object: USD oc apply -f operatorgroup.yaml Create a Subscription object YAML file to subscribe a namespace to an Operator, for example sub.yaml : Example Subscription object apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: <subscription_name> namespace: openshift-operators 1 spec: channel: <channel_name> 2 name: <operator_name> 3 source: redhat-operators 4 sourceNamespace: openshift-marketplace 5 config: env: 6 - name: ARGS value: "-v=10" envFrom: 7 - secretRef: name: license-secret volumes: 8 - name: <volume_name> configMap: name: <configmap_name> volumeMounts: 9 - mountPath: <directory_name> name: <volume_name> tolerations: 10 - operator: "Exists" resources: 11 requests: memory: "64Mi" cpu: "250m" limits: memory: "128Mi" cpu: "500m" nodeSelector: 12 foo: bar 1 For default AllNamespaces install mode usage, specify the openshift-operators namespace. Alternatively, you can specify a custom global namespace, if you have created one. Otherwise, specify the relevant single namespace for SingleNamespace install mode usage. 2 Name of the channel to subscribe to. 3 Name of the Operator to subscribe to. 4 Name of the catalog source that provides the Operator. 5 Namespace of the catalog source. Use openshift-marketplace for the default OperatorHub catalog sources. 6 The env parameter defines a list of Environment Variables that must exist in all containers in the pod created by OLM. 7 The envFrom parameter defines a list of sources to populate Environment Variables in the container. 8 The volumes parameter defines a list of Volumes that must exist on the pod created by OLM. 9 The volumeMounts parameter defines a list of VolumeMounts that must exist in all containers in the pod created by OLM. If a volumeMount references a volume that does not exist, OLM fails to deploy the Operator. 10 The tolerations parameter defines a list of Tolerations for the pod created by OLM. 11 The resources parameter defines resource constraints for all the containers in the pod created by OLM. 12 The nodeSelector parameter defines a NodeSelector for the pod created by OLM. Create the Subscription object: USD oc apply -f sub.yaml At this point, OLM is now aware of the selected Operator. A cluster service version (CSV) for the Operator should appear in the target namespace, and APIs provided by the Operator should be available for creation. Additional resources About OperatorGroups | [
"apiVersion: config.openshift.io/v1 kind: OAuth metadata: name: cluster spec: identityProviders: - name: my_identity_provider 1 mappingMethod: claim 2 type: HTPasswd htpasswd: fileData: name: htpass-secret 3",
"oc describe clusterrole.rbac",
"Name: admin Labels: kubernetes.io/bootstrapping=rbac-defaults Annotations: rbac.authorization.kubernetes.io/autoupdate: true PolicyRule: Resources Non-Resource URLs Resource Names Verbs --------- ----------------- -------------- ----- .packages.apps.redhat.com [] [] [* create update patch delete get list watch] imagestreams [] [] [create delete deletecollection get list patch update watch create get list watch] imagestreams.image.openshift.io [] [] [create delete deletecollection get list patch update watch create get list watch] secrets [] [] [create delete deletecollection get list patch update watch get list watch create delete deletecollection patch update] buildconfigs/webhooks [] [] [create delete deletecollection get list patch update watch get list watch] buildconfigs [] [] [create delete deletecollection get list patch update watch get list watch] buildlogs [] [] [create delete deletecollection get list patch update watch get list watch] deploymentconfigs/scale [] [] [create delete deletecollection get list patch update watch get list watch] deploymentconfigs [] [] [create delete deletecollection get list patch update watch get list watch] imagestreamimages [] [] [create delete deletecollection get list patch update watch get list watch] imagestreammappings [] [] [create delete deletecollection get list patch update watch get list watch] imagestreamtags [] [] [create delete deletecollection get list patch update watch get list watch] processedtemplates [] [] [create delete deletecollection get list patch update watch get list watch] routes [] [] [create delete deletecollection get list patch update watch get list watch] templateconfigs [] [] [create delete deletecollection get list patch update watch get list watch] templateinstances [] [] [create delete deletecollection get list patch update watch get list watch] templates [] [] [create delete deletecollection get list patch update watch get list watch] deploymentconfigs.apps.openshift.io/scale [] [] [create delete deletecollection get list patch update watch get list watch] deploymentconfigs.apps.openshift.io [] [] [create delete deletecollection get list patch update watch get list watch] buildconfigs.build.openshift.io/webhooks [] [] [create delete deletecollection get list patch update watch get list watch] buildconfigs.build.openshift.io [] [] [create delete deletecollection get list patch update watch get list watch] buildlogs.build.openshift.io [] [] [create delete deletecollection get list patch update watch get list watch] imagestreamimages.image.openshift.io [] [] [create delete deletecollection get list patch update watch get list watch] imagestreammappings.image.openshift.io [] [] [create delete deletecollection get list patch update watch get list watch] imagestreamtags.image.openshift.io [] [] [create delete deletecollection get list patch update watch get list watch] routes.route.openshift.io [] [] [create delete deletecollection get list patch update watch get list watch] processedtemplates.template.openshift.io [] [] [create delete deletecollection get list patch update watch get list watch] templateconfigs.template.openshift.io [] [] [create delete deletecollection get list patch update watch get list watch] templateinstances.template.openshift.io [] [] [create delete deletecollection get list patch update watch get list watch] templates.template.openshift.io [] [] [create delete deletecollection get list patch update watch get list watch] serviceaccounts [] [] [create delete deletecollection get list patch 
update watch impersonate create delete deletecollection patch update get list watch] imagestreams/secrets [] [] [create delete deletecollection get list patch update watch] rolebindings [] [] [create delete deletecollection get list patch update watch] roles [] [] [create delete deletecollection get list patch update watch] rolebindings.authorization.openshift.io [] [] [create delete deletecollection get list patch update watch] roles.authorization.openshift.io [] [] [create delete deletecollection get list patch update watch] imagestreams.image.openshift.io/secrets [] [] [create delete deletecollection get list patch update watch] rolebindings.rbac.authorization.k8s.io [] [] [create delete deletecollection get list patch update watch] roles.rbac.authorization.k8s.io [] [] [create delete deletecollection get list patch update watch] networkpolicies.extensions [] [] [create delete deletecollection patch update create delete deletecollection get list patch update watch get list watch] networkpolicies.networking.k8s.io [] [] [create delete deletecollection patch update create delete deletecollection get list patch update watch get list watch] configmaps [] [] [create delete deletecollection patch update get list watch] endpoints [] [] [create delete deletecollection patch update get list watch] persistentvolumeclaims [] [] [create delete deletecollection patch update get list watch] pods [] [] [create delete deletecollection patch update get list watch] replicationcontrollers/scale [] [] [create delete deletecollection patch update get list watch] replicationcontrollers [] [] [create delete deletecollection patch update get list watch] services [] [] [create delete deletecollection patch update get list watch] daemonsets.apps [] [] [create delete deletecollection patch update get list watch] deployments.apps/scale [] [] [create delete deletecollection patch update get list watch] deployments.apps [] [] [create delete deletecollection patch update get list watch] replicasets.apps/scale [] [] [create delete deletecollection patch update get list watch] replicasets.apps [] [] [create delete deletecollection patch update get list watch] statefulsets.apps/scale [] [] [create delete deletecollection patch update get list watch] statefulsets.apps [] [] [create delete deletecollection patch update get list watch] horizontalpodautoscalers.autoscaling [] [] [create delete deletecollection patch update get list watch] cronjobs.batch [] [] [create delete deletecollection patch update get list watch] jobs.batch [] [] [create delete deletecollection patch update get list watch] daemonsets.extensions [] [] [create delete deletecollection patch update get list watch] deployments.extensions/scale [] [] [create delete deletecollection patch update get list watch] deployments.extensions [] [] [create delete deletecollection patch update get list watch] ingresses.extensions [] [] [create delete deletecollection patch update get list watch] replicasets.extensions/scale [] [] [create delete deletecollection patch update get list watch] replicasets.extensions [] [] [create delete deletecollection patch update get list watch] replicationcontrollers.extensions/scale [] [] [create delete deletecollection patch update get list watch] poddisruptionbudgets.policy [] [] [create delete deletecollection patch update get list watch] deployments.apps/rollback [] [] [create delete deletecollection patch update] deployments.extensions/rollback [] [] [create delete deletecollection patch update] 
catalogsources.operators.coreos.com [] [] [create update patch delete get list watch] clusterserviceversions.operators.coreos.com [] [] [create update patch delete get list watch] installplans.operators.coreos.com [] [] [create update patch delete get list watch] packagemanifests.operators.coreos.com [] [] [create update patch delete get list watch] subscriptions.operators.coreos.com [] [] [create update patch delete get list watch] buildconfigs/instantiate [] [] [create] buildconfigs/instantiatebinary [] [] [create] builds/clone [] [] [create] deploymentconfigrollbacks [] [] [create] deploymentconfigs/instantiate [] [] [create] deploymentconfigs/rollback [] [] [create] imagestreamimports [] [] [create] localresourceaccessreviews [] [] [create] localsubjectaccessreviews [] [] [create] podsecuritypolicyreviews [] [] [create] podsecuritypolicyselfsubjectreviews [] [] [create] podsecuritypolicysubjectreviews [] [] [create] resourceaccessreviews [] [] [create] routes/custom-host [] [] [create] subjectaccessreviews [] [] [create] subjectrulesreviews [] [] [create] deploymentconfigrollbacks.apps.openshift.io [] [] [create] deploymentconfigs.apps.openshift.io/instantiate [] [] [create] deploymentconfigs.apps.openshift.io/rollback [] [] [create] localsubjectaccessreviews.authorization.k8s.io [] [] [create] localresourceaccessreviews.authorization.openshift.io [] [] [create] localsubjectaccessreviews.authorization.openshift.io [] [] [create] resourceaccessreviews.authorization.openshift.io [] [] [create] subjectaccessreviews.authorization.openshift.io [] [] [create] subjectrulesreviews.authorization.openshift.io [] [] [create] buildconfigs.build.openshift.io/instantiate [] [] [create] buildconfigs.build.openshift.io/instantiatebinary [] [] [create] builds.build.openshift.io/clone [] [] [create] imagestreamimports.image.openshift.io [] [] [create] routes.route.openshift.io/custom-host [] [] [create] podsecuritypolicyreviews.security.openshift.io [] [] [create] podsecuritypolicyselfsubjectreviews.security.openshift.io [] [] [create] podsecuritypolicysubjectreviews.security.openshift.io [] [] [create] jenkins.build.openshift.io [] [] [edit view view admin edit view] builds [] [] [get create delete deletecollection get list patch update watch get list watch] builds.build.openshift.io [] [] [get create delete deletecollection get list patch update watch get list watch] projects [] [] [get delete get delete get patch update] projects.project.openshift.io [] [] [get delete get delete get patch update] namespaces [] [] [get get list watch] pods/attach [] [] [get list watch create delete deletecollection patch update] pods/exec [] [] [get list watch create delete deletecollection patch update] pods/portforward [] [] [get list watch create delete deletecollection patch update] pods/proxy [] [] [get list watch create delete deletecollection patch update] services/proxy [] [] [get list watch create delete deletecollection patch update] routes/status [] [] [get list watch update] routes.route.openshift.io/status [] [] [get list watch update] appliedclusterresourcequotas [] [] [get list watch] bindings [] [] [get list watch] builds/log [] [] [get list watch] deploymentconfigs/log [] [] [get list watch] deploymentconfigs/status [] [] [get list watch] events [] [] [get list watch] imagestreams/status [] [] [get list watch] limitranges [] [] [get list watch] namespaces/status [] [] [get list watch] pods/log [] [] [get list watch] pods/status [] [] [get list watch] replicationcontrollers/status [] [] [get list 
watch] resourcequotas/status [] [] [get list watch] resourcequotas [] [] [get list watch] resourcequotausages [] [] [get list watch] rolebindingrestrictions [] [] [get list watch] deploymentconfigs.apps.openshift.io/log [] [] [get list watch] deploymentconfigs.apps.openshift.io/status [] [] [get list watch] controllerrevisions.apps [] [] [get list watch] rolebindingrestrictions.authorization.openshift.io [] [] [get list watch] builds.build.openshift.io/log [] [] [get list watch] imagestreams.image.openshift.io/status [] [] [get list watch] appliedclusterresourcequotas.quota.openshift.io [] [] [get list watch] imagestreams/layers [] [] [get update get] imagestreams.image.openshift.io/layers [] [] [get update get] builds/details [] [] [update] builds.build.openshift.io/details [] [] [update] Name: basic-user Labels: <none> Annotations: openshift.io/description: A user that can get basic information about projects. rbac.authorization.kubernetes.io/autoupdate: true PolicyRule: Resources Non-Resource URLs Resource Names Verbs --------- ----------------- -------------- ----- selfsubjectrulesreviews [] [] [create] selfsubjectaccessreviews.authorization.k8s.io [] [] [create] selfsubjectrulesreviews.authorization.openshift.io [] [] [create] clusterroles.rbac.authorization.k8s.io [] [] [get list watch] clusterroles [] [] [get list] clusterroles.authorization.openshift.io [] [] [get list] storageclasses.storage.k8s.io [] [] [get list] users [] [~] [get] users.user.openshift.io [] [~] [get] projects [] [] [list watch] projects.project.openshift.io [] [] [list watch] projectrequests [] [] [list] projectrequests.project.openshift.io [] [] [list] Name: cluster-admin Labels: kubernetes.io/bootstrapping=rbac-defaults Annotations: rbac.authorization.kubernetes.io/autoupdate: true PolicyRule: Resources Non-Resource URLs Resource Names Verbs --------- ----------------- -------------- ----- *.* [] [] [*] [*] [] [*]",
"oc describe clusterrolebinding.rbac",
"Name: alertmanager-main Labels: <none> Annotations: <none> Role: Kind: ClusterRole Name: alertmanager-main Subjects: Kind Name Namespace ---- ---- --------- ServiceAccount alertmanager-main openshift-monitoring Name: basic-users Labels: <none> Annotations: rbac.authorization.kubernetes.io/autoupdate: true Role: Kind: ClusterRole Name: basic-user Subjects: Kind Name Namespace ---- ---- --------- Group system:authenticated Name: cloud-credential-operator-rolebinding Labels: <none> Annotations: <none> Role: Kind: ClusterRole Name: cloud-credential-operator-role Subjects: Kind Name Namespace ---- ---- --------- ServiceAccount default openshift-cloud-credential-operator Name: cluster-admin Labels: kubernetes.io/bootstrapping=rbac-defaults Annotations: rbac.authorization.kubernetes.io/autoupdate: true Role: Kind: ClusterRole Name: cluster-admin Subjects: Kind Name Namespace ---- ---- --------- Group system:masters Name: cluster-admins Labels: <none> Annotations: rbac.authorization.kubernetes.io/autoupdate: true Role: Kind: ClusterRole Name: cluster-admin Subjects: Kind Name Namespace ---- ---- --------- Group system:cluster-admins User system:admin Name: cluster-api-manager-rolebinding Labels: <none> Annotations: <none> Role: Kind: ClusterRole Name: cluster-api-manager-role Subjects: Kind Name Namespace ---- ---- --------- ServiceAccount default openshift-machine-api",
"oc describe rolebinding.rbac",
"oc describe rolebinding.rbac -n joe-project",
"Name: admin Labels: <none> Annotations: <none> Role: Kind: ClusterRole Name: admin Subjects: Kind Name Namespace ---- ---- --------- User kube:admin Name: system:deployers Labels: <none> Annotations: openshift.io/description: Allows deploymentconfigs in this namespace to rollout pods in this namespace. It is auto-managed by a controller; remove subjects to disa Role: Kind: ClusterRole Name: system:deployer Subjects: Kind Name Namespace ---- ---- --------- ServiceAccount deployer joe-project Name: system:image-builders Labels: <none> Annotations: openshift.io/description: Allows builds in this namespace to push images to this namespace. It is auto-managed by a controller; remove subjects to disable. Role: Kind: ClusterRole Name: system:image-builder Subjects: Kind Name Namespace ---- ---- --------- ServiceAccount builder joe-project Name: system:image-pullers Labels: <none> Annotations: openshift.io/description: Allows all pods in this namespace to pull images from this namespace. It is auto-managed by a controller; remove subjects to disable. Role: Kind: ClusterRole Name: system:image-puller Subjects: Kind Name Namespace ---- ---- --------- Group system:serviceaccounts:joe-project",
"oc adm policy add-role-to-user <role> <user> -n <project>",
"oc adm policy add-role-to-user admin alice -n joe",
"apiVersion: rbac.authorization.k8s.io/v1 kind: RoleBinding metadata: name: admin-0 namespace: joe roleRef: apiGroup: rbac.authorization.k8s.io kind: ClusterRole name: admin subjects: - apiGroup: rbac.authorization.k8s.io kind: User name: alice",
"oc describe rolebinding.rbac -n <project>",
"oc describe rolebinding.rbac -n joe",
"Name: admin Labels: <none> Annotations: <none> Role: Kind: ClusterRole Name: admin Subjects: Kind Name Namespace ---- ---- --------- User kube:admin Name: admin-0 Labels: <none> Annotations: <none> Role: Kind: ClusterRole Name: admin Subjects: Kind Name Namespace ---- ---- --------- User alice 1 Name: system:deployers Labels: <none> Annotations: openshift.io/description: Allows deploymentconfigs in this namespace to rollout pods in this namespace. It is auto-managed by a controller; remove subjects to disa Role: Kind: ClusterRole Name: system:deployer Subjects: Kind Name Namespace ---- ---- --------- ServiceAccount deployer joe Name: system:image-builders Labels: <none> Annotations: openshift.io/description: Allows builds in this namespace to push images to this namespace. It is auto-managed by a controller; remove subjects to disable. Role: Kind: ClusterRole Name: system:image-builder Subjects: Kind Name Namespace ---- ---- --------- ServiceAccount builder joe Name: system:image-pullers Labels: <none> Annotations: openshift.io/description: Allows all pods in this namespace to pull images from this namespace. It is auto-managed by a controller; remove subjects to disable. Role: Kind: ClusterRole Name: system:image-puller Subjects: Kind Name Namespace ---- ---- --------- Group system:serviceaccounts:joe",
"oc create role <name> --verb=<verb> --resource=<resource> -n <project>",
"oc create role podview --verb=get --resource=pod -n blue",
"oc adm policy add-role-to-user podview user2 --role-namespace=blue -n blue",
"oc create clusterrole <name> --verb=<verb> --resource=<resource>",
"oc create clusterrole podviewonly --verb=get --resource=pod",
"oc adm policy add-cluster-role-to-user cluster-admin <user>",
"INFO Install complete! INFO Run 'export KUBECONFIG=<your working directory>/auth/kubeconfig' to manage the cluster with 'oc', the OpenShift CLI. INFO The cluster is ready when 'oc login -u kubeadmin -p <provided>' succeeds (wait a few minutes). INFO Access the OpenShift web-console here: https://console-openshift-console.apps.demo1.openshift4-beta-abcorp.com INFO Login to the console with user: kubeadmin, password: <provided>",
"oc delete secrets kubeadmin -n kube-system",
"oc create -f <path/to/manifests/dir>/imageContentSourcePolicy.yaml",
"apiVersion: operators.coreos.com/v1alpha1 kind: CatalogSource metadata: name: my-operator-catalog 1 namespace: openshift-marketplace 2 spec: sourceType: grpc grpcPodConfig: securityContextConfig: <security_mode> 3 image: <registry>/<namespace>/redhat-operator-index:v4.13 4 displayName: My Operator Catalog publisher: <publisher_name> 5 updateStrategy: registryPoll: 6 interval: 30m",
"oc apply -f catalogSource.yaml",
"oc get pods -n openshift-marketplace",
"NAME READY STATUS RESTARTS AGE my-operator-catalog-6njx6 1/1 Running 0 28s marketplace-operator-d9f549946-96sgr 1/1 Running 0 26h",
"oc get catalogsource -n openshift-marketplace",
"NAME DISPLAY TYPE PUBLISHER AGE my-operator-catalog My Operator Catalog grpc 5s",
"oc get packagemanifest -n openshift-marketplace",
"NAME CATALOG AGE jaeger-product My Operator Catalog 93s",
"oc get packagemanifests -n openshift-marketplace",
"NAME CATALOG AGE 3scale-operator Red Hat Operators 91m advanced-cluster-management Red Hat Operators 91m amq7-cert-manager Red Hat Operators 91m couchbase-enterprise-certified Certified Operators 91m crunchy-postgres-operator Certified Operators 91m mongodb-enterprise Certified Operators 91m etcd Community Operators 91m jaeger Community Operators 91m kubefed Community Operators 91m",
"oc describe packagemanifests <operator_name> -n openshift-marketplace",
"apiVersion: operators.coreos.com/v1 kind: OperatorGroup metadata: name: <operatorgroup_name> namespace: <namespace> spec: targetNamespaces: - <namespace>",
"oc apply -f operatorgroup.yaml",
"apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: <subscription_name> namespace: openshift-operators 1 spec: channel: <channel_name> 2 name: <operator_name> 3 source: redhat-operators 4 sourceNamespace: openshift-marketplace 5 config: env: 6 - name: ARGS value: \"-v=10\" envFrom: 7 - secretRef: name: license-secret volumes: 8 - name: <volume_name> configMap: name: <configmap_name> volumeMounts: 9 - mountPath: <directory_name> name: <volume_name> tolerations: 10 - operator: \"Exists\" resources: 11 requests: memory: \"64Mi\" cpu: \"250m\" limits: memory: \"128Mi\" cpu: \"500m\" nodeSelector: 12 foo: bar",
"oc apply -f sub.yaml"
]
| https://docs.redhat.com/en/documentation/openshift_container_platform/4.13/html/post-installation_configuration/post-install-preparing-for-users |
Making open source more inclusive | Making open source more inclusive Red Hat is committed to replacing problematic language in our code, documentation, and web properties. We are beginning with these four terms: master, slave, blacklist, and whitelist. Because of the enormity of this endeavor, these changes will be implemented gradually over several upcoming releases. For more details, see our CTO Chris Wright's message . | null | https://docs.redhat.com/en/documentation/red_hat_codeready_workspaces/2.15/html/release_notes_and_known_issues/making-open-source-more-inclusive |
7.4. I/O Mode | 7.4. I/O Mode I/O mode options can be configured on a virtual machine during installation with virt-manager or the virt-install command, or on an existing guest by editing the guest XML configuration. Table 7.2. IO mode options Caching Option Description IO=native The default for Red Hat Enterprise Virtualization environments. This mode uses kernel asynchronous I/O with direct I/O options. IO=threads Sets the I/O mode to host user-mode based threads. IO=default Sets the I/O mode to the kernel default. In Red Hat Enterprise Linux 6, the default is IO=threads. In virt-manager , the I/O mode can be specified under Virtual Disk . For information on using virt-manager to change the I/O mode, see Section 3.4, "Virtual Disk Performance Options" To configure the I/O mode in the guest XML, use virsh edit to edit the io setting inside the driver tag, specifying native , threads , or default . For example, to set the I/O mode to threads : <disk type='file' device='disk'> <driver name='qemu' type='raw' io='threads'/> To configure the I/O mode when installing a guest using virt-install , add the io option to the --disk path parameter. For example, to configure io=threads during guest installation: | [
"<disk type='file' device='disk'> <driver name='qemu' type='raw' io='threads'/>",
"virt-install --disk path=/storage/images/USDNAME.img,io=threads,opt2=val2 ."
]
| https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/virtualization_tuning_and_optimization_guide/sect-Virtualization_Tuning_Optimization_Guide-BlockIO-IO_mode |
Part IV. Servers | Part IV. Servers This part discusses how to set up servers normally required for networking. Note To monitor and administer servers through a web browser, see Managing systems using the RHEL 7 web console . | null | https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/networking_guide/part-servers |
Providing feedback on Red Hat documentation | Providing feedback on Red Hat documentation We appreciate your input on our documentation. Do let us know how we can make it better. To give feedback, create a Bugzilla ticket: Go to the Bugzilla website. In the Component section, choose documentation . Fill in the Description field with your suggestion for improvement. Include a link to the relevant part(s) of documentation. Click Submit Bug . | null | https://docs.redhat.com/en/documentation/red_hat_openshift_data_foundation/4.15/html/deploying_multiple_openshift_data_foundation_storage_clusters/providing-feedback-on-red-hat-documentation_rhodf |
Chapter 5. OS/JVM certifications | Chapter 5. OS/JVM certifications This release is supported for use with the following operating system and Java Development Kit (JDK) versions: Operating System Chipset Architecture Java Virtual Machine Red Hat Enterprise Linux 9 x86_64 Red Hat build of OpenJDK 11, Red Hat build of OpenJDK 17, Oracle JDK 11, Oracle JDK 17 Red Hat Enterprise Linux 8 x86_64 Red Hat build of OpenJDK 11, Red Hat build of OpenJDK 17, Oracle JDK 11, Oracle JDK 17 Microsoft Windows 2019 Server x86_64 Red Hat build of OpenJDK 11, Red Hat build of OpenJDK 17, Oracle JDK 11, Oracle JDK 17 Note Red Hat Enterprise Linux 7 and Microsoft Windows 2016 Server are not supported. | null | https://docs.redhat.com/en/documentation/red_hat_jboss_web_server/6.0/html/red_hat_jboss_web_server_6.0_service_pack_1_release_notes/os_jvm |
Chapter 371. XChange Component | Chapter 371. XChange Component Available as of Camel version 2.21 The xchange: component uses the XChange Java library to provide access to 60+ Bitcoin and Altcoin exchanges. It comes with a consistent interface for trading and accessing market data. Camel can get crypto currency market data, query historical data, place market orders and much more. Maven users will need to add the following dependency to their pom.xml for this component: <dependency> <groupId>org.apache.camel</groupId> <artifactId>camel-xchange</artifactId> <version>x.x.x</version> <!-- use the same version as your Camel core version --> </dependency> 371.1. URI format xchange://exchange?options 371.2. Options The XChange component has no options. The XChange endpoint is configured using URI syntax: with the following path and query parameters: 371.2.1. Path Parameters (1 parameter): Name Description Default Type name Required The exchange to connect to String 371.2.2. Query Parameters (5 parameters): Name Description Default Type currency (producer) The currency Currency currencyPair (producer) The currency pair CurrencyPair method (producer) Required The method to execute XChangeMethod service (producer) Required The service to call XChangeService synchronous (advanced) Sets whether synchronous processing should be strictly used, or Camel is allowed to use asynchronous processing (if supported). false boolean 371.3. Spring Boot Auto-Configuration The component supports 2 options, which are listed below. Name Description Default Type camel.component.xchange.enabled Whether to enable auto configuration of the xchange component. This is enabled by default. Boolean camel.component.xchange.resolve-property-placeholders Whether the component should resolve property placeholders on itself when starting. Only properties which are of String type can use property placeholders. true Boolean 371.4. Authentication This component communicates with supported crypto currency exchanges via REST API. Some API requests use simple unauthenticated GET requests. For most of the interesting functionality, however, you need an account with the exchange and API access keys enabled. These API access keys must be guarded tightly, especially when they also allow the withdraw functionality. In that case, anyone who gets hold of your API keys can transfer funds from your account to another address, that is, steal your money. Your API access keys can be stored in an exchange-specific properties file in your SSH directory. For Binance, for example, this would be: ~/.ssh/binance-secret.keys 371.5. Message Headers 371.6. Samples In this sample we find the current Bitcoin market price in USDT: from("direct:ticker").to("xchange:binance?service=market&method=ticker&currencyPair=BTC/USDT") | [
"<dependency> <groupId>org.apache.camel</groupId> <artifactId>camel-xchange</artifactId> <version>x.x.x</version> <!-- use the same version as your Camel core version --> </dependency>",
"xchange://exchange?options",
"xchange:name",
"## This file MUST NEVER be commited to source control. It is therefore added to .gitignore. # apiKey = GuRW0********* secretKey = nKLki************",
"from(\"direct:ticker\").to(\"xchange:binance?service=market&method=ticker¤cyPair=BTC/USDT\")"
]
| https://docs.redhat.com/en/documentation/red_hat_fuse/7.13/html/apache_camel_component_reference/xchange-component |
Chapter 11. Configuring sound in GNOME | Chapter 11. Configuring sound in GNOME You can configure sound volume and other sound options in GNOME. 11.1. Sound configuration tools in GNOME In RHEL 9, the PipeWire sound server handles sound output and input. PipeWire lets programs output the audio using the pipewire daemon. To configure sound, you can use one of the following graphical applications in GNOME: System menu The system menu is located in the top-right screen corner. It enables you only to set the intensity of the sound output or sound input through the sound bar. The sound bar for input sound is available only if you are running an application that is using an internal microphone (built-in audio), such as some teleconference tools. Settings Settings provides other general options to configure sound. Tweaks The Tweaks application enables you to configure only volume over-amplification. Additional resources For more information about PipeWire , see the pipewire man page on your system. 11.2. Accessing sound configuration in Settings This procedure opens the sound configuration screen in the Settings . Launch Settings . You can use one of the approaches described in Launching applications in GNOME . Alternatively, you can also launch it from the system menu by clicking on its icon. In Settings , choose Sound from the left vertical bar. 11.3. Sound options in Settings Through the Sound menu in Settings , you can configure the following sound options: Volume Levels The Volume levels section shows all currently running applications that can process sound, and allows you to amplify or lower the sound of a particular application. Output and Input The Output and Input sections show all built-in audio devices and external audio devices that are currently connected. Alert sound The Alert sound section shows different themes of system audio alerts. The Output section on the sound configuration screen | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/9/html/customizing_the_gnome_desktop_environment/configuring-sound-in-gnome_customizing-the-gnome-desktop-environment |
Providing feedback on Red Hat documentation | Providing feedback on Red Hat documentation If you have a suggestion to improve this documentation, or find an error, you can contact technical support at https://access.redhat.com to open a request. | null | https://docs.redhat.com/en/documentation/red_hat_ansible_automation_platform/2.4/html/red_hat_ansible_security_automation_guide/providing-feedback |
Chapter 6. Troubleshooting alerts and errors in OpenShift Data Foundation | Chapter 6. Troubleshooting alerts and errors in OpenShift Data Foundation 6.1. Resolving alerts and errors Red Hat OpenShift Data Foundation can detect and automatically resolve a number of common failure scenarios. However, some problems require administrator intervention. To know the errors currently firing, check one of the following locations: Observe Alerting Firing option Home Overview Cluster tab Storage Data Foundation Storage System storage system link in the pop up Overview Block and File tab Storage Data Foundation Storage System storage system link in the pop up Overview Object tab Copy the error displayed and search it in the following section to know its severity and resolution: Name : CephMonVersionMismatch Message : There are multiple versions of storage services running. Description : There are {{ $value }} different versions of Ceph Mon components running. Severity : Warning Resolution : Fix Procedure : Inspect the user interface and log, and verify if an update is in progress. If an update is in progress, this alert is temporary. If an update is not in progress, restart the upgrade process. Name : CephOSDVersionMismatch Message : There are multiple versions of storage services running. Description : There are {{ $value }} different versions of Ceph OSD components running. Severity : Warning Resolution : Fix Procedure : Inspect the user interface and log, and verify if an update is in progress. If an update is in progress, this alert is temporary. If an update is not in progress, restart the upgrade process. Name : CephClusterCriticallyFull Message : Storage cluster is critically full and needs immediate expansion Description : Storage cluster utilization has crossed 85%. Severity : Critical Resolution : Fix Procedure : Remove unnecessary data or expand the cluster. Name : CephClusterNearFull Message : Storage cluster is nearing full. Expansion is required. Description : Storage cluster utilization has crossed 75%. Severity : Warning Resolution : Fix Procedure : Remove unnecessary data or expand the cluster. Name : NooBaaBucketErrorState Message : A NooBaa Bucket Is In Error State Description : A NooBaa bucket {{ $labels.bucket_name }} is in error state for more than 6m Severity : Warning Resolution : Workaround Procedure : Finding the error code of an unhealthy bucket Name : NooBaaNamespaceResourceErrorState Message : A NooBaa Namespace Resource Is In Error State Description : A NooBaa namespace resource {{ $labels.namespace_resource_name }} is in error state for more than 5m Severity : Warning Resolution : Fix Procedure : Finding the error code of an unhealthy namespace store resource Name : NooBaaNamespaceBucketErrorState Message : A NooBaa Namespace Bucket Is In Error State Description : A NooBaa namespace bucket {{ $labels.bucket_name }} is in error state for more than 5m Severity : Warning Resolution : Fix Procedure : Finding the error code of an unhealthy bucket Name : CephMdsMissingReplicas Message : Insufficient replicas for storage metadata service. Description : Minimum required replicas for storage metadata service not available. Might affect the working of storage cluster. Severity : Warning Resolution : Contact Red Hat support Procedure : Check for alerts and operator status. If the issue cannot be identified, contact Red Hat support . Name : CephMgrIsAbsent Message : Storage metrics collector service not available anymore.
Description : Ceph Manager has disappeared from Prometheus target discovery. Severity : Critical Resolution : Contact Red Hat support Procedure : Inspect the user interface and log, and verify if an update is in progress. If an update is in progress, this alert is temporary. If an update is not in progress, restart the upgrade process. Once the upgrade is complete, check for alerts and operator status. If the issue persists or cannot be identified, contact Red Hat support . Name : CephNodeDown Message : Storage node {{ $labels.node }} went down Description : Storage node {{ $labels.node }} went down. Please check the node immediately. Severity : Critical Resolution : Contact Red Hat support Procedure : Check which node stopped functioning and its cause. Take appropriate actions to recover the node. If the node cannot be recovered: See Replacing storage nodes for Red Hat OpenShift Data Foundation Contact Red Hat support . Name : CephClusterErrorState Message : Storage cluster is in error state Description : Storage cluster is in error state for more than 10m. Severity : Critical Resolution : Contact Red Hat support Procedure : Check for alerts and operator status. If the issue cannot be identified, download log files and diagnostic information using must-gather . Open a Support Ticket with Red Hat Support with an attachment of the output of must-gather. Name : CephClusterWarningState Message : Storage cluster is in degraded state Description : Storage cluster is in warning state for more than 10m. Severity : Warning Resolution : Contact Red Hat support Procedure : Check for alerts and operator status. If the issue cannot be identified, download log files and diagnostic information using must-gather . Open a Support Ticket with Red Hat Support with an attachment of the output of must-gather. Name : CephDataRecoveryTakingTooLong Message : Data recovery is slow Description : Data recovery has been active for too long. Severity : Warning Resolution : Contact Red Hat support Name : CephOSDDiskNotResponding Message : Disk not responding Description : Disk device {{ $labels.device }} not responding, on host {{ $labels.host }}. Severity : Critical Resolution : Contact Red Hat support Name : CephOSDDiskUnavailable Message : Disk not accessible Description : Disk device {{ $labels.device }} not accessible on host {{ $labels.host }}. Severity : Critical Resolution : Contact Red Hat support Name : CephPGRepairTakingTooLong Message : Self heal problems detected Description : Self heal operations taking too long. Severity : Warning Resolution : Contact Red Hat support Name : CephMonHighNumberOfLeaderChanges Message : Storage Cluster has seen many leader changes recently. Description : 'Ceph Monitor "{{ $labels.job }}": instance {{ $labels.instance }} has seen {{ $value printf "%.2f" }} leader changes per minute recently.' Severity : Warning Resolution : Contact Red Hat support Name : CephMonQuorumAtRisk Message : Storage quorum at risk Description : Storage cluster quorum is low. Severity : Critical Resolution : Contact Red Hat support Name : ClusterObjectStoreState Message : Cluster Object Store is in unhealthy state. Please check Ceph cluster health . Description : Cluster Object Store is in unhealthy state for more than 15s. Please check Ceph cluster health . Severity : Critical Resolution : Contact Red Hat support Procedure : Check the CephObjectStore CR instance. Contact Red Hat support . Name : CephOSDFlapping Message : Storage daemon osd.x has restarted 5 times in the last 5 minutes.
Please check the pod events or Ceph status to find out the cause . Description : Storage OSD restarts more than 5 times in 5 minutes . Severity : Critical Resolution : Contact Red Hat support Name : OdfPoolMirroringImageHealth Message : Mirroring image(s) (PV) in the pool <pool-name> are in Warning state for more than a 1m. Mirroring might not work as expected. Description : Disaster recovery is failing for one or a few applications. Severity : Warning Resolution : Contact Red Hat support Name : OdfMirrorDaemonStatus Message : Mirror daemon is unhealthy . Description : Disaster recovery is failing for the entire cluster. Mirror daemon is in unhealthy status for more than 1m. Mirroring on this cluster is not working as expected. Severity : Critical Resolution : Contact Red Hat support 6.2. Resolving cluster health issues There is a finite set of possible health messages that a Red Hat Ceph Storage cluster can raise that show in the OpenShift Data Foundation user interface. These are defined as health checks which have unique identifiers. The identifier is a terse pseudo-human-readable string that is intended to enable tools to make sense of health checks, and present them in a way that reflects their meaning. Click the health code below for more information and troubleshooting. Health code Description MON_DISK_LOW One or more Ceph Monitors are low on disk space. 6.2.1. MON_DISK_LOW This alert triggers if the available space on the file system storing the monitor database as a percentage, drops below mon_data_avail_warn (default: 15%). This may indicate that some other process or user on the system is filling up the same file system used by the monitor. It may also indicate that the monitor's database is large. Note The paths to the file system differ depending on the deployment of your mons. You can find the path to where the mon is deployed in storagecluster.yaml . Example paths: Mon deployed over PVC path: /var/lib/ceph/mon Mon deployed over hostpath: /var/lib/rook/mon In order to clear up space, view the high usage files in the file system and choose which to delete. To view the files, run: Replace <path-in-the-mon-node> with the path to the file system where mons are deployed. 6.3. Resolving cluster alerts There is a finite set of possible health alerts that a Red Hat Ceph Storage cluster can raise that show in the OpenShift Data Foundation user interface. These are defined as health alerts which have unique identifiers. The identifier is a terse pseudo-human-readable string that is intended to enable tools to make sense of health checks, and present them in a way that reflects their meaning. Click the health alert for more information and troubleshooting. Table 6.1. Types of cluster health alerts Health alert Overview CephClusterCriticallyFull Storage cluster utilization has crossed 80%. CephClusterErrorState Storage cluster is in an error state for more than 10 minutes. CephClusterNearFull Storage cluster is nearing full capacity. Data deletion or cluster expansion is required. CephClusterReadOnly Storage cluster is read-only now and needs immediate data deletion or cluster expansion. CephClusterWarningState Storage cluster is in a warning state for more than 10 mins. CephDataRecoveryTakingTooLong Data recovery has been active for too long. CephMdsMissingReplicas Minimum required replicas for storage metadata service not available. Might affect the working of the storage cluster. CephMgrIsAbsent Ceph Manager has disappeared from Prometheus target discovery. 
CephMgrIsMissingReplicas Ceph manager is missing replicas. This impacts health status reporting and will cause some of the information reported by the ceph status command to be missing or stale. In addition, the Ceph manager is responsible for a manager framework aimed at expanding the existing capabilities of Ceph. CephMonHighNumberOfLeaderChanges The Ceph monitor leader is being changed an unusual number of times. CephMonQuorumAtRisk Storage cluster quorum is low. CephMonQuorumLost The number of monitor pods in the storage cluster is not enough. CephMonVersionMismatch There are different versions of Ceph Mon components running. CephNodeDown A storage node went down. Check the node immediately. The alert should contain the node name. CephOSDCriticallyFull Utilization of back-end Object Storage Device (OSD) has crossed 80%. Free up some space immediately or expand the storage cluster or contact support. CephOSDDiskNotResponding A disk device is not responding on one of the hosts. CephOSDDiskUnavailable A disk device is not accessible on one of the hosts. CephOSDFlapping Ceph storage OSD flapping. CephOSDNearFull One of the OSD storage devices is nearing full. CephOSDSlowOps OSD requests are taking too long to process. CephOSDVersionMismatch There are different versions of Ceph OSD components running. CephPGRepairTakingTooLong Self-healing operations are taking too long. CephPoolQuotaBytesCriticallyExhausted Storage pool quota usage has crossed 90%. CephPoolQuotaBytesNearExhaustion Storage pool quota usage has crossed 70%. PersistentVolumeUsageCritical Persistent Volume Claim usage has exceeded more than 85% of its capacity. PersistentVolumeUsageNearFull Persistent Volume Claim usage has exceeded more than 75% of its capacity. 6.3.1. CephClusterCriticallyFull Meaning Storage cluster utilization has crossed 80% and will become read-only at 85%. Your Ceph cluster will become read-only once utilization crosses 85%. Free up some space or expand the storage cluster immediately. It is common to see alerts related to Object Storage Device (OSD) devices full or near full prior to this alert. Impact High Diagnosis Scaling storage Depending on the type of cluster, you need to add storage devices, nodes, or both. For more information, see the Scaling storage guide . Mitigation Deleting information If it is not possible to scale up the cluster, you need to delete information in order to free up some space. 6.3.2. CephClusterErrorState Meaning This alert reflects that the storage cluster is in ERROR state for an unacceptable amount of time and this impacts the storage availability. Check for other alerts that would have triggered prior to this one and troubleshoot those alerts first. Impact Critical Diagnosis pod status: pending Check for resource issues, pending Persistent Volume Claims (PVCs), node assignment, and kubelet problems: Set MYPOD as the variable for the pod that is identified as the problem pod: <pod_name> Specify the name of the pod that is identified as the problem pod. Look for the resource limitations or pending PVCs. Otherwise, check for the node assignment: pod status: NOT pending, running, but NOT ready Check the readiness probe: pod status: NOT pending, but NOT running Check for application or image issues: Important If a node was assigned, check the kubelet on the node. If the basic health of the running pods, node affinity and resource availability on the nodes are verified, run the Ceph tools to get the status of the storage components.
Mitigation Debugging log information This step is optional. Run the following command to gather the debugging information for the Ceph cluster: 6.3.3. CephClusterNearFull Meaning Storage cluster utilization has crossed 75% and will become read-only at 85%. Free up some space or expand the storage cluster. Impact Critical Diagnosis Scaling storage Depending on the type of cluster, you need to add storage devices, nodes, or both. For more information, see the Scaling storage guide . Mitigation Deleting information If it is not possible to scale up the cluster, you need to delete information in order to free up some space. 6.3.4. CephClusterReadOnly Meaning Storage cluster utilization has crossed 85% and will become read-only now. Free up some space or expand the storage cluster immediately. Impact Critical Diagnosis Scaling storage Depending on the type of cluster, you need to add storage devices, nodes, or both. For more information, see the Scaling storage guide . Mitigation Deleting information If it is not possible to scale up the cluster, you need to delete information in order to free up some space. 6.3.5. CephClusterWarningState Meaning This alert reflects that the storage cluster has been in a warning state for an unacceptable amount of time. While the storage operations will continue to function in this state, it is recommended to fix the errors so that the cluster does not get into an error state impacting operations. Check for other alerts that might have triggered prior to this one and troubleshoot those alerts first. Impact High Diagnosis pod status: pending Check for resource issues, pending Persistent Volume Claims (PVCs), node assignment, and kubelet problems: Set MYPOD as the variable for the pod that is identified as the problem pod: <pod_name> Specify the name of the pod that is identified as the problem pod. Look for the resource limitations or pending PVCs. Otherwise, check for the node assignment: pod status: NOT pending, running, but NOT ready Check the readiness probe: pod status: NOT pending, but NOT running Check for application or image issues: Important If a node was assigned, check the kubelet on the node. Mitigation Debugging log information This step is optional. Run the following command to gather the debugging information for the Ceph cluster: 6.3.6. CephDataRecoveryTakingTooLong Meaning Data recovery is slow. Check whether all the Object Storage Devices (OSDs) are up and running. Impact High Diagnosis pod status: pending Check for resource issues, pending Persistent Volume Claims (PVCs), node assignment, and kubelet problems: Set MYPOD as the variable for the pod that is identified as the problem pod: <pod_name> Specify the name of the pod that is identified as the problem pod. Look for the resource limitations or pending PVCs. Otherwise, check for the node assignment: pod status: NOT pending, running, but NOT ready Check the readiness probe: pod status: NOT pending, but NOT running Check for application or image issues: Important If a node was assigned, check the kubelet on the node. Mitigation Debugging log information This step is optional. Run the following command to gather the debugging information for the Ceph cluster: 6.3.7. CephMdsMissingReplicas Meaning Minimum required replicas for the storage metadata service (MDS) are not available. MDS is responsible for filing metadata. Degradation of the MDS service can affect how the storage cluster works (related to the CephFS storage class) and should be fixed as soon as possible.
Impact High Diagnosis pod status: pending Check for resource issues, pending Persistent Volume Claims (PVCs), node assignment, and kubelet problems: Set MYPOD as the variable for the pod that is identified as the problem pod: <pod_name> Specify the name of the pod that is identified as the problem pod. Look for the resource limitations or pending PVCs. Otherwise, check for the node assignment: pod status: NOT pending, running, but NOT ready Check the readiness probe: pod status: NOT pending, but NOT running Check for application or image issues: Important If a node was assigned, check the kubelet on the node. Mitigation Debugging log information This step is optional. Run the following command to gather the debugging information for the Ceph cluster: 6.3.8. CephMgrIsAbsent Meaning Not having a Ceph manager running impacts the monitoring of the cluster. Persistent Volume Claim (PVC) creation and deletion requests should be resolved as soon as possible. Impact High Diagnosis Verify that the rook-ceph-mgr pod is failing, and restart if necessary. If the Ceph mgr pod restart fails, follow the general pod troubleshooting to resolve the issue. Verify that the Ceph mgr pod is failing: Describe the Ceph mgr pod for more details: <pod_name> Specify the rook-ceph-mgr pod name from the previous step. Analyze the errors related to resource issues. Delete the pod, and wait for the pod to restart: Follow these steps for general pod troubleshooting: pod status: pending Check for resource issues, pending Persistent Volume Claims (PVCs), node assignment, and kubelet problems: Set MYPOD as the variable for the pod that is identified as the problem pod: <pod_name> Specify the name of the pod that is identified as the problem pod. Look for the resource limitations or pending PVCs. Otherwise, check for the node assignment: pod status: NOT pending, running, but NOT ready Check the readiness probe: pod status: NOT pending, but NOT running Check for application or image issues: Important If a node was assigned, check the kubelet on the node. Mitigation Debugging log information This step is optional. Run the following command to gather the debugging information for the Ceph cluster: 6.3.9. CephMgrIsMissingReplicas Meaning To resolve this alert, you need to determine the cause of the disappearance of the Ceph manager and restart if necessary. Impact High Diagnosis pod status: pending Check for resource issues, pending Persistent Volume Claims (PVCs), node assignment, and kubelet problems: Set MYPOD as the variable for the pod that is identified as the problem pod: <pod_name> Specify the name of the pod that is identified as the problem pod. Look for the resource limitations or pending PVCs. Otherwise, check for the node assignment: pod status: NOT pending, running, but NOT ready Check the readiness probe: pod status: NOT pending, but NOT running Check for application or image issues: Important If a node was assigned, check the kubelet on the node. Mitigation Debugging log information This step is optional. Run the following command to gather the debugging information for the Ceph cluster: 6.3.10. CephMonHighNumberOfLeaderChanges Meaning In a Ceph cluster there is a redundant set of monitor pods that store critical information about the storage cluster. Monitor pods synchronize periodically to obtain information about the storage cluster. The first monitor pod to get the most updated information becomes the leader, and the other monitor pods will start their synchronization process after asking the leader.
A problem in network connection or another kind of problem in one or more monitor pods produces an unusual change of the leader. This situation can negatively affect the storage cluster performance. Impact Medium Important Check for any network issues. If there is a network issue, you need to escalate to the OpenShift Data Foundation team before you proceed with any of the following troubleshooting steps. Diagnosis Print the logs of the affected monitor pod to gather more information about the issue: <rook-ceph-mon-X-yyyy> Specify the name of the affected monitor pod. Alternatively, use the Openshift Web console to open the logs of the affected monitor pod. More information about possible causes is reflected in the log. Perform the general pod troubleshooting steps: pod status: pending Check for resource issues, pending Persistent Volume Claims (PVCs), node assignment, and kubelet problems: Set MYPOD as the variable for the pod that is identified as the problem pod: <pod_name> Specify the name of the pod that is identified as the problem pod. Look for the resource limitations or pending PVCs. Otherwise, check for the node assignment: pod status: NOT pending, running, but NOT ready Check the readiness probe: pod status: NOT pending, but NOT running Check for application or image issues: Important If a node was assigned, check the kubelet on the node. Mitigation Debugging log information This step is optional. Run the following command to gather the debugging information for the Ceph cluster: 6.3.11. CephMonQuorumAtRisk Meaning Multiple MONs work together to provide redundancy. Each of the MONs keeps a copy of the metadata. The cluster is deployed with 3 MONs, and requires 2 or more MONs to be up and running for quorum and for the storage operations to run. If quorum is lost, access to data is at risk. Impact High Diagnosis Restore the Ceph MON Quorum. For more information, see Restoring ceph-monitor quorum in OpenShift Data Foundation in the Troubleshooting guide . If the restoration of the Ceph MON Quorum fails, follow the general pod troubleshooting to resolve the issue. Perform the following for general pod troubleshooting: pod status: pending Check for resource issues, pending Persistent Volume Claims (PVCs), node assignment, and kubelet problems: Set MYPOD as the variable for the pod that is identified as the problem pod: <pod_name> Specify the name of the pod that is identified as the problem pod. Look for the resource limitations or pending PVCs. Otherwise, check for the node assignment: pod status: NOT pending, running, but NOT ready Check the readiness probe: pod status: NOT pending, but NOT running Check for application or image issues: Important If a node was assigned, check the kubelet on the node. Mitigation Debugging log information This step is optional. Run the following command to gather the debugging information for the Ceph cluster: 6.3.12. CephMonQuorumLost Meaning In a Ceph cluster there is a redundant set of monitor pods that store critical information about the storage cluster. Monitor pods synchronize periodically to obtain information about the storage cluster. The first monitor pod to get the most updated information becomes the leader, and the other monitor pods will start their synchronization process after asking the leader. A problem in network connection or another kind of problem in one or more monitor pods produces an unusual change of the leader. This situation can negatively affect the storage cluster performance.
Impact High Important Check for any network issues. If there is a network issue, you need to escalate to the OpenShift Data Foundation team before you proceed with any of the following troubleshooting steps. Diagnosis Restore the Ceph MON Quorum. For more information, see Restoring ceph-monitor quorum in OpenShift Data Foundation in the Troubleshooting guide . If the restoration of the Ceph MON Quorum fails, follow the general pod troubleshooting to resolve the issue. Alternatively, perform general pod troubleshooting: pod status: pending Check for resource issues, pending Persistent Volume Claims (PVCs), node assignment, and kubelet problems: Set MYPOD as the variable for the pod that is identified as the problem pod: <pod_name> Specify the name of the pod that is identified as the problem pod. Look for the resource limitations or pending PVCs. Otherwise, check for the node assignment: pod status: NOT pending, running, but NOT ready Check the readiness probe: pod status: NOT pending, but NOT running Check for application or image issues: Important If a node was assigned, check the kubelet on the node. Mitigation Debugging log information This step is optional. Run the following command to gather the debugging information for the Ceph cluster: 6.3.13. CephMonVersionMismatch Meaning Typically this alert triggers during an upgrade that is taking a long time. Impact Medium Diagnosis Check the ocs-operator subscription status and the operator pod health to check if an operator upgrade is in progress. Check the ocs-operator subscription health. The status condition types are CatalogSourcesUnhealthy , InstallPlanMissing , InstallPlanPending , and InstallPlanFailed . The status for each type should be False . Example output: The example output shows a False status for type CatalogSourcesUnhealthy , which means that the catalog sources are healthy. Check the OCS operator pod status to see if there is an OCS operator upgrade in progress. If you determine that the `ocs-operator` is in progress, wait for 5 mins and this alert should resolve itself. If you have waited or see a different error status condition, continue troubleshooting. Mitigation Debugging log information This step is optional. Run the following command to gather the debugging information for the Ceph cluster: 6.3.14. CephNodeDown Meaning A node running Ceph pods is down. While storage operations will continue to function as Ceph is designed to deal with a node failure, it is recommended to resolve the issue to minimize the risk of another node going down and affecting storage functions. Impact Medium Diagnosis List all the pods that are running and failing: Important Ensure that you meet the OpenShift Data Foundation resource requirements so that the Object Storage Device (OSD) pods are scheduled on the new node. This may take a few minutes as the Ceph cluster recovers data for the failing but now recovering OSD. To watch this recovery in action, ensure that the OSD pods are correctly placed on the new worker node. Check if the OSD pods that were previously failing are now running: If the previously failing OSD pods have not been scheduled, use the describe command and check the events for reasons the pods were not rescheduled. Describe the events for the failing OSD pod: Find the one or more failing OSD pods: In the events section look for the failure reasons, such as the resources not being met. In addition, you may use the rook-ceph-toolbox to watch the recovery. This step is optional, but is helpful for large Ceph clusters.
To access the toolbox, run the following command: From the rsh command prompt, run the following, and watch for "recovery" under the io section: Determine if there are failed nodes. Get the list of worker nodes, and check for the node status: Describe the node which is of the NotReady status to get more information about the failure: Mitigation Debugging log information This step is optional. Run the following command to gather the debugging information for the Ceph cluster: 6.3.15. CephOSDCriticallyFull Meaning One of the Object Storage Devices (OSDs) is critically full. Expand the cluster immediately. Impact High Diagnosis Deleting data to free up storage space You can delete data, and the cluster will resolve the alert through self healing processes. Important This is only applicable to OpenShift Data Foundation clusters that are near or full but not in read-only mode. Read-only mode prevents any changes that include deleting data, that is, deletion of Persistent Volume Claim (PVC), Persistent Volume (PV) or both. Expanding the storage capacity Current storage size is less than 1 TB You must first assess the ability to expand. For every 1 TB of storage added, the cluster needs to have 3 nodes each with a minimum available 2 vCPUs and 8 GiB memory. You can increase the storage capacity to 4 TB via the add-on and the cluster will resolve the alert through self healing processes. If the minimum vCPU and memory resource requirements are not met, you need to add 3 additional worker nodes to the cluster. Mitigation If your current storage size is equal to 4 TB, contact Red Hat support. Optional: Run the following command to gather the debugging information for the Ceph cluster: 6.3.16. CephOSDDiskNotResponding Meaning A disk device is not responding. Check whether all the Object Storage Devices (OSDs) are up and running. Impact Medium Diagnosis pod status: pending Check for resource issues, pending Persistent Volume Claims (PVCs), node assignment, and kubelet problems: Set MYPOD as the variable for the pod that is identified as the problem pod: <pod_name> Specify the name of the pod that is identified as the problem pod. Look for the resource limitations or pending PVCs. Otherwise, check for the node assignment: pod status: NOT pending, running, but NOT ready Check the readiness probe: pod status: NOT pending, but NOT running Check for application or image issues: Important If a node was assigned, check the kubelet on the node. If the basic health of the running pods, node affinity and resource availability on the nodes are verified, run the Ceph tools to get the status of the storage components. Mitigation Debugging log information This step is optional. Run the following command to gather the debugging information for the Ceph cluster: 6.3.17. CephOSDDiskUnavailable Meaning A disk device is not accessible on one of the hosts and its corresponding Object Storage Device (OSD) is marked out by the Ceph cluster. This alert is raised when a Ceph node fails to recover within 10 minutes. Impact High Diagnosis Determine the failed node Get the list of worker nodes, and check for the node status: Describe the node which is of NotReady status to get more information on the failure: 6.3.18. CephOSDFlapping Meaning A storage daemon has restarted 5 times in the last 5 minutes. Check the pod events or Ceph status to find out the cause. Impact High Diagnosis Follow the steps in the Flapping OSDs section of the Red Hat Ceph Storage Troubleshooting Guide. 
Alternatively, follow the steps for general pod troubleshooting: pod status: pending Check for resource issues, pending Persistent Volume Claims (PVCs), node assignment, and kubelet problems: Set MYPOD as the variable for the pod that is identified as the problem pod: <pod_name> Specify the name of the pod that is identified as the problem pod. Look for the resource limitations or pending PVCs. Otherwise, check for the node assignment: pod status: NOT pending, running, but NOT ready Check the readiness probe: pod status: NOT pending, but NOT running Check for application or image issues: Important If a node was assigned, check the kubelet on the node. If the basic health of the running pods, node affinity and resource availability on the nodes are verified, run the Ceph tools to get the status of the storage components. Mitigation Debugging log information This step is optional. Run the following command to gather the debugging information for the Ceph cluster: 6.3.19. CephOSDNearFull Meaning Utilization of back-end storage device Object Storage Device (OSD) has crossed 75% on a host. Impact High Mitigation Free up some space in the cluster, expand the storage cluster, or contact Red Hat support. For more information on scaling storage, see the Scaling storage guide . 6.3.20. CephOSDSlowOps Meaning An Object Storage Device (OSD) with slow requests is every OSD that is not able to service the I/O operations per second (IOPS) in the queue within the time defined by the osd_op_complaint_time parameter. By default, this parameter is set to 30 seconds. Impact Medium Diagnosis More information about the slow requests can be obtained using the Openshift console. Access the OSD pod terminal, and run the following commands: Note The number of the OSD is seen in the pod name. For example, in rook-ceph-osd-0-5d86d4d8d4-zlqkx , <0> is the OSD. Mitigation The main causes of the OSDs having slow requests are: * Problems with the underlying hardware or infrastructure, such as disk drives, hosts, racks, or network switches. Use the Openshift monitoring console to find the alerts or errors about cluster resources. This can give you an idea about the root cause of the slow operations in the OSD. * Problems with the network. These problems are usually connected with flapping OSDs. See the Flapping OSDs section of the Red Hat Ceph Storage Troubleshooting Guide * If it is a network issue, escalate to the OpenShift Data Foundation team * System load. Use the Openshift console to review the metrics of the OSD pod and the node which is running the OSD. Adding or assigning more resources can be a possible solution. 6.3.21. CephOSDVersionMismatch Meaning Typically this alert triggers during an upgrade that is taking a long time. Impact Medium Diagnosis Check the ocs-operator subscription status and the operator pod health to check if an operator upgrade is in progress. Check the ocs-operator subscription health. The status condition types are CatalogSourcesUnhealthy , InstallPlanMissing , InstallPlanPending , and InstallPlanFailed . The status for each type should be False . Example output: The example output shows a False status for type CatalogSourcesUnhealthy , which means that the catalog sources are healthy. Check the OCS operator pod status to see if there is an OCS operator upgrade in progress. If you determine that the `ocs-operator` is in progress, wait for 5 mins and this alert should resolve itself. If you have waited or see a different error status condition, continue troubleshooting. 6.3.22.
CephPGRepairTakingTooLong Meaning Self-healing operations are taking too long. Impact High Diagnosis Check for inconsistent Placement Groups (PGs), and repair them. For more information, see the Red Hat Knowledgebase solution Handle Inconsistent Placement Groups in Ceph . 6.3.23. CephPoolQuotaBytesCriticallyExhausted Meaning One or more pools has reached, or is very close to reaching, its quota. The threshold to trigger this error condition is controlled by the mon_pool_quota_crit_threshold configuration option. Impact High Mitigation Adjust the pool quotas. Run the following commands to fully remove or adjust the pool quotas up or down: Setting the quota value to 0 will disable the quota. 6.3.24. CephPoolQuotaBytesNearExhaustion Meaning One or more pools is approaching a configured fullness threshold. One threshold that can trigger this warning condition is the mon_pool_quota_warn_threshold configuration option. Impact High Mitigation Adjust the pool quotas. Run the following commands to fully remove or adjust the pool quotas up or down: Setting the quota value to 0 will disable the quota. 6.3.25. PersistentVolumeUsageCritical Meaning A Persistent Volume Claim (PVC) is nearing its full capacity and may lead to data loss if not attended to timely. Impact High Mitigation Expand the PVC size to increase the capacity. Log in to the OpenShift Web Console. Click Storage PersistentVolumeClaim . Select openshift-storage from the Project drop-down list. On the PVC you want to expand, click Action menu (...) Expand PVC . Update the Total size to the desired size. Click Expand . Alternatively, you can delete unnecessary data that may be taking up space. 6.3.26. PersistentVolumeUsageNearFull Meaning A Persistent Volume Claim (PVC) is nearing its full capacity and may lead to data loss if not attended to timely. Impact High Mitigation Expand the PVC size to increase the capacity. Log in to the OpenShift Web Console. Click Storage PersistentVolumeClaim . Select openshift-storage from the Project drop-down list. On the PVC you want to expand, click Action menu (...) Expand PVC . Update the Total size to the desired size. Click Expand . Alternatively, you can delete unnecessary data that may be taking up space. 6.4. Finding the error code of an unhealthy bucket Procedure In the OpenShift Web Console, click Storage Object Storage . Click the Object Bucket Claims tab. Look for the object bucket claims (OBCs) that are not in Bound state and click on it. Click the Events tab and do one of the following: Look for events that might hint you about the current state of the bucket. Click the YAML tab and look for related errors around the status and mode sections of the YAML. If the OBC is in Pending state. the error might appear in the product logs. However, in this case, it is recommended to verify that all the variables provided are accurate. 6.5. Finding the error code of an unhealthy namespace store resource Procedure In the OpenShift Web Console, click Storage Object Storage . Click the Namespace Store tab. Look for the namespace store resources that are not in Bound state and click on it. Click the Events tab and do one of the following: Look for events that might hint you about the current state of the resource. Click the YAML tab and look for related errors around the status and mode sections of the YAML. 6.6. 
Recovering pods When a first node (say NODE1 ) goes to NotReady state because of some issue, the hosted pods that are using PVC with ReadWriteOnce (RWO) access mode try to move to the second node (say NODE2 ) but get stuck due to multi-attach error. In such a case, you can recover MON, OSD, and application pods by using the following steps. Procedure Power off NODE1 (from AWS or vSphere side) and ensure that NODE1 is completely down. Force delete the pods on NODE1 by using the following command: 6.7. Recovering from EBS volume detach When an OSD or MON elastic block storage (EBS) volume where the OSD disk resides is detached from the worker Amazon EC2 instance, the volume gets reattached automatically within one or two minutes. However, the OSD pod gets into a CrashLoopBackOff state. To recover and bring back the pod to Running state, you must restart the EC2 instance. 6.8. Enabling and disabling debug logs for rook-ceph-operator Enable the debug logs for the rook-ceph-operator to obtain information about failures that help in troubleshooting issues. Procedure Enabling the debug logs Edit the configmap of the rook-ceph-operator. Add the ROOK_LOG_LEVEL: DEBUG parameter in the rook-ceph-operator-config yaml file to enable the debug logs for rook-ceph-operator. Now, the rook-ceph-operator logs consist of the debug information. Disabling the debug logs Edit the configmap of the rook-ceph-operator. Add the ROOK_LOG_LEVEL: INFO parameter in the rook-ceph-operator-config yaml file to disable the debug logs for rook-ceph-operator. 6.9. Troubleshooting unhealthy blocklisted nodes 6.9.1. ODFRBDClientBlocked Meaning This alert indicates that an RBD client might be blocked by Ceph on a specific node within your Kubernetes cluster. The blocklisting occurs when the ocs_rbd_client_blocklisted metric reports a value of 1 for the node. Additionally, there are pods in a CreateContainerError state on the same node. The blocklisting can potentially result in the filesystem for the Persistent Volume Claims (PVCs) using RBD becoming read-only. It is crucial to investigate this alert to prevent any disruption to your storage cluster. Impact High Diagnosis The blocklisting of an RBD client can occur due to several factors, such as network or cluster slowness. In certain cases, the exclusive lock contention among three contending clients (workload, mirror daemon, and manager/scheduler) can lead to the blocklist. Mitigation Taint the blocklisted node: In Kubernetes, consider tainting the node that is blocklisted to trigger the eviction of pods to another node. This approach relies on the assumption that the unmounting/unmapping process progresses gracefully. Once the pods have been successfully evicted, the blocklisted node can be untainted, allowing the blocklist to be cleared. The pods can then be moved back to the untainted node. Reboot the blocklisted node: If tainting the node and evicting the pods do not resolve the blocklisting issue, a reboot of the blocklisted node can be attempted. This step may help alleviate any underlying issues causing the blocklist and restore normal functionality. Important Investigating and resolving the blocklist issue promptly is essential to avoid any further impact on the storage cluster. | [
"du -a <path-in-the-mon-node> |sort -n -r |head -n10",
"oc project openshift-storage",
"oc get pod | grep rook-ceph",
"Examine the output for a rook-ceph that is in the pending state, not running or not ready MYPOD= <pod_name>",
"oc get pod/USD{MYPOD} -o wide",
"oc describe pod/USD{MYPOD}",
"oc logs pod/USD{MYPOD}",
"oc adm must-gather --image=registry.redhat.io/odf4/odf-must-gather-rhel9:v4.13",
"oc project openshift-storage",
"get pod | grep {ceph-component}",
"Examine the output for a {ceph-component} that is in the pending state, not running or not ready MYPOD= <pod_name>",
"oc get pod/USD{MYPOD} -o wide",
"oc describe pod/USD{MYPOD}",
"oc logs pod/USD{MYPOD}",
"oc adm must-gather --image=registry.redhat.io/odf4/odf-must-gather-rhel9:v4.13",
"oc project openshift-storage",
"get pod | grep rook-ceph-osd",
"Examine the output for a {ceph-component} that is in the pending state, not running or not ready MYPOD= <pod_name>",
"oc get pod/USD{MYPOD} -o wide",
"oc describe pod/USD{MYPOD}",
"oc logs pod/USD{MYPOD}",
"oc adm must-gather --image=registry.redhat.io/odf4/odf-must-gather-rhel9:v4.13",
"oc project openshift-storage",
"get pod | grep rook-ceph-mds",
"Examine the output for a {ceph-component} that is in the pending state, not running or not ready MYPOD= <pod_name>",
"oc get pod/USD{MYPOD} -o wide",
"oc describe pod/USD{MYPOD}",
"oc logs pod/USD{MYPOD}",
"oc adm must-gather --image=registry.redhat.io/odf4/odf-must-gather-rhel9:v4.13",
"oc get pods | grep mgr",
"oc describe pods/ <pod_name>",
"oc get pods | grep mgr",
"oc project openshift-storage",
"get pod | grep rook-ceph-mgr",
"Examine the output for a {ceph-component} that is in the pending state, not running or not ready MYPOD= <pod_name>",
"oc get pod/USD{MYPOD} -o wide",
"oc describe pod/USD{MYPOD}",
"oc logs pod/USD{MYPOD}",
"oc adm must-gather --image=registry.redhat.io/odf4/odf-must-gather-rhel9:v4.13",
"oc project openshift-storage",
"get pod | grep rook-ceph-mgr",
"Examine the output for a {ceph-component} that is in the pending state, not running or not ready MYPOD= <pod_name>",
"oc get pod/USD{MYPOD} -o wide",
"oc describe pod/USD{MYPOD}",
"oc logs pod/USD{MYPOD}",
"oc adm must-gather --image=registry.redhat.io/odf4/odf-must-gather-rhel9:v4.13",
"oc logs <rook-ceph-mon-X-yyyy> -n openshift-storage",
"oc project openshift-storage",
"get pod | grep {ceph-component}",
"Examine the output for a {ceph-component} that is in the pending state, not running or not ready MYPOD= <pod_name>",
"oc get pod/USD{MYPOD} -o wide",
"oc describe pod/USD{MYPOD}",
"oc logs pod/USD{MYPOD}",
"oc adm must-gather --image=registry.redhat.io/odf4/odf-must-gather-rhel9:v4.13",
"oc project openshift-storage",
"get pod | grep rook-ceph-mon",
"Examine the output for a {ceph-component} that is in the pending state, not running or not ready MYPOD= <pod_name>",
"oc get pod/USD{MYPOD} -o wide",
"oc describe pod/USD{MYPOD}",
"oc logs pod/USD{MYPOD}",
"oc adm must-gather --image=registry.redhat.io/odf4/odf-must-gather-rhel9:v4.13",
"oc project openshift-storage",
"get pod | grep {ceph-component}",
"Examine the output for a {ceph-component} that is in the pending state, not running or not ready MYPOD= <pod_name>",
"oc get pod/USD{MYPOD} -o wide",
"oc describe pod/USD{MYPOD}",
"oc logs pod/USD{MYPOD}",
"oc adm must-gather --image=registry.redhat.io/odf4/odf-must-gather-rhel9:v4.13",
"oc get sub USD(oc get pods -n openshift-storage | grep -v ocs-operator) -n openshift-storage -o json | jq .status.conditions",
"[ { \"lastTransitionTime\": \"2021-01-26T19:21:37Z\", \"message\": \"all available catalogsources are healthy\", \"reason\": \"AllCatalogSourcesHealthy\", \"status\": \"False\", \"type\": \"CatalogSourcesUnhealthy\" } ]",
"oc get pod -n openshift-storage | grep ocs-operator OCSOP=USD(oc get pod -n openshift-storage -o custom-columns=POD:.metadata.name --no-headers | grep ocs-operator) echo USDOCSOP oc get pod/USD{OCSOP} -n openshift-storage oc describe pod/USD{OCSOP} -n openshift-storage",
"oc adm must-gather --image=registry.redhat.io/odf4/odf-must-gather-rhel9:v4.13",
"-n openshift-storage get pods",
"-n openshift-storage get pods",
"-n openshift-storage get pods | grep osd",
"-n openshift-storage describe pods/<osd_podname_ from_the_ previous step>",
"TOOLS_POD=USD(oc get pods -n openshift-storage -l app=rook-ceph-tools -o name) rsh -n openshift-storage USDTOOLS_POD",
"ceph status",
"get nodes --selector='node-role.kubernetes.io/worker','!node-role.kubernetes.io/infra'",
"describe node <node_name>",
"oc adm must-gather --image=registry.redhat.io/odf4/odf-must-gather-rhel9:v4.13",
"oc adm must-gather --image=registry.redhat.io/odf4/odf-must-gather-rhel9:v4.13",
"oc project openshift-storage",
"oc get pod | grep rook-ceph",
"Examine the output for a rook-ceph that is in the pending state, not running or not ready MYPOD= <pod_name>",
"oc get pod/USD{MYPOD} -o wide",
"oc describe pod/USD{MYPOD}",
"oc logs pod/USD{MYPOD}",
"oc adm must-gather --image=registry.redhat.io/odf4/odf-must-gather-rhel9:v4.13",
"get nodes --selector='node-role.kubernetes.io/worker','!node-role.kubernetes.io/infra'",
"describe node <node_name>",
"oc project openshift-storage",
"oc get pod | grep rook-ceph",
"Examine the output for a rook-ceph that is in the pending state, not running or not ready MYPOD= <pod_name>",
"oc get pod/USD{MYPOD} -o wide",
"oc describe pod/USD{MYPOD}",
"oc logs pod/USD{MYPOD}",
"oc adm must-gather --image=registry.redhat.io/odf4/odf-must-gather-rhel9:v4.13",
"ceph daemon osd.<id> ops",
"ceph daemon osd.<id> dump_historic_ops",
"oc get sub USD(oc get pods -n openshift-storage | grep -v ocs-operator) -n openshift-storage -o json | jq .status.conditions",
"[ { \"lastTransitionTime\": \"2021-01-26T19:21:37Z\", \"message\": \"all available catalogsources are healthy\", \"reason\": \"AllCatalogSourcesHealthy\", \"status\": \"False\", \"type\": \"CatalogSourcesUnhealthy\" } ]",
"oc get pod -n openshift-storage | grep ocs-operator OCSOP=USD(oc get pod -n openshift-storage -o custom-columns=POD:.metadata.name --no-headers | grep ocs-operator) echo USDOCSOP oc get pod/USD{OCSOP} -n openshift-storage oc describe pod/USD{OCSOP} -n openshift-storage",
"ceph osd pool set-quota <pool> max_bytes <bytes>",
"ceph osd pool set-quota <pool> max_objects <objects>",
"ceph osd pool set-quota <pool> max_bytes <bytes>",
"ceph osd pool set-quota <pool> max_objects <objects>",
"oc delete pod <pod-name> --grace-period=0 --force",
"oc edit configmap rook-ceph-operator-config",
"... data: # The logging level for the operator: INFO | DEBUG ROOK_LOG_LEVEL: DEBUG",
"oc edit configmap rook-ceph-operator-config",
"... data: # The logging level for the operator: INFO | DEBUG ROOK_LOG_LEVEL: INFO"
]
| https://docs.redhat.com/en/documentation/red_hat_openshift_data_foundation/4.13/html/troubleshooting_openshift_data_foundation/troubleshooting-alerts-and-errors-in-openshift-data-foundation |
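As a hedged illustration of the taint-and-untaint mitigation described for ODFRBDClientBlocked above, the commands below show the general shape of that cycle. The node name worker-1 and the taint key blocklist=true are illustrative assumptions, not values mandated by OpenShift Data Foundation.
# Taint the blocklisted node so that its pods are evicted to another node.
oc adm taint nodes worker-1 blocklist=true:NoExecute
# After the pods are evicted and the RBD blocklist clears, remove the taint by repeating the command with a trailing dash.
oc adm taint nodes worker-1 blocklist=true:NoExecute-
If the unmounting and unmapping of the RBD volumes does not progress gracefully, the section above recommends rebooting the blocklisted node instead.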
4.4. Boosting | 4.4. Boosting Lucene uses boosting to attach more importance to specific fields or documents over others. Lucene differentiates between index and search-time boosting. 4.4.1. Static Index Time Boosting The @Boost annotation is used to define a static boost value for an indexed class or property. This annotation can be used within @Field , or can be specified directly on the method or class level. In the following example: the probability of Essay reaching the top of the search list will be multiplied by 1.7. @Field.boost and @Boost on a property are cumulative, therefore the summary field will be 3.0 (2 x 1.5), and more important than the ISBN field. The text field is 1.2 times more important than the ISBN field. Example 4.7. Different ways of using @Boost 4.4.2. Dynamic Index Time Boosting The @Boost annotation defines a static boost factor that is independent of the state of the indexed entity at runtime. However, in some cases the boost factor may depend on the actual state of the entity. In this case, use the @DynamicBoost annotation together with an accompanying custom BoostStrategy . @Boost and @DynamicBoost annotations can both be used in relation to an entity, and all defined boost factors are cumulative. The @DynamicBoost can be placed at either class or field level. In the following example, a dynamic boost is defined on class level specifying VIPBoostStrategy as implementation of the BoostStrategy interface used at indexing time. Depending on the annotation placement, either the whole entity is passed to the defineBoost method or only the annotated field/property value. The passed object must be cast to the correct type. Example 4.8. Dynamic boost example In the provided example all indexed values of a VIP would be twice the importance of the values of a non-VIP. Note The specified BoostStrategy implementation must define a public no argument constructor. | [
"@Indexed @Boost(1.7f) public class Essay { @Field(name = \"Abstract\", store=Store.YES, boost = @Boost(2f)) @Boost(1.5f) public String getSummary() { return summary; } @Field(boost = @Boost(1.2f)) public String getText() { return text; } @Field public String getISBN() { return isbn; } }",
"public enum PersonType { NORMAL, VIP } @Indexed @DynamicBoost(impl = VIPBoostStrategy.class) public class Person { private PersonType type; } public class VIPBoostStrategy implements BoostStrategy { public float defineBoost(Object value) { Person person = (Person) value; if (person.getType().equals(PersonType.VIP)) { return 2.0f; } else { return 1.0f; } } }"
]
| https://docs.redhat.com/en/documentation/red_hat_data_grid/6.6/html/infinispan_query_guide/sect-boosting |
Chapter 9. Troubleshooting Ansible content with automation content navigator | Chapter 9. Troubleshooting Ansible content with automation content navigator As a content creator, you can troubleshoot your Ansible content (collections, automation execution environments, and playbooks) with automation content navigator and interactively troubleshoot the playbook. You can also compare results inside or outside an automation execution environment and troubleshoot any problems. 9.1. Reviewing playbook results with an automation content navigator artifact file Automation content navigator saves the results of the playbook run in a JSON artifact file. You can use this file to share the playbook results with someone else, save it for security or compliance reasons, or review and troubleshoot later. You only need the artifact file to review the playbook run. You do not need access to the playbook itself or inventory access. Prerequisites An automation content navigator artifact JSON file from a playbook run. Procedure Start automation content navigator with the artifact file. $ ansible-navigator replay simple_playbook_artifact.json Review the playbook results, which match those from when the playbook ran. You can now type the number next to the plays and tasks to step into each and review the results, as you would after executing the playbook. Additional resources ansible-playbook Ansible playbooks 9.2. Frequently asked questions about automation content navigator Use the following automation content navigator FAQ to help you troubleshoot problems in your environment. Where should the ansible.cfg file go when using an automation execution environment? The easiest place to have the ansible.cfg file is in the project directory, adjacent to the playbook. The playbook directory is automatically mounted in the automation execution environment and automation content navigator finds the ansible.cfg file there. If the ansible.cfg file is in another directory, set the ANSIBLE_CONFIG variable, and specify the directory as a custom volume mount. (See automation content navigator settings for execution-environment-volume-mounts ) Where should the ansible.cfg file go when not using an automation execution environment? Ansible looks for the ansible.cfg in the typical locations when not using an automation execution environment. See Ansible configuration settings for details. Where should Ansible collections be placed when using an automation execution environment? The easiest place to have Ansible collections is in the project directory, in a playbook adjacent collections directory (for example, ansible-galaxy collection install ansible.utils -p ./collections ). The playbook directory is automatically mounted in the automation execution environment and automation content navigator finds the collections there. Another option is to build the collections into an automation execution environment using Ansible Builder. This helps content creators author playbooks that are production ready, since automation controller supports playbook adjacent collection directories. If the collections are in another directory, set the ANSIBLE_COLLECTIONS_PATHS variable and configure a custom volume mount for the directory. (See Automation content navigator general settings for execution-environment-volume-mounts ). Where should Ansible collections be placed when not using an automation execution environment? When not using an automation execution environment, Ansible looks in the default locations for collections. See the Using Ansible collections guide.
Why does the playbook hang when vars_prompt or pause/prompt is used? By default, automation content navigator runs the playbook in the same manner that automation controller runs the playbook. This helps content creators author playbooks that are production ready. If you cannot avoid the use of vars_prompt or pause\prompt , disabling playbook-artifact creation causes automation content navigator to run the playbook in a manner that is compatible with ansible-playbook and allows for user interaction. Why does automation content navigator change the terminal colors or look terrible? Automation content navigator queries the terminal for its OSC4 compatibility. OSC4, 10, 11, 104, 110, 111 indicate the terminal supports color changing and reverting. It is possible that the terminal is misrepresenting its ability. You can disable OSC4 detection by setting --osc4 false . (See Automation content navigator general settings for how to handle this with an environment variable or in the settings file). How can I change the colors used by automation content navigator? Use --osc4 false to force automation content navigator to use the terminal defined colors. (See Automation content navigator general settings for how to handle this with an environment variable or in the settings file). What is with all these site-artifact-2021-06-02T16:02:33.911259+00:00.json files in the playbook directory? Automation content navigator creates a playbook artifact for every playbook run. These can be helpful for reviewing the outcome of automation after it is complete, sharing and troubleshooting with a colleague, or keeping for compliance or change-control purposes. The playbook artifact file has the detailed information about every play and task, and the stdout from the playbook run. You can review playbook artifacts with ansible-navigator replay <filename> or :replay <filename> while in an automation content navigator session. You can review all playbook artifacts with both --mode stdout and --mode interactive , depending on the required view. You can disable playbook artifacts writing and the default file naming convention. (See Automation content navigator general settings for how to handle this with an environment variable or in the settings file). Why does vi open when I use :open ? Automation content navigator opens anything showing in the terminal in the default editor. The default is set to either vi +{line_number} {filename} or the current value of the EDITOR environment variable. Related to this is the editor-console setting which indicates if the editor is console or terminal based. Here are examples of alternate settings that might be useful: # emacs ansible-navigator: editor: command: emacs -nw +{line_number} {filename} console: true # vscode ansible-navigator: editor: command: code -g {filename}:{line_number} console: false #pycharm ansible-navigator: editor: command: charm --line {line_number} {filename} console: false What is the order in which configuration settings are applied? The automation content navigator configuration system pulls in settings from various sources and applies them hierarchically in the following order (where the last applied changes are the most prevalent): Default internal values Values from a settings file Values from environment variables Flags and arguments specified on the command line While issuing : commands within the text-based user interface Something did not work, how can I troubleshoot it? Automation content navigator has reasonable logging messages. 
| [
"ansible-navigator replay simple_playbook_artifact.json",
"emacs ansible-navigator: editor: command: emacs -nw +{line_number} {filename} console: true",
"vscode ansible-navigator: editor: command: code -g {filename}:{line_number} console: false",
"#pycharm ansible-navigator: editor: command: charm --line {line_number} {filename} console: false"
]
| https://docs.redhat.com/en/documentation/red_hat_ansible_automation_platform/2.4/html/automation_content_navigator_creator_guide/assembly-troubleshooting-navigator_ansible-navigator |
27.2. Types | 27.2. Types

The main permission control method used in the SELinux targeted policy to provide advanced process isolation is Type Enforcement. All files and processes are labeled with a type: types define an SELinux domain for processes and an SELinux type for files. SELinux policy rules define how types access each other, whether it be a domain accessing a type, or a domain accessing another domain. Access is only allowed if a specific SELinux policy rule exists that allows it.

The following types are used with Red Hat Gluster Storage. Different types allow you to configure flexible access:

Process types

glusterd_t
The Gluster processes are associated with the glusterd_t SELinux type.

Types on executables

glusterd_initrc_exec_t
The SELinux-specific script type context for the Gluster init script files.

glusterd_exec_t
The SELinux-specific executable type context for the Gluster executable files.

Port types

gluster_port_t
This type is defined for glusterd. By default, glusterd uses the 24007-24027 and 38465-38469 TCP ports.

File contexts

glusterd_brick_t
This type is used for files treated as glusterd brick data.

glusterd_conf_t
This type is associated with the glusterd configuration data, usually stored in the /etc directory.

glusterd_log_t
Files with this type are treated as glusterd log data, usually stored under the /var/log/ directory.

glusterd_tmp_t
This type is used for storing the glusterd temporary files in the /tmp directory.

glusterd_var_lib_t
This type allows storing the glusterd files in the /var/lib/ directory.

glusterd_var_run_t
This type allows storing the glusterd files in the /run/ or /var/run/ directory.
| null | https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/selinux_users_and_administrators_guide/sect-managing_confined_services-glusterfs-types
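To confirm that these labels are applied on a Red Hat Gluster Storage node, you can inspect the running daemon, its executable, and the defined port types. The commands below are a sketch only: the /usr/sbin/glusterd path is an assumption based on a default installation, and the semanage utility requires the policycoreutils-python package.

# Check the SELinux domain of the running Gluster daemon
$ ps -eZ | grep glusterd

# Check the file context of the glusterd executable (path assumed)
$ ls -Z /usr/sbin/glusterd

# List the port definitions associated with the gluster_port_t type
$ semanage port -l | grep gluster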
Chapter 2. Preparing for migration to JBoss EAP 8.0 | Chapter 2. Preparing for migration to JBoss EAP 8.0

As a system administrator, you need to plan the migration to JBoss EAP 8.0. This upgrade is essential for improved performance, enhanced security, and increased stability of Java applications. JBoss EAP 8.0 provides backward compatibility for JBoss EAP 7 applications. However, if your application uses features that JBoss EAP 8.0 has deprecated or removed, you might need to modify your application code.

The JBoss EAP 8.0 release introduces several changes that might impact your application deployment. To ensure a successful migration, conduct research and planning before attempting to migrate your application. Before beginning the migration process, follow these initial steps:

Familiarize yourself with the features of Jakarta EE 10.
Review the features of JBoss EAP 8.0.
Review the JBoss EAP getting started material.
Ensure a seamless migration process by backing up your data and reviewing server state.
Streamline your installation process by migrating JBoss EAP with RPM installation.
Improve manageability and automation by migrating JBoss EAP as a service.

After becoming familiar with the feature changes, the development materials, and the tools that can assist your migration efforts, evaluate your applications and server configuration to determine the necessary changes for running them on JBoss EAP 8.0.

2.1. Review the Jakarta EE 10 features

Jakarta EE 10 introduces numerous enhancements that simplify the development and deployment of feature-rich applications in both private and public clouds. It incorporates new features and the latest standards such as HTML5, WebSocket, JSON, Batch, and Concurrency Utilities. Updates include Jakarta Persistence 3.1, Jakarta RESTful Web Services 3.1, Jakarta Servlet 6.0, Jakarta Expression Language 5.0, Jakarta Messaging 3.1, Jakarta Server Faces 4.0, Jakarta Enterprise Beans 4.0, Contexts and Dependency Injection 4.0, and Jakarta Bean Validation 3.0.

Additional resources

Jakarta EE Platform 10

2.2. Review the features of JBoss EAP 8.0

JBoss EAP 8.0 includes upgrades and improvements over previous releases. For the complete list of new features introduced in JBoss EAP 8.0, see New features and enhancements in the Release notes for Red Hat JBoss Enterprise Application Platform 8.0 on the Red Hat Customer Portal.

Before migrating your application to JBoss EAP 8.0, note that some features from previous releases may no longer be supported or may have been deprecated due to high maintenance costs, low community interest, or the availability of better alternatives. For a complete list of deprecated and unsupported features in JBoss EAP 8.0, see Unsupported, deprecated, and removed functionality in the Release notes for Red Hat JBoss Enterprise Application Platform 8.0 on the Red Hat Customer Portal.

2.3. Review the JBoss EAP Getting Started material

This section explains the key components of the JBoss EAP Getting Started material, providing a concise overview of essential information to help you start with JBoss EAP. Review the JBoss EAP Getting Started Guide for essential information on:

Downloading and installing JBoss EAP 8.0 to set up your environment effectively.
Downloading and installing JBoss Tools to improve your development environment.

Important
JBoss Tools is a community project and is not supported by Red Hat. Please reference the community website for assistance with setting up and running your instance of JBoss Tools.
To download JBoss Tools, see JBoss Tools Downloads.

Configuring Maven for your development environment and managing project dependencies.
Downloading and running the quick-start example applications that come with the product.

Additional resources

Developing applications using JBoss EAP

2.4. Back up your data and review server state

This section emphasizes the need to back up data, review the server state, and handle potential issues before migrating your application. By safeguarding deployments, managing open transactions, and assessing timer data, you can ensure a smooth migration. Consider the following potential issues before you start the migration:

The migration process might remove temporary folders. Make sure you back up any deployments within the data/content/ directory before migrating, and restore the data after completion to avoid server failure due to missing content.
Before migration, handle open transactions and delete the data/tx-object-store/ transaction directory.
Review the persistent timer data in data/timer-service-data before proceeding with the migration to determine its applicability post-upgrade. Before the upgrade, check the deployment-* files in that directory to identify which timers are still in use.
Make sure to back up the current server configuration and applications before you start the migration.

A minimal scripted sketch of these backup steps is shown at the end of this chapter.

2.5. Migrate JBoss EAP with RPM installation

The migration advice in this guide also applies to migrating RPM installations of JBoss EAP, but you might need to alter some steps, such as how you start JBoss EAP, to suit an RPM installation rather than an archive or jboss-eap-installation-manager installation.

Important
It is not supported to have more than one RPM-installed instance of JBoss EAP on a single Red Hat Enterprise Linux server. Therefore, it is recommended to migrate the JBoss EAP installation to a new machine when migrating to JBoss EAP 8.0.

Additional resources

Installing JBoss EAP by using the RPM installation method

2.6. Migrate JBoss EAP as a service

If you run JBoss EAP 7 as a service, review the updated configuration instructions for JBoss EAP 8.0 in the Red Hat JBoss Enterprise Application Platform Installation Methods.

2.7. Migrate a cluster

If you run a JBoss EAP cluster, follow the instructions in the Upgrading a cluster section in the JBoss EAP 7.4 Patching and Upgrading Guide.
| null | https://docs.redhat.com/en/documentation/red_hat_jboss_enterprise_application_platform/8.0/html/migration_guide/preparing-for-migration-to-jboss-eap-8_default
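The pre-migration backup steps described in section 2.4 can be scripted. The following is a minimal sketch for a standalone JBoss EAP 7 server only: the EAP_HOME and backup paths are placeholder assumptions, domain-mode installations keep equivalent data under the domain/ directory instead of standalone/, and you must confirm that the server is stopped and that no transactions are in flight before removing the transaction object store.

# Stop the server first, then define placeholder paths for this sketch
$ EAP_HOME=/opt/jboss-eap-7.4
$ BACKUP_DIR=/backups/eap7-pre-migration-$(date +%F)
$ mkdir -p "$BACKUP_DIR"

# Back up managed deployments, configuration, and persistent timer data
$ cp -a "$EAP_HOME/standalone/data/content" "$BACKUP_DIR/content"
$ cp -a "$EAP_HOME/standalone/configuration" "$BACKUP_DIR/configuration"
$ cp -a "$EAP_HOME/standalone/data/timer-service-data" "$BACKUP_DIR/timer-service-data"

# Only after confirming that all open transactions are complete:
$ rm -rf "$EAP_HOME/standalone/data/tx-object-store"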
Automating SAP HANA Scale-Out System Replication using the RHEL HA Add-On | Automating SAP HANA Scale-Out System Replication using the RHEL HA Add-On Red Hat Enterprise Linux for SAP Solutions 9 Red Hat Customer Content Services | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux_for_sap_solutions/9/html/automating_sap_hana_scale-out_system_replication_using_the_rhel_ha_add-on/index |